What’s behind America’s Conspiracy Theories?

By Peter McKenzie-Brown[1]

Editor’s Note: Our esteemed contributor Peter McKenzie-Brown was born in the UK, was raised in the United States, and is now a resident of Calgary, Canada.

I am old enough to remember the assassination of John F. Kennedy on November 22, 1963 – one of only half a dozen dates that are clear in my memory. Glued to the television two days later, I saw a nightclub operator named Jack Ruby murder Kennedy assassin Lee Harvey Oswald on live TV – the first murder ever broadcast live on American television. It was a bizarre reality for those of us who lived it.

Kennedy’s assassination soon became the subject of widespread debate and spawned numerous conspiracy theories and alternative scenarios. Polls conducted from 1966 to 2004 found that as many as 80 percent of Americans suspected a plot or cover-up. Welcome to the strange world of zombie ideas.

Nobel laureate Paul Krugman defines a zombie idea as “a proposition that has been thoroughly refuted by analysis and evidence and should be dead — but won’t stay dead because it serves a political purpose, appeals to prejudices, or both.” Science notwithstanding, there are those who believe in a flat Earth, a hollow Earth, a geocentric universe or perhaps all three. An entry in the online Skeptic’s Dictionary offers other examples.

Hate speech is a special case in this range of thinking. According to the Cambridge Dictionary, it includes “communications of animosity or disparagement of an individual or a group” because of “a group characteristic such as race, colour, national origin, sex, disability, religion, or sexual orientation.” Most liberal democracies – for example, Canada, the UK, France, Germany, the Netherlands, South Africa, Australia, and India – ban hate speech. In many ways, such countries enjoy greater freedom when you weigh the negative liberty to express harmful thoughts against the positive liberty a society enjoys if it disallows the intimidation of minorities.

Some people argue that the purpose of laws that ban hate speech is merely to avoid offending prudes. I cannot think of a single democracy, however, that excises comment from the public square merely because it provokes offense. Rather, hate speech has been so widely proclaimed unlawful because it attacks the dignity of a group.

Among the world’s great democracies, only in the United States is hate speech legal. America’s Supreme Court has repeatedly ruled that hate speech is constitutionally protected by the First Amendment right to free speech. There have been a few exceptions. In 1952, for example, the Court upheld an Illinois law making it illegal to publish or exhibit any writing or picture portraying the “depravity, criminality, unchastity, or lack of virtue of a class of citizens of any race, color, creed or religion.” The case provided a legal argument against hate speech by making it possible to sue some offenders for libel. In the world of social media especially, such a suit would be difficult to apply.

Despite the efforts of Facebook and other well-intentioned sites, hate speech in America now seems to be on the boil. As evidence, Humboldt State University compiled an online visual chart of homophobic, racist, and otherwise prejudiced tweets sent out during an 11-month period. If you are American, it will not make you proud.

Conspiracies: A conspiracy theory attempts to explain events or situations by invoking the secret and unseen actions – often politically motivated – of sinister and powerful actors. The term has a pejorative connotation, implying that the appeal to a conspiracy is based on prejudice or insufficient evidence. Conspiracy theories resist falsification and are reinforced by circular reasoning: both evidence against the conspiracy and an absence of evidence for it are reinterpreted as evidence of its truth, whereby the conspiracy becomes a matter of faith rather than something that can be proved or disproved.

Conspiracy theories about the moon landings followed the conspiracy theories about the assassination of JFK. There were six crewed U.S. landings between 1969 and 1972 – unless you believe the theorists who insist the landings were hoaxes. The gist of their argument is that the United States lacked the technology to transport humans to the moon and back; they claim that NASA faked the landings in order to make people believe the U.S. had fulfilled President Kennedy’s promise to land a man on the moon before 1970.

What is the evidence? Well, on the lunar landing videos you cannot see stars in the sky. NASA says that’s because the moon’s surface and the astronauts’ suits were so reflective that it was too bright for the camera to pick up the comparatively faint stars. Also, the American flag appears to wave as the astronauts plant it in the lunar soil. With no air in space, how is that possible? NASA says it happened because the astronauts, wanting the flag’s pole to remain upright, moved it back and forth while planting it in the lunar soil. The rotation of the pole caused the flag to move back and forth as if rippling in a non-existent breeze.

Conspiracy theories typically seek to explain harmful or tragic events by ascribing them to the actions of small, powerful, and secretive groups. A classic example is the one I began this commentary with, the assassination of John F. Kennedy. Such explanations reject the accepted narrative surrounding those events; indeed, to many theorists, the official version is simply further proof of the conspiracy.

Conspiracy theories increase in prevalence in periods of widespread anxiety, uncertainty, or hardship – for example, during wars and economic depressions, and in the aftermath of natural disasters like tsunamis, earthquakes, and pandemics. Witness the profusion of conspiracy theories that emerged in the wake of the September 11, 2001 attacks on the United States. Perhaps two thousand volumes on the JFK assassination have been published, many of them purveying conspiracy ideas. Such notions have been spread through countless other media as well.

Perhaps conspiratorial thinking is driven by a strong human desire to make sense of social forces that are self-relevant, important, and threatening. The content of conspiracy theories is emotionally charged, and their alleged discovery can be gratifying to those who hold the associated beliefs. Factual support for conspiracy theories is typically weak, and they are usually resistant to falsification. The survivability of conspiracy theories may be aided by psychological biases[2] and by distrust of official sources. Such distrust did not develop in a vacuum. Starting in 1932 and continuing for 40 years, the U.S. Public Health Service, working with the Tuskegee Institute, studied the effects of syphilis on 399 African American men. The researchers conducting the Tuskegee syphilis study withheld treatment and allowed more than a hundred men to die, even after penicillin became the standard cure in 1947.

At the risk of sounding like a conspiracy theorist myself, that does sound like government conspiring against its own citizens.

An extraordinary commentary on these matters can be found in Kurt Andersen’s best-selling history Fantasyland: How America Went Haywire. His take on the past five American centuries involves a series of skillful deconstructions of myths and fantasies that have evolved since the country’s foundation. He dissects such matters as the Salem witch hunts and Scientology. As the story proceeds, he presents a picture of a country in such steep decline that the founding fathers would have wept into their beards.

“By my reckoning,” he writes in his introduction, reality-based people in the US “are a minority – maybe a third of us but almost certainly fewer than half.” Only a third, he claims, “believe with some certainty that CO2 emissions from cars and factories are the main cause of Earth’s warming[3]. Only a third are sure the tale of creation in Genesis is not a literal, factual account. Only a third strongly disbelieve in telepathy and ghosts.”

“A third believe that our earliest ancestors were humans just like humans today,” he says. That percentage also believe that government has, in league with the pharmaceutical industry, hidden evidence of “natural” cancer cures, and that extraterrestrials have recently visited (or now reside on) Earth.

And the beat goes on. Two-thirds of Americans believe that “angels and demons are active in the world,” he writes. At least half are certain Heaven exists, “ruled over by a personal God” – not an abstract force or universal spirit “but a guy.” More than a third of Americans believe global warming is “a hoax perpetrated by a conspiracy of scientists, government, and journalists.”

“A quarter believe vaccines cause autism,” he says. Twenty-five percent believe in witches. No more than a fifth believe the Bible consists mainly of legends and fables, he says – about the same number who believe that “the media or the government adds secret mind-controlling technology to television broadcast signals” and that U.S. officials “were complicit in the 9/11 attacks.”

These myths persist in defiance of science, which has advanced by leaps and bounds over the centuries of America’s settlement and growth. They will not go away, however. What can best be described as a national paranoia within “the land of the free and the home of the brave” is a loss to the country’s dignity, and to the integrity of the democratic alliances that have played such important roles in the world since the end of the Second World War.

[2] See Gorman SE, Gorman JM: Denying to the Grave: Why We Ignore the Facts That Will Save Us. New York: Oxford University Press, 2016

[3] Although the number of people who agree that human activities are responsible for the Earth’s warming may be increasing.

The Vaccine-Preventable Diseases

Critica Continues the Series About What We Are Vaccinating Against

Part Two: DTaP

With so much news and comment about the hoped-for vaccine against the virus that causes Covid-19, we continue to believe that some general information about vaccines might be helpful. Last month we described the MMR vaccine and the diseases it prevents (measles, mumps, and rubella). This month we tackle another trivalent vaccine – a vaccine that works against three different diseases – the DTaP vaccine. The name stands for diphtheria, tetanus, and acellular pertussis (the version for people older than seven years is called Tdap and is sometimes referred to as the booster version).

         The DTaP vaccine (which some may remember as the DPT vaccine) is one of the most important standard vaccinations given to children, adolescents and adults. It has saved the lives of countless people around the world because each of the three illnesses the DTaP vaccine prevents is fully capable of causing death.

The CDC recommends that the DTaP vaccine be given to babies in four doses, at two, four, six, and 15-18 months of age. A fifth dose is recommended between four and six years, and an adolescent dose at 11-12 years. For children who did not receive the vaccine as infants, there is a schedule of recommended catch-up vaccinations in childhood and adolescence. CDC further recommends Tdap as follows:

Pregnant women should get a dose of Tdap during every pregnancy, to protect the newborn from pertussis. Infants are most at risk for severe, life-threatening complications from pertussis.

Adults who have never received Tdap should get a dose of Tdap.

Also, adults should receive a booster dose every 10 years, or earlier in the case of a severe and dirty wound or burn. Booster doses can be either Tdap or Td (a different vaccine that protects against tetanus and diphtheria but not pertussis). However, see below for some other ideas about this.

Like all vaccines, DTaP, Tdap, and Td can cause pain and soreness at the injection site. More serious reactions range from about 1 in 10,000 children for the pertussis portion of the vaccine to about one in a million for the tetanus portion. Death associated with DTaP is extremely rare and is generally believed by experts to be coincidental rather than caused by the vaccine.

We will now review each of the three illnesses this vaccination prevents.


Tetanus is sometimes referred to as “lockjaw” because it usually involves the tightening of the jaw and mouth due to severe, involuntary muscle contraction. Muscle spasms then spread through the body and can cause fractures and tendon injuries. People with tetanus can develop pneumonia, blood clots in the lungs, nerve injuries, and several other complications, and can lapse into a coma. Most people survive tetanus, but recovery can take several months, and the mortality rate from severe tetanus is about 50%.

This man has the typical muscle contractions that come with tetanus, which is why it is sometimes referred to as “lockjaw” (source: CDC).

Tetanus usually begins with a wound, often an innocuous one like a simple cut or puncture. Most times, the individual does not even feel the need to seek medical treatment. But the bacterium that causes tetanus, Clostridium tetani, lurks everywhere and can enter the wound, take hold, and release two powerful toxins that cause the disease. These bacteria are an example of what are called anaerobic bacteria, meaning that they live without oxygen and therefore can survive underneath the skin or deep in tissues.

         In the U.S. today, most cases of tetanus and most deaths from tetanus occur in people who have never been vaccinated. The vaccine has dramatically decreased the incidence of tetanus in the developed world, but it still kills many people worldwide in places where vaccination rates are low.

         Because the bacteria that cause tetanus do their damage via the toxins they secrete, the tetanus vaccine is designed to stimulate immunity against the toxin itself. This involves taking the toxin and chemically inactivating it to a form that cannot cause illness. An inactivated toxin is called a “toxoid.” When the tetanus toxoid is injected, the immune system recognizes it as a foreign invader and develops immune memory so that if the vaccinated person is later actually infected with the tetanus bacteria, antibodies will immediately be produced to destroy the toxin and prevent disease.

This might be a good opportunity to reiterate that although children now receive many vaccines according to the recommended CDC schedule, the vaccines do not “overwhelm” the immune system and render it unable to fight other diseases. The immune system operates by developing cells and antibodies that are highly specific to each individual disease-causing agent. The antibodies against tetanus toxin involve a minuscule portion of the human immune response capacity and have no implications for the ability to fight other diseases.


Diphtheria is one of those diseases that is so rare today because of vaccination that some people may think it no longer exists. Unfortunately, the bacterium that causes diphtheria, Corynebacterium diphtheriae, does still exist, and without vaccination, diphtheria cases will return. Jack remembers an unvaccinated child with diphtheria in the pediatric intensive care unit when he was a pediatric intern in the 1970s. The child developed a dreaded heart complication and died.

Diphtheria starts with what seems like a fairly ordinary upper respiratory infection but can quickly progress to difficulty swallowing and an obstruction that causes difficulty breathing. As with tetanus, these complications are caused by a toxin released by the bacteria (the toxin gene is actually carried by a virus that infects the bacteria). If that toxin enters the blood circulation, it can cause a type of heart muscle damage called myocarditis, as well as nerve damage. About 5-10% of people with diphtheria die, a figure that rises to 20% among children under five and adults over 40.

This child is very ill with diphtheria, with typical sore throat and swollen glands that can progress to difficulty swallowing and breathing and heart muscle and nerve damage. Vaccination has almost entirely eliminated diphtheria in the U.S. (source: CDC).

Please remember those statistics. One in five children under five will die if they get diphtheria; the vaccine extremely rarely, if ever, causes death, and the serious complications that occur in about 1 in 10,000 children who get the whole DTaP vaccination are mostly temporary.
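To make that comparison concrete, here is a back-of-the-envelope calculation using only the rounded figures quoted above (these are the article’s illustrative rates, not precise epidemiological estimates):

```python
# Compare the risks cited above: diphtheria kills about 1 in 5
# children under five who contract it, while serious (and mostly
# temporary) DTaP reactions occur in roughly 1 in 10,000 children.

disease_mortality = 1 / 5               # death risk for an infected child under five
vaccine_serious_reaction = 1 / 10_000   # rate of serious, mostly temporary reactions

ratio = disease_mortality / vaccine_serious_reaction
print(f"The disease's mortality rate is {ratio:,.0f} times "
      f"the vaccine's serious-reaction rate")
# The disease's mortality rate is 2,000 times the vaccine's serious-reaction rate
```

And even that understates the case, since the numerator is deaths while the denominator counts mostly temporary complications.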

Again like tetanus, the vaccine against diphtheria is aimed at the toxin that causes the illness. A chemically inactivated version of the toxin, called the diphtheria toxoid, is injected; this stimulates memory in the immune system, preparing it to fight an actual infection if one ever occurs. Complications from the diphtheria toxoid itself are mild, such as pain at the injection site and occasionally low-grade fever.

Pertussis (Whooping Cough)

Although the incidence of pertussis, or whooping cough, has likewise dramatically decreased since vaccine introduction, it is still a cause of infant death around the world, and cases have been increasing steadily in the U.S. since the 1980s. Half of babies who develop pertussis require hospital care, and 1-3% of those under three months die. It is spread in much the same way as the virus that causes COVID-19 – by droplets from coughing and sneezing. But unlike COVID-19, a person exposed to pertussis bacteria is very likely (80-90%) to get the illness. Also unlike COVID-19, pertussis is caused by a bacterium, Bordetella pertussis, not a virus.

There are three stages of pertussis infection, each lasting up to six weeks. The first sign that an infant, child, or adult has contracted pertussis is usually a seemingly ordinary upper respiratory infection with stuffy, runny nose and sneezing. This is the catarrhal phase. Next comes the much more serious paroxysmal phase, in which a severe cough lasting several minutes develops. In babies over six months, the cough is often followed by a loud whooping sound. During these episodes of coughing, a baby can be unable to breathe and become exhausted. Broken ribs and burst blood vessels in the eyes have occurred because of the paroxysmal coughing. Pneumonia can develop. This is the phase in which deaths can occur. It is clearly not an ordinary cold or even the flu, but a life-threatening event that is painful to watch.

This child with pertussis (whooping cough) has broken blood vessels in his eyes and bruising on his face due to the severe coughing that is part of the illness. In babies, this coughing can lead to periods of lack of oxygen, putting the child’s life at risk (source: CDC).

         For survivors of the paroxysmal stage there is the convalescent stage, with cough lasting for weeks.

Once again, the culprit in pertussis is not the bacteria themselves but five toxins they produce. The vaccine, therefore, is aimed at producing immunity against the toxins. The newest version of the pertussis vaccine, introduced in 1996, is called the acellular vaccine. The older version contained killed whole bacteria and caused a significant number of adverse side effects, including seizures with high fever in one in 1,750 doses. The newer version does not contain whole killed bacteria but rather only some inactivated bacterial proteins, and it has a much lower rate of complications (about 1 in 10,000 serious complications, including high fever, seizures, inconsolable crying, and a syndrome in which the child becomes listless and lethargic with poor muscle tone for several hours). But immunity from the acellular vaccine may wane over time. Just as it is currently recommended that adults get tetanus and diphtheria boosters (the Td vaccine) every 10 years, some experts now recommend that pertussis vaccine be added to those boosters in the form of the Tdap vaccine.

Our hope in bringing this series to you is that it will remind all of us that vaccine-preventable diseases are rarely seen today only because the vaccines against them are so effective. The diseases are by no means eradicated, and unless most members of a community are vaccinated, epidemics of them will certainly return. You will hear people tell stories about horrid complications from vaccines and make claims that they harm the immune system and can even be lethal. We do not deny that in extremely rare cases vaccines may cause serious complications, and we have been careful to outline them here.

For every story you hear about a child who has allegedly been harmed by a vaccination (and we say allegedly because many of them turn out to be coincidences), we could tell you a million stories of children who have had no serious complications and who, because of vaccines, can live lives free of some of the most horrible diseases known.

Whither the CDC

What will happen to our trust in science as a result of crisis mismanagement?

In times of crisis and stress, we can find some emotional relief by clinging to long-trusted anchors. Sometimes these are individuals in our lives whom we count on for wisdom and good advice. An emotional anchor can also be an institution to which we are attached, like a religious congregation or advocacy group. It can even be a more remote agency that we have always trusted to come through for us when an emergency looms.

In the case of the novel coronavirus pandemic, many people look to the U.S. Centers for Disease Control and Prevention (CDC) for expert guidance and action. After all, ever since its founding in the 1940s to deal with malaria in the southeastern part of the country, the CDC has always seemed to rise to the occasion every time an infectious disease threatened us. CDC was there to help eliminate malaria, yellow fever, and typhoid from the U.S., get control over tuberculosis, prevent swine flu outbreaks in the 1970s, and figure out what was behind Legionnaires’ disease. In the 1980s, when the first mysterious cases of a disease that wiped out part of the human immune system arose, CDC’s work to identify the ways the AIDS virus (HIV) spread was truly impressive and lifesaving.

That is only a partial list of the many accomplishments that made CDC the world’s leading public health agency, the agency to which Americans and people all over the world turned to keep us safe from emerging pathogens. We felt certain that the CDC operated primarily on the basis of science, with a minimal political agenda.

Blunders and Missteps

That is what makes the recent blunders and missteps by CDC so threatening. An agency with veritable “parent figure” status is letting us down. According to a 2018 article in The Atlantic, the trouble began well before COVID-19. That year, the Trump-appointed CDC director, Brenda Fitzgerald, turned out to have “eyebrow-raising investments in companies directly related to” her work at CDC, including in four of the five biggest tobacco companies in the world. The Atlantic article pointed out that “One of the centers’ chief public-health objectives is to end smoking. In fact, the only real public-health position on tobacco usage is that it should be eliminated entirely … the CDC’s chief holding even a penny of tobacco stock, let alone a portfolio that includes almost all the major companies, runs counter to that goal.”

The article then goes on to recount threats of cuts to the CDC budget that have rendered the agency “defensive” and compromised its ability to meet international threats like Ebola and Zika. Every year, in fact, the Trump administration has proposed cuts in the CDC budget. These never actually occur because Congress pays no attention, but it cannot help CDC morale to know that the chief executive wants the agency to shrink.

Perhaps this is at least partially responsible for the problems CDC has had in responding to the COVID-19 pandemic. It has been noticed, for example, that its current director and other leaders have been absent from the forefront of press conferences about COVID-19 held by the administration. Isn’t the CDC the one institution above all others we want to hear from when an epidemic sweeps the country? 

Right from the start of the pandemic, instead of taking the lead and using its powerful blend of scientific talent and cutting-edge laboratories, CDC seems to have dropped the ball. It is reported to have delayed testing the first individual in California who developed non-travel-related COVID-19 in February. This delayed recognition that community spread of the virus was already occurring in the U.S. Then, CDC sent out test kits for the virus that turned out to be flawed and yielded spurious results. According to The Washington Post, the problem arose because of contamination of the test kits in the CDC laboratory manufacturing them. At first, CDC refused to acknowledge the problem and it took a recommendation from the Food and Drug Administration (FDA) to convince CDC to stop the process and correct the problem. Waiting for an accurate viral test further delayed our getting on top of the pandemic.

Now that accurate tests are available for the virus that causes COVID-19 (called SARS-CoV-2), we should be able to estimate the rate of infection in the U.S. by knowing how many people have tested positive (the numerator) and how many people have been tested (the denominator). Once again, however, CDC has blurred the issue by making an obvious mistake: conflating two tests that give different information about SARS-CoV-2 infection. The test for current viral infection is the one usually done by swabbing the nose with a special six-inch probe. The sample is then sent to a laboratory, where a process called PCR is used to detect the presence of the virus. If it is positive, the person tested is currently infected with SARS-CoV-2. This test seems to be relatively accurate at this point.

The other test detects antibodies to the virus in people who have previously been infected. At the time of writing this commentary, the antibody, or serology, test is still of questionable accuracy, and it is not entirely clear what the presence of antibodies means. To be useful in preventing future infection in someone who has already had COVID-19, antibodies must be able to neutralize the virus and must do so over a relatively long period, for example two to three years. Even when the accuracy of the serology (antibody) test is optimized, and even if the detected antibodies do confer long-term protection against reinfection, the test gives very different information from the direct viral test using the swab. One tells you if you are infected now; the other, if you were previously infected. Conflating the two tests, as CDC did, inflates the number of tests for current infection that are actually being done and produces misleading information. Exactly how the CDC could have made such a basic error is unclear, as is the reason for the slow pace at which it was corrected.
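The arithmetic behind that distortion is simple. The sketch below uses purely hypothetical counts (invented for illustration, not actual CDC figures) to show how lumping serology tests in with viral tests changes both the apparent testing volume and the apparent positivity rate:

```python
# Hypothetical illustration of conflating two different tests.
# Viral (PCR) tests detect current infection; serology tests
# detect past infection. The counts below are invented.

viral_tests, viral_positives = 100_000, 9_000       # current-infection testing
serology_tests, serology_positives = 50_000, 2_000  # past-infection testing

true_positivity = viral_positives / viral_tests
conflated_positivity = (viral_positives + serology_positives) / (
    viral_tests + serology_tests)

print(f"Tests for current infection actually done: {viral_tests:,}")
print(f"Tests reported when conflated:             {viral_tests + serology_tests:,}")
print(f"True viral-test positivity:      {true_positivity:.1%}")
print(f"Positivity when tests conflated: {conflated_positivity:.1%}")
```

With these invented numbers, the conflated tally overstates current-infection testing by half and shifts the positivity rate, illustrating why mixing the two tests produces misleading information.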

The Invasion of Politics

During the last week of May, CDC issued guidelines for reopening some businesses and other gathering places that are strict but well informed and reasonable. The White House, however, apparently disagreed with the CDC guidance on reopening houses of worship. Recommendations to limit choir activities were quietly removed from the CDC guidelines, and references to preserving First Amendment rights were put in. Surely singing in a choir has got to be one of the ripest settings for spreading viral infections, with people crowded together forcefully emitting breaths into the ambient air. There doesn’t seem to be any First Amendment issue with recommending that, for now, places of worship refrain from that part of their service – after all, most had been worshiping via teleconference, without anyone gathering, for several months by that point. Here we see again how politics seems to invade the CDC’s work and overwhelm its dedication to science.

One of us (Jack) was funded by the National Institutes of Health to conduct research on the AIDS virus shortly after that epidemic began in the 1980s. He remembers how he and his colleagues were dazzled by the speed with which the CDC figured out how HIV is transmitted. Years before the first antiretroviral drug to control HIV infection was introduced, CDC epidemiologists saved countless lives by notifying the public that the exchange of bodily fluids, needle sharing during intravenous drug use, and transfusions of infected blood were the modes of HIV transmission.

We need a strong CDC that is devoted to science to resume its lead role in controlling emerging infectious diseases. Without that, one major source of comfort and reassurance during a crisis is severely compromised. We cannot afford to be bereft of our emotional anchors at a time like this.

Woes at the Pharmacy

The Public is Unaware of the Problems that Plague Filling a Prescription

This commentary was suggested to us by Critica advisor Carrie Corboy. Carrie is a pharmacist and Senior Director at Janssen Research and Development, a division of Johnson and Johnson. She focuses on medication adherence, policy, and development and helped us prepare this commentary.

         Once upon a time getting a prescription medication was fairly easy. Your doctor or other qualified healthcare provider wrote you a prescription on a prescription pad, you took it to your local pharmacy, and later picked up your medication. Your pharmacist was usually relaxed and had time to answer questions you might have about how to take the medication and any adverse side effects. Maybe it never really was that easy; it certainly isn’t today.

         We have written before about the problems we have getting facts about our healthcare system in order to make rational decisions to reform it. Here, we address four areas that concern getting medication from a pharmacy: electronic prescribing, prior approval, overworked pharmacists, and prescription benefit managers.

Electronic Prescribing is Mostly a Benefit

Doctors in most cases no longer “write” prescriptions on those little prescription pads but send them directly to the pharmacy via an electronic prescribing system. Electronic prescribing, or e-prescribing, has many advantages over written prescriptions. Foremost among them is increased safety: pharmacists no longer have to try to interpret physicians’ illegible handwriting, which cuts down on errors. E-prescribing also reduces the number of phone calls needed between pharmacist and prescriber, thus saving valuable professional time, and allows for efficient recording of each patient’s medication history (although this may only apply within the pharmacy chain where a prescription is filled). It also reduces the problem of stolen prescription pads and forged prescriptions.

Electronic prescribing has many advantages over written prescriptions, although there are still a few problems (image: Shutterstock). 

         But there are also problems with e-prescribing. Prescribers may feel that the available fields on the e-prescribing page do not adequately convey exactly what they want their patients to take and how it is to be taken. To overcome this, the prescriber can write comments in a notes section, but these not infrequently contradict the instructions in the drop-down menu field. Resolving the problem takes up pharmacist time. Electronic prescribing also makes it easy for chain pharmacies to message doctors repeatedly to refill prescriptions; sometimes this is helpful but, as we have been told, other times it involves medications the doctor and patient have decided to discontinue, and the repeated reminders clog up the physician’s inbox and degrade the physician-pharmacist relationship. 

Another problem we have heard is that e-prescribing makes it easier for doctors to refill prescriptions multiple times without seeing the patient in person; it is generally recommended that patients be reevaluated at some reasonable intervals while taking medications for prolonged periods of time. Finally, electronic prescribing software can stop working because of problems with the e-prescriber vendor’s computer system; doctors and patients must wait while those computer problems get fixed.

         Overall, the benefits of e-prescribing seem to clearly outweigh the downsides and we certainly do not advocate a return to written prescriptions. But now that big businesses have taken over the entire prescribing industry, including e-prescribing vendors and chain pharmacies, there is clearly the need for a number of improvements, some of which may have to be legislated.

The Bane of Prior Authorization

         A more pressing problem arises when that prescription hits the pharmacy and a red flag goes up: your insurance company won’t cover the medication unless the doctor who prescribed it gets in touch with the insurer and justifies the prescription. This process is called “prior authorization” (also known as prior approval and pre-certification).  Perhaps it was once a good idea. For example, insurance companies are absolutely right to balk at paying for a brand name medication when a cheaper generic version is available. Asking the doctor to justify why he thinks a brand name drug is necessary is reasonable under those circumstances. In fact, some states have laws that require the dispensing of the equivalent generic product, when one is available,  unless the doctor specifies on the prescription “dispense as written” or “brand medically necessary” or the patient demands the brand name medicine. 

         But the prior authorization procedure has clearly gotten out of hand. Although health insurance companies claim it is a method to improve patient outcomes by ensuring that only safe and necessary medications are prescribed, physicians believe that prior authorization is merely a tool health insurers use to lower their costs. There has been a steady increase in prior authorization requests, placing an ever-increasing burden on physicians to spend lengthy amounts of time on the phone with insurance company employees. A survey by the American Medical Association (AMA) found that medical practices spend an average of two business days per physician on prior authorization requests.

Prior authorization requests are associated with delaying care and harming patients. Prescribers are also frustrated, according to the AMA survey, because many of the drugs for which prior authorization is required are “neither new nor costly.” One physician we know, for example, was asked to do a prior authorization for an antidepressant that has been available as a cheap generic for decades. She was given a list of medications she must give the patient first before the requested antidepressant; none of the drugs on the list were antidepressants and some were drugs for cancer and hypertension.

As another example, let’s take someone suffering with depression for whom a psychiatrist has prescribed the antidepressant bupropion. One insurer may cover the cost of generic bupropion, another might insist that only the brand name drug, Wellbutrin, will be covered, while a third may insist that a totally different antidepressant be tried first. Since there is little evidence of much difference among antidepressants in effectiveness, and since what matters more in antidepressant selection is usually the patient’s other circumstances (other diseases and medications that might make one antidepressant better than another), this prior authorization is unlikely to have anything to do with what is best for the patient. Nor is it clear that it is related to cost, since three different insurers seem to have a different price for the same drug.

Worst of all, neither prescriber nor patient in this case can predict which version applies to the patient’s own specific insurer, something that will likely not be revealed until either the physician gets a note on her electronic prescribing application that prior authorization is needed or the patient shows up at the pharmacy and is told the medication cannot be filled. This process clearly needs increased transparency and possible regulatory relief. It should only be used when there is a legitimate potential for improving patient care or lowering costs without harming care. Right now, it seems a tool perversely designed to harass healthcare professionals and subject patients to needless delays for their medicines.

The Pharmacists’ Plight

         Let’s say your doctor has successfully e-prescribed your medication and succeeded in a prior authorization process to get your insurer to cover its cost (minus co-pays, co-insurance, or what’s remaining of your deductible, of course). If you’ve decided not to have your prescription delivered to your home, you are now ready to go to the drugstore to pick it up. Your local pharmacy is likely to be part of a huge national chain, like CVS, Rite Aid, or Walgreens.[1] As you approach the pharmacy section of the larger retail store, you are likely to hear announcements like “one pharmacy call” repeating over and over, to hear the phone ringing, and to see cars waiting at the drive-up window. Behind the pharmacy counter you will still find someone who has undergone rigorous training to become a pharmacist (in fact, pharmacists hold a doctorate in pharmacy) and who is an expert on the risks and benefits of a wide range of drugs. But that pharmacist is also working under tremendous corporate pressure.

According to a recent New York Times report, pharmacists said “they struggled to keep up with an increasing number of tasks—filling prescriptions, giving flu shots, answering phones and tending the drive-through, to name a few—while racing to meet corporate performance metrics they characterized as excessive and unsafe.” These corporate metrics, like answering the phone within three rings, are generally unrelated to patient care quality and create an environment of constant multitasking and interruption. This has led to concerns that pharmacists will make more errors filling prescriptions. Writing on behalf of the Pharmacist Moms Group, pharmacist Suzanne Soliman asked:

…chain pharmacies to publish all of their metrics for calculating pharmacist and technician hours and ultimately error rates. We also encourage pharmacies to publish how many prescriptions are filled each month and how much staff they have so that patients can make an informed decision as to which pharmacies provide adequate staffing to suit their needs.

Pharmacists are highly trained health professionals, but because they now work for large chain pharmacies the demands on them have become severe (image: Shutterstock). 

         In addition to causing practical problems like increasing the prescription fill error rate and taking up valuable pharmacists’ time, corporate demands on pharmacists pose a moral dilemma for them. A recent ethical analysis of pharmacy practice argues that pharmacists work under ethically challenging circumstances, provoking a “moral crisis” in addition to the practical one. We believe that the former is as important as the latter in causing pharmacist distress. 

American corporations are, of course, permitted to keep their corporate secrets. We don’t require that General Motors tell the public what new car designs it is working on or that department stores reveal their numbers of employees. But if overworked pharmacists are in jeopardy of making errors and becoming burnt out, it becomes a public health issue that the public is entitled to fully understand. Soliman’s plea for more transparency from the chain pharmacies seems urgently needed both to protect the health and safety of pharmacists and of their patients. We may not be able to return all the way to the days of the friendly neighborhood pharmacy, but we have a right to demand that pharmacists have working conditions that do not jeopardize our safety.

PBMs Rule

         At this point, you hopefully have a bottle of the medication your doctor wants you to have in hand and you may even have been able to ask the pharmacist to clarify the instructions on the bottle and explain the drug’s adverse side effects to you. It is now time to pay for the medication, and you hold your breath wondering how much it will be. You’ve been taking this medicine for a long time and you know it is a generic version, but its price seems to change from month to month and from pharmacy to pharmacy. You wish that you could just look up the price of 30 pills of this medicine on the internet. It’s not like buying a car, after all. Shouldn’t somebody be able to just tell you how much it costs?

         In fact, how much drugs cost in the U.S. is a mystery to most of us and a large part of that stems from something called pharmacy benefit managers (PBMs). You may not have known that you have your very own PBM. If someone is covering the cost of your prescription medication, you probably do have a PBM. It is not there to help you get your medication. Rather, a PBM is a corporation that serves as a middle person between insurers and other prescription medication payers on the one hand and drug companies on the other. The insurers essentially hire a PBM to manage the pharmacy benefit portion of your health insurance or Medicare Part D plan. The less the PBM actually spends buying drugs from drug companies, the more money they get to keep. More than 90% of the $450 billion we spend annually on medication is processed by PBMs.

         PBMs are supposed to lower the cost of drugs by using their purchasing power to negotiate better prices from drug companies and by keeping lists called formularies of medications they will cover. Putting only the cheapest versions of drugs on the formulary is another way that PBMs try to hold down costs.  Ideally, these savings would be passed on to consumers.

         But PBMs operate largely in secret and it increasingly turns out that they are maximizing their own profits but not necessarily saving consumers money. One suspect PBM practice is to receive so-called rebates from drug companies. Rebates are funds returned by drug manufacturers to PBMs to make it more attractive for the PBM to list the drug company’s more expensive drugs on their formularies. The PBM shares a portion of the rebate with the health insurer and retains the rest. Ideally, this would create an incentive for the insurer to lower premiums, but it also creates a clear incentive for PBMs to put high cost drugs on the formularies and this translates into higher costs for patients. Interestingly, until very recently, pharmacists were forbidden from telling their patients that they could pay less for a prescription by paying for it outside of their insurance coverage. 

Rebates are big business: the amount of money rebated by pharmaceutical companies to PBMs increased from $39.7 billion in 2012 to $89.5 billion in 2016. The amount of rebate for each drug, however, is usually a corporate secret and can change annually as PBMs negotiate new prices with manufacturers. This means that consumers and the employers providing health insurance are largely kept in the dark about how much a drug actually costs and often cannot find out what they will pay until they are at the pharmacy counter with a credit card in hand.

Last, and possibly worst, the prices of medicines that are made available to the public—called the average wholesale price, or list price—are quite high because they are the starting point from which rebates are subtracted. However, for people with no medication insurance, this is exactly the price that they pay for medicines. Therefore, the most vulnerable people are charged the most. It thus becomes clear why many patients simply do not fill the prescriptions that they need to survive: they cannot afford them.
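To see how this plays out, consider a worked example with invented numbers (actual rebate percentages are confidential and vary by drug and by contract): the insured side of the system transacts at the rebate-adjusted price, while the uninsured patient pays the full list price.

```python
# Hypothetical illustration of rebate arithmetic. Every figure here is
# invented, since real rebate terms are corporate secrets.
list_price = 300.00               # average wholesale ("list") price per fill
rebate_share = 0.40               # assumed 40% rebate from manufacturer to PBM

rebate = list_price * rebate_share
net_cost = list_price - rebate    # what the PBM/insurer effectively pays
uninsured_pays = list_price       # with no insurance, no rebate ever applies

print(f"Rebate-adjusted (net) cost: ${net_cost:.2f}")
print(f"Uninsured patient pays:     ${uninsured_pays:.2f}")
```

On these assumed numbers, the uninsured patient pays $300 for a drug whose negotiated net cost is $180, which is exactly the pattern described above.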

         Physician Guy Culpepper wrote on LinkedIn recently that CVS Caremark, the PBM of the CVS Health corporation, was “only covering the Brand name version of some prescription drugs. So if I prescribe the generic, in an effort to save my patient money, CVS Caremark will refuse to cover it.” Even though the real price of the generic drug is less than its equivalent brand name version, in this case the PBM gets a bigger rebate for covering only the brand name version. That rebate, however, is not always passed on to the customer but instead can result in a higher price and more out-of-pocket cost. Culpepper called rebates “kickbacks” and went on to state that “PBMs will pretend these ‘rebates’ will mean lower costs for the payers. But if that were true, PBMs would support transparency to show us just how much money they save us.”

         If rebates save money, that is of course not obvious to consumers, who face ever-increasing prices for medications and higher out-of-pocket costs. An analysis of the effects of rebates on the cost of drugs for seniors covered by Medicare Part D showed that they increased both out-of-pocket costs and Medicare drug spending. Once again, the place to start fixing this problem is with greater transparency. Regulators should demand that rebates be publicly disclosed, if not eliminated altogether. Corporate secrets do not seem justified in managing our very expensive healthcare system. We desperately need to reduce healthcare costs so we can extend quality coverage to more people. Rebates turn out to be a very poor method of controlling drug costs and more likely to improve corporate profits instead.

         As we have noted before, Americans spend much more for healthcare than citizens of other high-income countries and get the poorest outcomes in terms of lifespan. Medication spending is one factor that is increasing our exorbitant healthcare cost outlay, and several of the elements we have discussed here also have the potential for jeopardizing patient well-being. Our concerns about electronic prescribing do not undermine its many benefits, but prior authorization, overworked pharmacists, and rebates to pharmacy benefit managers are all detrimental in their present form to the public’s health, create dissatisfied healthcare professionals, and function poorly to control costs.

In all of these cases, the minimum step to improvement would appear to be legislating increased transparency. We need to know how insurers and their PBMs determine what prescriptions require prior authorization and reform that process so it truly saves money without tying up doctors with endless phone calls and delaying treatment; we need chain pharmacies to publish their work flow metrics and make it clear that driving pharmacists to the point where mistakes are unavoidable is unacceptable; and we need drug companies and PBMs to reveal the complex tangle of drug rebates and, if necessary, regulate this process so that it truly serves to reduce costs instead of forcing people to accept more expensive drugs and higher out-of-pocket costs. Making medication unaffordable to some people is obviously not in the service of improving healthcare.

         A person who needs to fill a medication prescription is likely suffering from a medical problem that is at the very least distressing and uncomfortable and at worst painful and even life-threatening. Getting the prescription filled should not add to the patient’s woes, nor should it drive healthcare professionals to distraction or unnecessarily drive up healthcare costs. It is time we examine carefully the whole system by which we get our medication and intervene when necessary at every step of this complex and often mystifying process.

[1] Disclosure: Critica team member Jack Gorman owns a small amount of stock in the Rite Aid Corporation.

The Vaccine-Preventable Diseases

Critica Begins a New Series About What We Are Vaccinating Against

Part One: MMR

Although a large majority of Americans believe that vaccines are safe and effective, a sizable minority are “vaccine hesitant” and worry about the safety of childhood vaccinations. One reason often cited for vaccine hesitancy is that many of the diseases we vaccinate against are rare in the U.S. today, so new parents have never seen them. Without a mental image of what a vaccine-preventable disease looks like and does, people may have trouble appreciating the risk it poses. If the risk of the disease seems, mistakenly as we will see, negligible, then any alleged risk of the vaccine against it becomes more believable.

         Simply telling people that the risk of a serious reaction to a vaccine is one in several thousand does not have the weight of a single anecdote of a child who has suffered from one of the rare serious adverse side effects.  Similarly, just telling people that a vaccine-preventable disease causes “X” number of serious complications may not be convincing. But seeing children with these conditions is highly persuasive.

One member of the Critica team began his medical career in pediatrics and is old enough to have seen cases of measles, mumps, rubella, and diphtheria. For him, the idea that anyone would hesitate to vaccinate against these diseases is hard to grasp. Recently, he was describing how serious the complications of measles can be to a young parent who had never seen a case. She seemed surprised that measles can be, and still is, sometimes fatal. “I don’t think people my age have any clue what the diseases we vaccinate our children against can actually do,” she said.

         And so we decided to begin a series of articles describing some of the vaccine-preventable diseases, starting here with measles, mumps, and rubella, the targets of the MMR vaccine.  That’s the one, of course, that was once accused of causing autism, a thoroughly false claim that we won’t bother getting into here. Nor can we counter vivid anecdotes of serious vaccine reactions with vivid anecdotes about children with these illnesses because we have none: despite the recent and alarming increases in measles cases in the U.S., it is still uncommon and most pediatricians will go through an entire career without ever seeing a single case of measles, mumps, or rubella.

         We do hope, however, that these short descriptions of vaccine-preventable diseases will serve as a reminder that there is a very good reason we vaccinate against them: at the very least they make young children thoroughly miserable for a week or more, and at worst they cause severe and sometimes fatal complications.

Measles Still Kills

         Measles, also known as rubeola, is caused by a remarkable virus. Remember that all the cells in the human body have genes composed of a molecule called DNA and that DNA is transcribed to RNA, which then begins the process of engineering protein production. The measles virus, on the other hand, has no DNA. It contains only RNA and is therefore called an RNA-virus. The same thing is true of the virus that causes AIDS, HIV. That makes it easier for the virus to incorporate itself into an infected person’s own cells and redirect what they do.

         And just like HIV, the measles virus has the uncommon ability to suppress an infected person’s immune system for months and sometimes longer.

That makes people who get measles susceptible to getting infected with other viruses and bacteria, increasing the risk of serious complications.  

         The measles virus lives in an infected child’s nose and throat and is spread to others by coughing and sneezing. It can actually remain in the air of a room for as long as two hours after an infected person coughs. An infected child can spread measles to other people from four days before a rash appears to four days after it appears. Because measles often starts with symptoms that are similar to the common cold, like coughing, runny nose (coryza), fatigue and loss of appetite, a person can spread it before even knowing they have measles. And spreading it is easy; measles is one of the most contagious viruses known. As many as 90% of people who come in contact with someone with measles will get it.

         The rash is what gives away the diagnosis of measles. Shown in the illustration below, it often starts with little white spots, called Koplik spots, inside the cheeks. A day or two later the characteristic red, bumpy, blotchy, and somewhat itchy rash starts on the face and neck and spreads throughout the rest of the body, all the way to the feet. During this week to ten days of unfolding symptoms, a child with measles feels awful. While it is true that you can only get measles once, that one time is memorable for the patient, who spends the time with watery eyes, coughing, sneezing, feeling very weak, and itchy all over. High fever makes it impossible for the child to do much and kills his or her appetite. If you are a parent with a child who has measles, you are likely to say to yourself “even though I know this is going to go away, I wish my child didn’t have to suffer like this.”

A young, very unhappy looking girl with the rash typical of measles (source: Shutterstock).

         Unfortunately, measles doesn’t always “just go away.” While most children recover completely, others have serious complications. Before the measles vaccine became available, measles killed more than 2 million children every year. That number has been drastically reduced since the measles vaccine was introduced, but unvaccinated children can still succumb to measles. In 2018 there were about 140,000 measles deaths around the world, mostly in children under five years old. For every 1000 children who contract measles, two die. Most of those deaths occur when the measles virus infects the lungs, causing pneumonia, or the brain, causing encephalitis. Measles can be especially devastating for children with suppressed immune systems, including children with cancer. Infants who have not yet been vaccinated against measles are also particularly vulnerable to the serious complications of measles infection.

         Against all of this, it seems clear that vaccinating children to protect them from measles is a very good idea. The measles vaccine is a “live attenuated virus” vaccine, meaning that the naturally occurring measles virus is changed by growing it in cell cultures to a form that is incapable of causing measles but still able to stimulate the immune system to make antibodies against it. If at any point a vaccinated person is exposed to the real measles virus, the immune system will then immediately attack the virus and prevent the vaccinated person from getting sick.


The leading cause of death from measles is pneumonia. The measles virus can infect the lung, as seen in this illustration where the white patch in the right lung represents the infection (source: Shutterstock). 

As we mentioned earlier, we are not going to debunk here the completely erroneous claims that the MMR vaccine causes autism or any other damage to the brain or immune system. It is true that there are people who have allergies to all kinds of things and that includes the MMR vaccine. But just keep this in mind: fewer than one in one million people have a serious allergic reaction to MMR vaccine; two out of every 1000 children will die from measles. Consider those odds.
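To make that comparison concrete, here is a quick back-of-the-envelope calculation using the two rounded rates just cited (one in a million for a serious allergic reaction to the MMR vaccine, two in a thousand for death from measles); the figures come from the text above, and the arithmetic is only a sketch:

```python
# Comparing the two rounded risk figures cited in the text.
mmr_serious_allergy_rate = 1 / 1_000_000   # serious allergic reaction to MMR
measles_death_rate = 2 / 1_000             # deaths per case of measles

ratio = measles_death_rate / mmr_serious_allergy_rate
print(f"Death from measles is roughly {ratio:,.0f} times more likely "
      f"than a serious allergic reaction to the MMR vaccine.")
```

On these rounded figures, the risk from the disease exceeds the risk from the vaccine by a factor of about two thousand.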

Mumps: A Silly Name for a Serious Disease

         The word “mumps” sounds a bit silly and perhaps encourages people to think of the illness as something that is not very serious. That is, of course, a mistake. The mumps virus belongs to the same family of viruses as the measles virus (the paramyxoviruses) and, like measles, it first infects the nose and throat and is spread mainly by coughing and sneezing. Also like measles, mumps is highly contagious.

         Although rarely fatal, mumps infection can cause serious and sometimes permanent damage. The hallmark of the illness is swelling of the salivary glands, especially the parotid gland. This is very uncomfortable and often painful, but almost always resolves in about a week without complications. More serious is infection by the mumps virus of the testicles, a condition called orchitis. About 10 to 20% of postpubertal boys and men get orchitis with mumps. In about half of those cases, there is permanent damage to some of the internal structures of the testicles, resulting in permanent reductions in sperm count and impaired fertility.

This poor child shows the characteristic swelling of the parotid gland, one of the salivary glands, caused by the mumps virus (source: CDC). 

         Another complication of mumps that occurs in about 10% of infected people is meningitis, an infection of the membrane that lines the brain and spinal cord. Although uncommon, mumps can cause temporary hearing loss and sometimes permanent deafness. In fact, it was once one of the leading causes of deafness in children.

         Swollen testicles, impaired fertility, brain infection, and deafness, then, are all some of the potential complications of having mumps. Doesn’t sound as if most parents would want to take the risk of withholding the vaccine, does it?

Rubella: the German measles

         It is true that rubella (the “R” in MMR) is usually a very mild illness. So mild, in fact, that many people don’t realize they have it or think it’s just a routine viral infection. And that is the problem, because the serious threat from rubella infection (also known as “German measles”) occurs when it strikes a pregnant woman. Because rubella can be so subtle and because people who have it are contagious for a week before the typical rash appears, it is not possible to simply try and keep pregnant women away from someone with rubella. The damage can be done by someone who doesn’t know they have it or to a woman who doesn’t yet know she is pregnant.

         The rubella virus belongs to a different family of viruses than measles and mumps (the togaviruses), but like them it is an RNA virus. Children with rubella may have fever, runny nose, swollen glands, red eyes, and a pinkish rash. It’s also spread by coughing and sneezing. It goes away by itself in a few days and rarely causes any further complications or damage. So why bother with a vaccine?

         The answer is that rubella infection in a pregnant woman, especially if she is in the first trimester, causes the congenital rubella syndrome. The baby whose mother was infected is born with cataracts, deafness, and heart defects. There may also be problems with other organs, growth retardation, and intellectual disabilities. That is, of course, if a baby is born: rubella infection can also cause miscarriage and stillbirth.

This illustration shows viral particles, like the rubella virus, infecting an unborn fetus. The baby will likely have the congenital rubella syndrome (image: Shutterstock). 

         When the same member of the Critica team who started his medical training in pediatrics was in college in 1973 he got rubella. He didn’t feel that sick, but he had a rash all over his body and went to the student health service. He remembers the doctor there saying, “I’ve seen deaf babies because of German measles, you are going into isolation in the hospital.” One interesting thing about this case is that he was told as a child that a viral illness he had was German measles, but that could not have been possible because you cannot get it twice. Rubella infection was confirmed by a blood test when he got it in college and that means what he had as a child had to have been something else. That’s how easy it is to mistake rubella for a different viral infection and hence to unwittingly spread it to a pregnant woman.

         It is true that if you don’t vaccinate your child against rubella, someone else will suffer the consequences: the newborn of an unvaccinated pregnant woman whom that child infects. It would seem to take a very high degree of self-centeredness and cynicism to use that as the excuse for not administering a completely safe vaccine against rubella to every child.

A Reminder

         We will soon continue this series of reminders about what the diseases we no longer see much of, precisely because vaccines prevent them, can actually do. Next up will be the diseases prevented by the DPT vaccine (diphtheria, pertussis, and tetanus). Theirs will be more harrowing stories of children dying or being born with terrible congenital abnormalities. Next time someone tries to tell you that vaccines hurt children, you might show them this commentary and correct that impression: it is the diseases that vaccines prevent that hurt children.

A Statement from Critica

To the extent that we scientists humanize our work, science can take us a bit of the way in helping solve the very long-standing lack of racial justice in the United States. Epidemiology tells us, for instance, that black Americans are disproportionately affected by COVID-19, climate change, and air pollution. Economics research shows how vast income inequality among the races in the U.S. is. Research in neuroscience, psychology, and sociology helps us understand how racial bias forms and the strategies for overcoming it.

If our work at Critica teaches us anything, however, it is that science is never value neutral. Physical laws and chemical processes may be value neutral, but the human beings who study, experiment, and interpret them are never without impact on the work of science.  

         That is why it is imperative that Critica affirm its commitment to the deeper humanization of society in the face of an overwhelming moral wrong. Our Board and Officers are united in condemning what we are learning is widespread police misconduct, something we should have known long before the tragic death of George Floyd. We applaud those who have joined the international protest movement against racial injustice and add our voice to its principles and demands. We pledge to do more within our own organization and ourselves to overcome the crippling biases to which we are all subject, however aware or unaware of them we may be.

         It is of vital importance to us that whenever we are wrong, we acknowledge our mistake and fix it. That policy usually applies to a scientific belief; if we ever back something that is incorrect, we hope we will be ready and willing to correct our error when the evidence steers us in that direction. Overcoming racial prejudice is more difficult because no matter how well-meaning we are, no person is immune to it. Our Board and Officers are entirely composed of white people; we recognize the critical need to be diverse. We commit to understanding how we created a monoracial institution, developing greater awareness of ourselves as racial beings, and becoming a place that is authentically welcoming to people of color who are interested in Critica’s mission and vision.

This organization is founded upon the principles of a book called Denying to the Grave. It is no small irony that this title could describe so much of white America’s response to racism. The book offers the tools to help people use science in their best interest and for society’s greater good. Our skill set is designed to help us overcome specious, unconscious beliefs such as implicit racial bias. Our moral and scientific lives require that we use those tools now.

Is Information Overload Hurting Mental Health?

Endless access to information during COVID-19 might be making matters worse.

Not surprisingly, given the severe nature of the threat of COVID-19 and the economic downturn we are facing, experts are now predicting that the next “epidemic” will be an epidemic of mental illness and suicide. There are urgent calls to deal with the potential uptick in suicides. Nearly half (45%) of Americans say that their mental health has already been affected by the crisis. And while we don’t have a lot of information about the effects of something like the nationwide physical distancing measures we have seen, we do know from prior research that large-scale disasters and emergencies have led to high rates of mental health problems across the population.

It seems clear, and understandable, that this crisis may result in increased rates of mental health problems. But where exactly are these problems arising? Certainly the economy, isolation from physical distancing, and general anxiety about a highly contagious pathogen are factors. Practical barriers, such as kids being home from school and creating more stressful work situations, are also part of it. But there is another, less discussed factor: “information overload” and its effects on mental health. We can reasonably assume that people are experiencing information overload right now, given how much new information about COVID-19 is constantly coming out and how high-stakes most people consider that information to be.



So what are the mental health consequences of information overload? It turns out that they may be quite significant, going beyond a few moments of feeling overwhelmed. Information overload can lead to real anxiety, feelings of being overwhelmed and powerless, and mental fatigue. It can also lead to cognitive problems, such as difficulty making decisions or making hasty (often bad) ones. Hasty decision-making comes about because the brain is literally exhausted from trying to process all the information; this is why some researchers prefer the term “cognitive overload” to “information overload.” Processing large amounts of information is often done while multitasking (looking at social media while working, for example). Multitasking in particular has been shown to increase the release of the stress hormones cortisol and adrenaline, both associated with the “fight-or-flight” response.

What can be done to improve mental health in the midst of all this information (or cognitive) overload? We can expect to keep seeing a high volume of new information daily, so it’s not realistic to think we can stem the flow. However, there are some simple ways individuals can limit their exposure to the constant deluge of new information.

  • Schedule times to look at the news: No matter where you get your news from, it’s not a good idea to have a constant stream of it available throughout the day. This approach is most likely to lead to a lot of multitasking, which can increase your anxiety levels considerably. Schedule a time of day and a certain amount of time to look at the news. Set a timer to hold yourself accountable and ensure that you don’t get carried away. Try to choose a time of day that isn’t inherently anxiety-provoking for you for some other reason. If you’re feeling especially anxious on any given day, skip it and don’t look at the news at all. 
  • Turn off notifications on your phone: A lot of people have “push” notifications on their phones that alert them to new headlines. These are almost always a bad idea. Push services are especially associated with information overload because they deliver information when we’re not even looking, causing us to multitask, which also increases anxiety. Stay focused on what you’re doing and turn the notifications off. 
  • Be careful about checking social media: Recognize that social media is a news source, whether you like it or not. You may think you’re just doing something social and catching up on your friends’ lives, but because so many people now share news articles on social media, you can expect to be confronted with a number of headlines. Be intentional about checking social media the same way you are about checking the general news headlines: pick a time of day to do it and set a timer. 
  • Don’t look at your phone before bed: This advice is true all the time but it’s especially important now. The light on your phone can keep you awake if you look at it too close to bedtime and so can reading anxiety-provoking news items or updates from friends and family. Try to avoid your phone for at least 30 minutes prior to going to sleep (and preferably longer). If your friends and family need to get in touch with you, they will call you.

It’s always important to take care of yourself but pay special attention to your level of mental fatigue right now. If you’re feeling exhausted and fuzzy, it may be time to try as hard as you can to take a break from the deluge of new information we’re all facing each day. 

The Cochrane Controversy

And the existential crisis for evidence-based medicine

Evidence-based medicine faces a smoldering, slow-motion crisis as every few months new headlines pop up about a controversy between Peter Gøtzsche and the Cochrane Collaboration. There are both personal and political sides to the story, with magazine stories focusing on palace drama and intrigue, but what may seem like an arcane debate is likely to have significant ramifications on how data are analyzed and what makes a scientific fact a fact.

For over 25 years, patients and experts have come to trust the Cochrane Collaboration’s evidence-based systematic reviews for authoritative guidance on which medical treatments work and which don’t. Cochrane believes so strongly in this type of analysis that its logo is literally a figure from one of its scientific reviews, which concluded that premature babies were safer if their mothers received a medication prior to birth. Its work has reinforced important treatments we now take for granted, such as the finding that folic acid can prevent spina bifida. Busy doctors use the summaries routinely, if not daily, and the reviews are widely respected, having “a well‐deserved reputation of excellence,” according to John Ioannidis, an evidence-based medicine expert at Stanford. Of course, despite the best efforts of the reviewers, uncertainties remain, particularly related to the availability of high-quality evidence, but these reviews are arguably the closest thing medicine has to a gold standard.

The crisis has a number of angles that border on insider gossip. Briefly, Cochrane has behavior guidelines for members to ensure they do not compromise the perceived neutrality of the collaboration. Gøtzsche, a founding member but a well-known “firebrand” for his rigid views on evidence quality, bias, and transparency, has been accused of repeatedly violating those guidelines and was voted off the board. Gøtzsche and his defenders argue that his efforts are in defense of science, pushing for the highest standards of evidence and trying to ensure through transparency that Cochrane reviews lack bias. His methods may be unorthodox or impolite, they argue, but he should not be silenced.

Juxtaposed with other internal debates, such as whether Cochrane’s business model should be top-down or grassroots, the disagreement has unfortunately become personal and acrimonious, complete with lawsuits and factions. (See here, here, or here for more on the controversy.) With the media’s focus on individual narratives, Gøtzsche is portrayed either as a heroic rogue scientist fighting the subtle biases that inevitably arise when a growing proportion of our medical data comes from capitalistic enterprise, or as an out-of-touch champion of a halcyon era of evidence-based medicine in which rigorously examined numbers and data could tell the whole story, regardless of context.

These simplistic narratives of the idealist versus the more practical realists only touch on the larger philosophical debates at issue (which I’ll address in more detail below). Most importantly, there is a key question that touches the core of science and epistemology, the answer to which is likely to affect how science should be assessed and adjudicated: what constitutes scientific evidence, and how do you weigh it?

The crisis at Cochrane is inseparable from growing questions about the problems with evidence-based medicine (EBM) in practice. Clearly, EBM has seen a meteoric rise. A reaction against “eminence-based medicine” – a tongue-in-cheek term for medicine practiced by experienced physicians with little evidence behind their decisions – the movement to rely primarily on evidence for clinical decision-making is not even 30 years old.

But, in what could be called a mid-life crisis, prominent evangelists of EBM noted in 2014 its unintended foibles and suggested reform. The debate, then, can be framed as one of diagnosing the problem and deciding what to do about it. Cochrane relies on what is widely agreed to be the highest-quality evidence: the randomized controlled trial (RCT), published in peer-reviewed journals. But, some argue, those data are often biased, both in individual RCTs and in the fact that most RCT data come from industry-funded sources.

Specifically, Gøtzsche’s critics contend that, on the safety of certain medications such as the HPV vaccine and antidepressants, he “consistently expresses the most extreme views in the most dramatic and misleading way.” For his part, Gøtzsche argues, as he told Undark, that industry interests dominate the available evidence and that Cochrane’s reliance on it makes the organization “a servant to the industry.” With the cost of RCTs inexorably rising, Gøtzsche’s concern seems well placed: only industry has the financial means to undertake the large, expensive trials that Cochrane uses for its analyses.

Yet critics also argue that RCTs are stilted: their experimental conditions are so controlled as to be artificial, bearing little resemblance to actual clinical practice. While “positive” trials can show statistical significance, they may not have much meaningful clinical relevance. That conundrum is neatly captured in Richard Lehman’s whimsical piece about whether to prescribe spironolactone for an elderly patient with many co-morbid diseases: the physician recounts the difficulty of analyzing complicated data from various clinical trials, only to realize that the patient is most concerned about quality of life rather than living as long as possible. Therefore, reformists argue, RCTs are important but must be weighed not only against other types of evidence but also against the individual circumstances and context a patient may be facing at the time of illness. These EBM reformists now argue against the unthinking application of guidelines, checklists, or algorithms, and support individualization of care where necessary. This has given rise to a now well-known term: the tyranny of the RCT. There are unintended consequences of relying too heavily on RCT evidence; scientists questioning decades of dental dogma about flossing is a prominent recent example.

As one BMJ article by Jefferson and Jørgensen points out, the process of publishing RCTs can lead to “unfathomable bias” simply because of the massive distillation required to turn the thousands of pages of details needed to run an RCT into a tight 10-page journal article. Such inevitable compressions and distortions lead the authors to argue that Cochrane may need to ignore such journal articles: “By the law of Garbage In Garbage Out, whatever we produce in our reviews will be systematically assembled and synthesised garbage with a nice Cochrane logo on it,” they wrote.

Jefferson and Jørgensen argue that we should index all trial information to avoid the bias-through-distillation problem. This labor-intensive project would allow more people to peer into the specialized circle of the clinical trial and view trials together in context. It is an important step, but it does not address a major concern they share with Gøtzsche: the preponderance of evidence comes from industry-funded studies.

And that gets to the crux of the debate, but one that scientists seem to be talking around. If work done by the Cochrane Collaboration either to index or review high-quality trials is a public good and worthy of funding, then the data to produce those reviews should also be considered a public good. Having industry take part in clinical trials is not an issue as long as there is a significant proportion of data released with minimal concerns about conflict of interest. Yet, as the proportion of trial data coming from industry has increased, there hasn’t been a concomitant outcry for the production of more publicly funded data. Decades ago, government support for clinical trials was instrumental in the famous cases of polio and influenza vaccines. Rather than super-basic research, should governments fund more applicable science, as suggested in this New Atlantis article on Saving Science?

Moreover, despite Gøtzsche’s efforts to push Cochrane away from bias, the question is not how to do away with all bias – a foolhardy errand – but how, after minimizing it to the greatest extent possible, to be transparent and reflective about whatever biases surface in the data, as Greenhalgh and colleagues note.

RCTs and EBM have revolutionized medicine. But how scientists move past these known flaws will determine how we weigh different types of evidence. It’s unfortunate that such a weighty issue is getting lost in lurid headlines. 


Using This Time to Focus on Your Health in Healthy Ways

Critica COO Catherine DiDesidero posted the note below on her Instagram feed and we felt that everyone should see it:

I’m pretty convinced that the obesity epidemic in America is a large contributor to the vast and rapid spread of Covid-19. This article suggests I’m correct. Indeed, obesity has been shown to be a risk factor for serious complications from infection with the novel coronavirus that causes COVID-19. Obesity is one of those blanket terms that can be applied to various aspects of health.

Another epidemic America was suffering before all of this was the one where we put our health and general well-being on the back burner in favor of other responsibilities – a point also seconded by this article in relation to blood pressure and hypertension (one doesn’t need to be classified as “obese” to have underlying conditions). One thing this pause has gifted us is time – time to start creating new and healthy habits, and time to realize that everything else is secondary…HEALTH IS MOST IMPORTANT.

You can’t take care of anyone else until you care for yourself first. Providing for a family is much easier to do from a healthy state of mind and body. So get started now. Learn quick and healthy recipes to integrate into your diet. Find a class or trainer and build your workout into your routine. We can control a lot of these risk factors with a more proactive approach. Taking care of yourself is the best protection you can hope for. Stay home. Stay safe. Be healthy.


The evolution of a connected world

By Peter McKenzie-Brown

Editor’s Note: How did we get to this point in which cell phones are ubiquitous and dominate our lives? Here, Peter McKenzie-Brown reminds us that there are some downsides to constant cell phone use and then reviews for us the fascinating history of how we have become progressively “wired.”

The Canadian city I live in, Calgary, got top marks in the last report from The Economist Intelligence Unit (EIU), which ranked it as the most liveable city in North America, and number five in the world – after Vienna, Austria; Sydney and Melbourne, Australia; and Osaka, Japan. Two other Canadian cities, Vancouver and Toronto, were also in the top ten.

The EIU index ranks the world’s 140 largest cities on 30 factors grouped into five categories: political and economic stability, health care, culture and environment, education, and infrastructure. In the most recent report, Vienna topped the list with a near-perfect score of 99.1 out of 100, putting it just ahead of Melbourne, Sydney, and Osaka. Then came Calgary. According to the EIU report, “higher crime rates and ropey infrastructure pull some bigger cities like London, New York and Paris down the league table, despite their cultural and culinary attractions.”

Yet as I walk the streets of this city, or ride public transit, I’m always amazed to observe that most people on sidewalks, on trains and buses, in restaurants, and even in parks seem to spend endless hours on the communication devices that dominate their lives. Personally, I can’t imagine spending a walk in my favourite park staring at an iPhone – in my case, a device I parked some years ago. The more I watched this behaviour, the odder it seemed, and I quickly found a great deal of online research fretting about our collective online obsession.

Sure, it’s easy to pass cell phone addiction off as something that comes with the technological advances of the last 20 years; however, cell phones come with real risks. For example, a study at Temple University’s College of Health Professions and Social Work compared the volume of text messages college students sent with the amount of neck and shoulder pain they experienced. The result was no surprise: the more you text, the more pain you are likely to experience.

There’s also the matter of dangerous driving. America’s Insurance Institute for Highway Safety not surprisingly found that drivers who use cell phones while behind the wheel were four times more likely to have an accident than those who did not. What’s more, using a hands-free device instead of a hand-held phone doesn’t improve safety.

 Finally, there are sleep disturbances. Using a cell phone before bed can keep you awake, according to a study conducted by Wayne State University School of Medicine in Detroit and researchers in Sweden. Conducted over an 18-month period, the study involved 35 men and 36 women between the ages of 18 and 45. Their conclusion? The radiation emitted by cell phones disrupts sleep patterns.

I’ve always been interested in how things develop, so I started investigating the origins of today’s intensely interconnected world – one that hosts both promise and risk. The balance of this paper shows how that world took nearly two centuries to evolve.

How did this begin?

The world began to get wired with the invention (in present-day Germany in the 1840s) of the electrical telegraph. These point-to-point systems used coded pulses of electric current to transmit text messages over ever-longer distances.

Albeit brief and ineffectual, the first scientific attempt to illustrate the speed and power of electricity dates back to a 1746 experiment by Jean-Antoine Nollet. A French physicist of the Enlightenment, Nollet had also been a deacon in the Catholic church and was thus able to call on former colleagues to help him with his work.

To test the speed of electrical transmission, Nollet gathered hundreds of lengths of iron wire, roughly two hundred monks, and an array of Leyden jars. These primitive devices, which stored static electricity, were the discovery of a Dutch physicist at the University of Leiden in 1746 – hence the name. (Independently, German inventor Ewald Georg von Kleist had developed a similar device the year before.) The French monks distributed themselves in a circle a mile or so in circumference, each holding a length of wire in each hand to link himself to compatriots on his right and left. Without a word of warning, Nollet discharged the contents of the batteries into the wire, sending an electric shock through the chain of monks.

Nollet was unable to measure the actual speed of electricity with the experiment, since all the monks reacted to the electric shock simultaneously. His notes record that the transmission speed was extremely high: the current appeared to traverse the circle of monks almost instantaneously. To entertain the king of France, he later repeated the same “experiment” on 180 French soldiers.


The Nollet experiment may have planted the seed for the concept of telegraphy – the transmission of data over long lengths of wire using only electrical impulses. However, it has nothing to do with the origin of the word “telegraph,” which originally did not involve wires at all. The term originated with Frenchman Claude Chappe, but the system he developed was mechanical rather than electrical: a chain of semaphore towers, with operators signaling from tower to tower. Napoléon used the system to coordinate his empire and army, and other European states copied it.

Today, the word telegraph suggests dots and dashes transmitted in Morse code over long-distance cables, ultimately yielding telegrams. But the word originally referred to Chappe’s semaphore system, which used no electricity at all. Moveable arms sat atop the towers, and operators could use telescopes to read these mechanical messages from other towers; thus, the towers could be quite a distance apart. The system could transmit messages quickly and efficiently, so the French government built a national network. The French word télégraphe comes from télé (at a distance) and graphe (writing) – thus, “far writer.”

Before moving on to electrical message transmission, it is worth noting that the Leyden jar did contribute significantly to serious science. Around the time America was gaining independence, American rebel and diplomat Benjamin Franklin used one to show that lightning is an electrical discharge.

Franklin called a series of linked Leyden jars, which can store greater electric charges, a “battery.” Unlike modern-day batteries, no matter how many of these devices were linked together, they released all their energy in a single burst.

That said, this early electrical storage system did not entirely end up on history’s junk heap. In miniaturized form, a descendant of the Leyden jar is hard at work in most of today’s electronic products: the capacitor. Charged by an electrical current, these devices can still release their charge nearly all at once. That rapid charge/discharge operates the flash attachments on cameras, for example, and the tuning circuits in radios. Capacitors also smooth the signals driving loudspeakers, making music audible and measured rather than an incomprehensible burst of sound.
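For readers curious about the physics, the “all at once” behaviour can be made precise with the standard textbook model of a capacitor of capacitance C discharging through a resistance R: the voltage falls exponentially,

```latex
% Standard RC discharge: V_0 is the initial voltage, RC the time constant.
V(t) = V_0 \, e^{-t/RC}
```

In a camera flash, R is tiny, so the time constant RC is a small fraction of a second and the stored energy is dumped effectively instantaneously.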

Electrical Telegraphy

In the 1790s, at the tail end of the Enlightenment, an argument about electricity between two Italian scientists—Luigi Galvani and Alessandro Volta—led to Volta inventing the first true battery. For the first time, electricity could be put to continuous work. This led to experiments using steady electrical currents for message transmission.

As we have seen, the Napoleonic empire desperately needed a new, high-speed communications system – preferably one that used wires and could instantly reach places beyond line of sight. Such systems did not develop until Volta’s battery became widely known, however – well after the Napoleonic wars were over.

Inventors came up with many schemes for encoding information electrically. As is often the case, the most successful approach was the simplest. The telegraph code still bears the last name of its American inventor, Samuel Morse, who developed the system in 1838. It required only a single wire, which made it simpler and less expensive than rival designs. In addition, Morse’s approach reduced the complexity of the technology by putting it – literally – into the hands of the operator, who had to learn to both send and receive Morse code.

In the beginning, there was a widespread view that the dot-and-dash system would be too difficult to learn, but it turned out to be a bit like learning to play a musical instrument: not everybody mastered it, but some operators became quite skilled. Once they did, they could quickly and easily send and receive messages.

By the second half of the nineteenth century, nations across the world had created commercial telegraph networks, with local telegraph offices in most cities and towns. These systems enabled people to send telegrams to anyone, for a fee. Although an 1854 attempt failed, telegraph companies were ultimately successful in laying submarine telegraph cables, which created a system of rapid communication between continents. By 1865, the Morse system was becoming the standard for domestic and international communications in Europe and much of the Americas, and in distant parts of the European empires.

These networks permitted people and businesses to transmit messages across continents and oceans almost instantly, with widespread social and economic impacts. Telegraphs are still in use, although teletype networks have been replacing them for a hundred years.

Canada’s Telephone?

Did Canada really invent the telephone? We Canucks think so, and the first long-distance tests certainly took place on Canadian soil. That said, inventor Alexander Graham Bell – a Scot who had migrated to Canada with his family as a child – did his work in Boston, became an American citizen and was one of the founders of media giant American Telephone and Telegraph, now known as AT&T.

It was in Boston that the telephone – it did not yet have a name – first showed signs of life. On March 10, 1876, Bell used the instrument in Boston to call his colleague, Thomas Watson who was in another room and out of earshot. He famously said, “Mr. Watson, come here – I want to see you” and Watson soon appeared at his side.

Continuing his experiments during a visit to the Bell homestead in Brantford, Ontario, Bell brought home a working model of the device. On August 3, 1876, from the telegraph office in Brantford, Bell sent a tentative telegram to the village of Mount Pleasant, six kilometres distant, indicating that he was ready. He then made a telephone call via the telegraph wires and heard faint voices replying.

The following night, he amazed guests as well as his family with a call from his parents’ home to the office of the Dominion Telegraph Company in Brantford along an improvised wire strung up along telegraph lines and fences and laid through a tunnel. This time, guests at the household distinctly heard people in Brantford reading and singing. The third test on August 10, 1876, was made via the telegraph line between Brantford and Paris, a town in Ontario thirteen kilometers distant. Often called the world’s first long distance call, this test demonstrated that the telephone could work over long distances, and Canada now recognizes the Bell homestead as a national historic site.

Commercialization of the telephone soon began. In the earliest days, instruments were paired for private use between two locations. Users who wanted to communicate with persons at multiple locations had as many telephones as necessary for the purpose.

Later telephones took advantage of the exchange principle which developed for telegraph networks. Each telephone was wired to a telephone exchange established for a town or area. For communications outside this exchange area, trunks were installed between exchanges. Networks were designed in a hierarchical manner until they spanned cities, countries, continents and oceans.

Going Wireless

These developments were soon superseded by other technologies that transformed human connectivity. Known to our grandparents as “the wireless,” the radio carried signals by modulating electromagnetic waves.

In 1895, Italian inventor Guglielmo Marconi became the first person to “cut the cord” of electronic communications, sending wireless signals across the Italian countryside. In 1900 he patented this invention, calling it tuned, or syntonic, telegraphy. We call it the radio, and it quickly broke new ground.

Italian-born Marconi studied physics and became interested in the transmission of radio waves after learning of the experiments of the German physicist Heinrich Hertz. He began his own experiments in Bologna in 1894 and soon succeeded in transmitting a radio signal which he could receive three kilometres away.

Receiving little encouragement for his experiments in Italy, he went to England two years later. There he formed a wireless telegraph company and was soon sending transmissions over distances of 15 kilometres and more. In 1899, he transmitted a signal across the English Channel. That year, he also equipped two U.S. ships to report to New York newspapers on the progress of the America’s Cup yacht race. That successful endeavour aroused widespread interest in Marconi and his wireless company.

To put the wireless in perspective: electrical telegraphy had sped the spread of information from days, weeks, or months to a few hours. Reporters could receive the news, write it up, send it to print in a newspaper, and people would read about it perhaps half a day later. As radio developed, numerous people could hear news broadcasts at the same time. And as radio networks developed their programming, radio became the most powerful medium yet invented for spreading information and shaping public opinion.

Marconi’s greatest achievement came on December 12, 1901, when one of his wireless systems at Cornwall, England, successfully transmitted a message (simply the Morse-code signal for the letter “s”) across the Atlantic to St. John’s, Newfoundland – then, a British colony; today, a Canadian province. That transatlantic transmission won him worldwide fame.

Ironically, detractors of the project had been correct when they declared that radio waves would not follow the curvature of the earth: Marconi’s transatlantic radio signal had indeed headed into space, but it bounced off the ionosphere and back toward Earth. Much remained to be learned about the laws of radio waves and the role of the atmosphere in radio transmission, and Marconi played a leading role in radio development and innovation for three more decades.

Experiments in television development began in the 1920s, but the Great Depression and World War II slowed progress. Once the war was over, television ownership exploded.

From Miniaturization to Wi-Fi

In the post-war world, Japan led the miniaturization of electronics; by the mid-1950s it had created tiny wireless radios as small as your hand. Bearing the word “transistorized” on their cases, they were the first electronic devices in North America to also bear a Sony logo.

By an odd series of coincidences, these devices were first exported to Canada in the summer of 1955, and there they created quite a stir. They amazed their new owners, who were accustomed to furniture-sized radios plugged into an outlet. North America learned about them from the excitement of those lucky enough to own one.

From those days on, miniaturization has been the trend for communications devices – a trend that began to accelerate in the 1990s, with the rapid growth of the World Wide Web. Today our iPods and other devices fit easily into our pockets, and they make functions available that once required a telephone, a camera, a movie camera, a television, paper calendars, accounting spreadsheets, books, publishing houses – the list goes on, and on, and on. The social media that are part of this panoply are relatively new phenomena; to have a post go viral is many a player’s ultimate dream.

Wi‑Fi, a family of wireless networking technologies commonly used for local area networking and Internet access, is a trademark of the non-profit Wi-Fi Alliance, which restricts use of the term to products that meet its technical protocol.

From the beginning, the primary goal of this organization was that Wi-Fi devices work across all vendors and, as new devices become available, be “backward compatible” in the sense that they would continue to work with older devices – including the original devices made according to this protocol. In this way, the alliance responded to growing demand for Wi-Fi with new technologies and programs that increase connectivity, enhance roaming, and – the organization’s wording – “improve the user experience.” Members of the Wi-Fi alliance now produce desktop and laptop computers, smartphones and tablets, smart TVs, printers, digital audio players, automobile scanners, automobiles, monitors, drones, facial recognition cameras and countless other devices that would have been largely unimaginable at the beginning of this millennium.

In the years since the Enlightenment, electrical devices from telegraphy through radio and radar have played key roles in every aspect of our lives, both during times of peace and war. As I write these words, Wi-Fi devices are moving into a fifth generation of development. The lesson from that reality, perhaps, is that electrical devices are playing ever-more-subtle roles in an electrifyingly complex new world.