Woes at the Pharmacy

The Public is Unaware of the Problems that Plague Filling a Prescription

This commentary was suggested to us by Critica advisor Carrie Corboy. Carrie is a pharmacist and Senior Director at Janssen Research and Development, a division of Johnson & Johnson. She focuses on medication adherence, policy, and development and helped us prepare this commentary.

         Once upon a time getting a prescription medication was fairly easy. Your doctor or other qualified healthcare provider wrote you a prescription on a prescription pad, you took it to your local pharmacy, and later picked up your medication. Your pharmacist was usually relaxed and had time to answer questions you might have about how to take the medication and any adverse side effects. Maybe it never really was that easy; it certainly isn’t today.

         We have written before about the problems we have getting facts about our healthcare system in order to make rational decisions to reform it. Here, we address four areas that concern getting medication from a pharmacy: electronic prescribing, prior approval, overworked pharmacists, and prescription benefit managers.

Electronic Prescribing is Mostly a Benefit

         Doctors in most cases no longer “write” prescriptions on those little prescription pads but send them directly to the pharmacy via an electronic prescribing system. Electronic prescribing, or e-prescribing, has many advantages over written prescriptions. Foremost among them is increased safety: pharmacists no longer have to try to interpret physicians’ illegible handwriting, which cuts down on errors. E-prescribing also reduces the number of phone calls needed between pharmacist and prescriber, thus saving valuable professional time, and allows for efficient recording of each patient’s medication history (although this may only apply to the pharmacy chain where a prescription is filled). It also reduces the problem of stolen prescription pads and forged prescriptions.

Electronic prescribing has many advantages over written prescriptions, although there are still a few problems (image: Shutterstock). 

         But there are also problems with e-prescribing. Prescribers may feel that the available fields on the e-prescribing page do not adequately convey exactly what they want their patients to take and how it is to be taken. To overcome this, the prescriber can write comments in a notes section, but these notes not infrequently contradict the instructions in the drop-down menu field. Resolving the problem takes up pharmacist time. Electronic prescribing also makes it easy for chain pharmacies to message doctors repeatedly to refill prescriptions; sometimes this is helpful but, as we have been told, other times it involves medications the doctor and patient have decided to discontinue, and the repeated reminders clog up the physician’s inbox and degrade the physician-pharmacist relationship.

Another problem we have heard is that e-prescribing makes it easier for doctors to refill prescriptions multiple times without seeing the patient in person; it is generally recommended that patients be reevaluated at some reasonable intervals while taking medications for prolonged periods of time. Finally, electronic prescribing software can stop working because of problems with the e-prescriber vendor’s computer system; doctors and patients must wait while those computer problems get fixed.

         Overall, the benefits of e-prescribing seem to clearly outweigh the downsides and we certainly do not advocate a return to written prescriptions. But now that big businesses have taken over the entire prescribing industry, including e-prescribing vendors and chain pharmacies, there is clearly the need for a number of improvements, some of which may have to be legislated.

The Bane of Prior Authorization

         A more pressing problem arises when that prescription hits the pharmacy and a red flag goes up: your insurance company won’t cover the medication unless the doctor who prescribed it gets in touch with the insurer and justifies the prescription. This process is called “prior authorization” (also known as prior approval and pre-certification). Perhaps it was once a good idea. For example, insurance companies are absolutely right to balk at paying for a brand name medication when a cheaper generic version is available. Asking the doctor to justify why he thinks a brand name drug is necessary is reasonable under those circumstances. In fact, some states have laws that require the dispensing of the equivalent generic product, when one is available, unless the doctor specifies on the prescription “dispense as written” or “brand medically necessary” or the patient demands the brand name medicine.

         But the prior authorization procedure has clearly gotten out of hand. Although health insurance companies claim it is a method to improve patient outcomes by ensuring that only safe and necessary medications are prescribed, physicians believe that prior authorization is merely a tool health insurers use to lower their costs. There has been a steady increase in prior authorization requests, placing an ever-increasing burden on physicians to spend lengthy amounts of time on the phone with insurance company employees. A survey by the American Medical Association (AMA) found that medical practices spend an average of two business days per physician on prior authorization requests.

Prior authorization requests are associated with delaying care and harming patients. Prescribers are also frustrated, according to the AMA survey, because many of the drugs for which prior authorization is required are “neither new nor costly.” One physician we know, for example, was asked to do a prior authorization for an antidepressant that has been available as a cheap generic for decades. She was given a list of medications she must give the patient first before the requested antidepressant; none of the drugs on the list were antidepressants and some were drugs for cancer and hypertension.

As another example, let’s take someone suffering with depression for whom a psychiatrist has prescribed the antidepressant bupropion. One insurer may cover the cost of generic bupropion, another might insist that only the brand name drug, Wellbutrin, will be covered, while a third may insist that a totally different antidepressant be tried first. Since there is little evidence of much difference among antidepressants in terms of effectiveness, and since what matters more in antidepressant selection is usually the patient’s other circumstances (other diseases and medications that might make one antidepressant better than another), this prior authorization is unlikely to have anything to do with what is best for the patient. Nor is it clear that it is related to cost, since the three insurers each seem to have a different price for the same drug.

Worst of all, neither prescriber nor patient in this case can predict which version applies to the patient’s own specific insurer, something that will likely not be revealed until either the physician gets a note on her electronic prescribing application that prior authorization is needed or the patient shows up at the pharmacy and is told the medication cannot be filled. This process clearly needs increased transparency and possible regulatory relief. It should only be used when there is a legitimate potential for improving patient care or lowering costs without harming care. Right now, it seems a tool perversely designed to harass healthcare professionals and subject patients to needless delays for their medicines.

The Pharmacists’ Plight

         Let’s say your doctor has successfully e-prescribed your medication and succeeded in a prior authorization process to get your insurer to cover its cost (minus co-pays, co-insurance, or what’s remaining of your deductible, of course). If you’ve decided not to have your prescription delivered to your home, you are now ready to go to the drugstore to pick it up. Your local pharmacy is likely to be part of a huge national chain, such as CVS, Rite Aid, or Walgreens.[1] As you approach the pharmacy inside the larger retail store, you will likely hear announcements like “one pharmacy call” repeating over and over and phones ringing, and see cars waiting at the drive-up window. Behind the pharmacy counter you will still find someone who has undergone rigorous training to become a pharmacist (in fact, pharmacists hold doctorates in pharmacy) and who is an expert on the risks and benefits of a wide range of drugs. But that pharmacist is also working under tremendous corporate pressure.

According to a recent New York Times report, pharmacists said “they struggled to keep up with an increasing number of tasks—filling prescriptions, giving flu shots, answering phones and tending the drive-through, to name a few—while racing to meet corporate performance metrics they characterized as excessive and unsafe.” These corporate metrics, like answering the phone within three rings, are generally unrelated to the quality of patient care and create an environment of constant multitasking and interruption. This has led to concerns that pharmacists will make more errors filling prescriptions. Writing on behalf of the Pharmacist Moms Group, pharmacist Suzanne Soliman made this request:

…chain pharmacies to publish all of their metrics for calculating pharmacist and technician hours and ultimately error rates. We also encourage pharmacies to publish how many prescriptions are filled each month and how much staff they have so that patients can make an informed decision as to which pharmacies provide adequate staffing to suit their needs.

Pharmacists are highly trained health professionals, but because they now work for large chain pharmacies the demands on them have become severe (image: Shutterstock). 

         In addition to causing practical problems like increasing the prescription fill error rate and taking up valuable pharmacists’ time, corporate demands on pharmacists pose a moral dilemma for them. A recent ethical analysis of pharmacy practice argues that pharmacists work under ethically challenging circumstances, provoking a “moral crisis” in addition to the practical one. We believe that the former is as important as the latter in causing pharmacist distress. 

American corporations are, of course, permitted to keep their corporate secrets. We don’t require that General Motors tell the public what new car designs it is working on or that department stores reveal how many employees they have. But if overworked pharmacists are in jeopardy of making errors and becoming burnt out, it becomes a public health issue that the public is entitled to fully understand. Soliman’s plea for more transparency from the chain pharmacies seems urgently needed to protect the health and safety of both pharmacists and their patients. We may not be able to return all the way to the days of the friendly neighborhood pharmacy, but we have a right to demand that pharmacists have working conditions that do not jeopardize our safety.

PBMs Rule

         At this point, you hopefully have a bottle of the medication your doctor wants you to have in hand and you may even have been able to ask the pharmacist to clarify the instructions on the bottle and explain the drug’s adverse side effects to you. It is now time to pay for the medication, and you hold your breath wondering how much it will be. You’ve been taking this medicine for a long time and you know it is a generic version, but its price seems to change from month to month and from pharmacy to pharmacy. You wish that you could just look up the price of 30 pills of this medicine on the internet. It’s not like buying a car, after all. Shouldn’t somebody be able to just tell you how much it costs?

         In fact, how much drugs cost in the U.S. is a mystery to most of us and a large part of that stems from something called pharmacy benefit managers (PBMs). You may not have known that you have your very own PBM. If someone is covering the cost of your prescription medication, you probably do have a PBM. It is not there to help you get your medication. Rather, a PBM is a corporation that serves as a middle person between insurers and other prescription medication payers on the one hand and drug companies on the other. The insurers essentially hire a PBM to manage the pharmacy benefit portion of your health insurance or Medicare Part D plan. The less the PBM actually spends buying drugs from drug companies, the more money it gets to keep. More than 90% of the $450 billion we spend annually on medication is processed by PBMs.

         PBMs are supposed to lower the cost of drugs by using their purchasing power to negotiate better prices from drug companies and by keeping lists called formularies of medications they will cover. Putting only the cheapest versions of drugs on the formulary is another way that PBMs try to hold down costs.  Ideally, these savings would be passed on to consumers.

         But PBMs operate largely in secret and it increasingly turns out that they are maximizing their own profits but not necessarily saving consumers money. One suspect PBM practice is to receive so-called rebates from drug companies. Rebates are funds returned by drug manufacturers to PBMs to make it more attractive for the PBM to list the drug company’s more expensive drugs on their formularies. The PBM shares a portion of the rebate with the health insurer and retains the rest. Ideally, this would create an incentive for the insurer to lower premiums, but it also creates a clear incentive for PBMs to put high cost drugs on the formularies and this translates into higher costs for patients. Interestingly, until very recently, pharmacists were forbidden from telling their patients that they could pay less for a prescription by paying for it outside of their insurance coverage. 

Rebates are big business: the amount of money rebated by pharmaceutical companies to PBMs increased from $39.7 billion in 2012 to $89.5 billion in 2016. The amount of rebate for each drug, however, is usually a corporate secret and can change annually as PBMs negotiate new prices with manufacturers. This means that consumers and the employers providing health insurance are largely kept in the dark about how much a drug actually costs and often cannot find out what they will pay until they are at the pharmacy counter with a credit card in hand.

Last, and possibly worst, the prices of medicines that are made available to the public (called the average wholesale price, or list price) are quite high because they are the starting point to which the rebates are applied. However, for people with no medication insurance, this is exactly the price they pay for medicines. Therefore, the most vulnerable people are charged the most. It thus becomes clear why many patients simply do not fill the prescriptions they need to survive: they cannot afford them.

         Physician Guy Culpepper wrote on LinkedIn recently that CVS Caremark, the PBM of the CVS Health corporation, was “only covering the Brand name version of some prescription drugs. So if I prescribe the generic, in an effort to save my patient money, CVS Caremark will refuse to cover it.” Even though the real price of the generic drug is less than its equivalent brand name version, in this case the PBM gets a bigger rebate for covering only the brand name version. That rebate, however, is not always passed on to the customer but instead can result in a higher price and more out-of-pocket cost. Culpepper called rebates “kickbacks” and went on to state that “PBMs will pretend these ‘rebates’ will mean lower costs for the payers. But if that were true, PBMs would support transparency to show us just how much money they save us.”

         If rebates save money, that is of course not obvious to consumers, who face ever-increasing prices for medications and higher out-of-pocket costs. An analysis of the effects of rebates on the cost of drugs for seniors covered by Medicare Part D showed that they increased both out-of-pocket costs and Medicare drug spending. Once again, the place to start fixing this problem is with greater transparency. Regulators should demand that rebates be publicly disclosed, if not eliminated altogether. Corporate secrets do not seem justified in managing our very expensive healthcare system. We desperately need to reduce healthcare costs so we can extend quality coverage to more people. Rebates turn out to be a very poor method of controlling drug costs and more likely to improve corporate profits instead.

         As we have noted before, Americans spend much more for healthcare than citizens of other high-income countries and get the poorest outcomes in terms of lifespan. Medication spending is one factor that is increasing our exorbitant healthcare cost outlay and several of the elements we have discussed here also have the potential for jeopardizing patient well-being. Our concerns about electronic prescribing do not undermine its many benefits, but prior authorization, overworked pharmacists, and rebates to prescription benefit managers are all detrimental in their present form to the public’s health, create dissatisfied healthcare professionals, and function poorly to control costs. 

In all of these cases, the minimum step to improvement would appear to be legislating increased transparency. We need to know how insurers and their PBMs determine what prescriptions require prior authorization and reform that process so it truly saves money without tying up doctors with endless phone calls and delaying treatment; we need chain pharmacies to publish their work flow metrics and make it clear that driving pharmacists to the point where mistakes are unavoidable is unacceptable; and we need drug companies and PBMs to reveal the complex tangle of drug rebates and, if necessary, regulate this process so that it truly serves to reduce costs instead of forcing people to accept more expensive drugs and higher out-of-pocket costs. Making medication unaffordable to some people is obviously not in the service of improving healthcare.

         A person who needs to fill a medication prescription is likely suffering from a medical problem that is at very least distressing and uncomfortable and at worst painful and even life-threatening. Getting the prescription filled should not add to the patient’s woes, nor should it drive healthcare professionals to distraction or unnecessarily drive up healthcare costs. It is time we examine carefully the whole system by which we get our medication and intervene when necessary at every step of this complex and often mystifying process.

[1] Disclosure: Critica team member Jack Gorman owns a small amount of stock in the Rite Aid Corporation.

The Vaccine-Preventable Diseases

Critica Begins a New Series About What We Are Vaccinating Against

Part One: MMR

Although a large majority of Americans believe that vaccines are safe and effective, a sizable minority are “vaccine hesitant” and worry about the safety of childhood vaccinations. One reason often cited for vaccine hesitancy is that many of the diseases we vaccinate against are rare in the U.S. today, so new parents have never seen them. Without a mental image of what a vaccine-preventable disease looks like and does, people may have trouble appreciating the risk it poses. If the risk of the disease seems, mistakenly as we will see, negligible, then any alleged risk of the vaccine against it becomes more believable.

         Simply telling people that the risk of a serious reaction to a vaccine is one in several thousand does not have the weight of a single anecdote of a child who has suffered from one of the rare serious adverse side effects.  Similarly, just telling people that a vaccine-preventable disease causes “X” number of serious complications may not be convincing. But seeing children with these conditions is highly persuasive.

One member of the Critica team began his medical career in pediatrics and is old enough to have seen cases of measles, mumps, rubella, and diphtheria. For him, the idea that anyone would hesitate to vaccinate against these diseases is hard to grasp. Recently, he was describing how serious the complications of measles can be to a young parent who had never seen a case. She seemed surprised that measles can be, and still is, sometimes fatal. “I don’t think people my age have any clue what the diseases we vaccinate our children against can actually do,” she said.

         And so we decided to begin a series of articles describing some of the vaccine-preventable diseases, starting here with measles, mumps, and rubella, the targets of the MMR vaccine.  That’s the one, of course, that was once accused of causing autism, a thoroughly false claim that we won’t bother getting into here. Nor can we counter vivid anecdotes of serious vaccine reactions with vivid anecdotes about children with these illnesses because we have none: despite the recent and alarming increases in measles cases in the U.S., it is still uncommon and most pediatricians will go through an entire career without ever seeing a single case of measles, mumps, or rubella.

         We do hope, however, that these short descriptions of vaccine-preventable diseases will serve as a reminder that there is a very good reason we vaccinate against them: at the very least they make young children thoroughly miserable for a week or more and at worst they cause severe and sometimes fatal complications.

Measles Still Kills

         Measles, also known as rubeola, is caused by a remarkable virus. Remember that all the cells in the human body have genes composed of a molecule called DNA and that DNA is transcribed to RNA, which then begins the process of engineering protein production. The measles virus, on the other hand, has no DNA. It contains only RNA and is therefore called an RNA-virus. The same thing is true of the virus that causes AIDS, HIV. Carrying its genes as RNA makes it easier for the virus to take over the machinery of the cells it infects and redirect what they do.

         And just like HIV, the measles virus has the uncommon ability to suppress an infected person’s immune system for months and sometimes longer.

That makes people who get measles susceptible to getting infected with other viruses and bacteria, increasing the risk of serious complications.  

         The measles virus lives in an infected child’s nose and throat and is spread to others by coughing and sneezing. It can actually live in the air of a room where someone coughed for as long as two hours. An infected child can spread measles to other people from four days before a rash appears to four days after it appears. Because measles often starts with symptoms that are similar to the common cold, like coughing, runny nose (coryza), fatigue and loss of appetite, a person can spread it before even knowing they have measles. And spreading it is easy; measles is one of the most contagious viruses known. As many as 90% of unprotected people who come in contact with someone with measles will get it.

         The rash is what gives away the diagnosis of measles. The illness often starts with little white spots, called Koplik spots, inside the cheeks. A day or two later the characteristic red, bumpy, blotchy, and somewhat itchy rash, shown in the illustration below, starts on the face and neck and spreads throughout the rest of the body, all the way to the feet. During this week to ten days of unfolding symptoms, a child with measles feels awful. While it is true that you can only get measles once, that one time is memorable for the patient, who spends it with watery eyes, coughing, sneezing, feeling very weak, and itching all over. High fever makes it impossible for the child to do much and kills his or her appetite. If you are a parent with a child who has measles, you are likely to say to yourself, “even though I know this is going to go away, I wish my child didn’t have to suffer like this.”

A young, very unhappy looking girl with the rash typical of measles (source: Shutterstock).

         Unfortunately, measles doesn’t always “just go away.” While most children recover completely, others have serious complications. Before the measles vaccine became available, measles killed more than 2 million children every year. That number has been drastically reduced since the measles vaccine was introduced, but unvaccinated children can still succumb to measles. In 2018 there were about 140,000 measles deaths around the world, mostly in children under five years old. For every 1000 children who contract measles, two die. Most of those deaths occur when the measles virus infects the lungs, causing pneumonia, or the brain, causing encephalitis. Measles can be especially devastating for children with suppressed immune systems, including children with cancer. Infants who have not yet been vaccinated against measles are also particularly vulnerable to the serious complications of measles infection.

         Against all of this, it seems clear that vaccinating children to protect them from measles is a very good idea. The measles vaccine is a “live attenuated virus” vaccine, meaning that the naturally occurring measles virus is changed by growing it in cell cultures to a form that is incapable of causing measles but still able to stimulate the immune system to make antibodies against it. If at any point a vaccinated person is exposed to the real measles virus, the immune system will then immediately attack the virus and prevent the vaccinated person from getting sick.


The leading cause of death from measles is pneumonia. The measles virus can infect the lung, as seen in this illustration where the white patch in the right lung represents the infection (source: Shutterstock). 

As we mentioned earlier, we are not going to debunk here the completely erroneous claims that the MMR vaccine causes autism or any other damage to the brain or immune system. It is true that there are people who have allergies to all kinds of things and that includes the MMR vaccine. But just keep this in mind: fewer than one in one million people have a serious allergic reaction to the MMR vaccine, while two of every 1000 children who contract measles will die. Consider those odds.

Mumps: A Silly Name for a Serious Disease

         The word “mumps” sounds a bit silly and perhaps encourages people to think of the illness as something that is not very serious. That is, of course, a mistake. The mumps virus belongs to the same family of viruses as the measles virus (the paramyxoviruses) and, like measles, it first infects the nose and throat and is spread mainly by coughing and sneezing. Also like measles, mumps is highly contagious.

         Although rarely fatal, mumps infection can cause serious and sometimes permanent damage. The hallmark of the illness is swelling of the salivary glands, especially the parotid gland. This is very uncomfortable and often painful, but almost always resolves in about a week without complications. More serious is infection by the mumps virus of the testicles, a condition called orchitis. About 10 to 20% of postpubertal boys and men who get mumps develop orchitis. In about half of those cases, there is permanent damage to some of the internal structures of the testicles, resulting in reduced sperm counts and impaired fertility.

This poor child shows the characteristic swelling of the parotid gland, one of the salivary glands, caused by the mumps virus (source: CDC). 

         Another complication of mumps that occurs in about 10% of infected people is meningitis, an infection of the membrane that lines the brain and spinal cord. Although uncommon, mumps can cause temporary hearing loss and sometimes permanent deafness. In fact, it was once one of the leading causes of deafness in children.

         Swollen testicles, impaired fertility, brain infection, and deafness, then, are all some of the potential complications of having mumps. Doesn’t sound as if most parents would want to take the risk of withholding the vaccine, does it?

Rubella: the German measles

         It is true that rubella (the “R” in MMR) is usually a very mild illness. So mild, in fact, that many people don’t realize they have it or think it’s just a routine viral infection. And that is the problem, because the serious threat from rubella infection (also known as “German measles”) occurs when it strikes a pregnant woman. Because rubella can be so subtle and because people who have it are contagious for a week before the typical rash appears, it is not possible to simply keep pregnant women away from someone with rubella. The damage can be done by someone who doesn’t know they have it or to a woman who doesn’t yet know she is pregnant.

         The rubella virus belongs to a different family of viruses than measles and mumps (the togaviruses), but like them it is an RNA-virus. Children with rubella may have fever, runny nose, swollen glands, red eyes, and a pinkish rash. It’s also spread by coughing and sneezing. It goes away by itself in a few days and rarely causes any further complications or damage. So why bother with a vaccine?

         The answer is that rubella infection in a pregnant woman, especially if she is in the first trimester, causes the congenital rubella syndrome. The baby whose mother was infected may be born with cataracts, deafness, and heart defects. There may also be problems with other organs, growth retardation, and intellectual disabilities. That is, of course, if a baby is born at all: rubella infection can also cause miscarriage and stillbirth.

This illustration shows viral particles, like the rubella virus, infecting an unborn fetus. The baby will likely have the congenital rubella syndrome (image: Shutterstock). 

         When the same member of the Critica team who started his medical training in pediatrics was in college in 1973 he got rubella. He didn’t feel that sick, but he had a rash all over his body and went to the student health service. He remembers the doctor there saying, “I’ve seen deaf babies because of German measles, you are going into isolation in the hospital.” One interesting thing about this case is that he was told as a child that a viral illness he had was German measles, but that could not have been possible because you cannot get it twice. Rubella infection was confirmed by a blood test when he got it in college and that means what he had as a child had to have been something else. That’s how easy it is to mistake rubella for a different viral infection and hence to unwittingly spread it to a pregnant woman.

         It is true that if you don’t vaccinate your child against rubella, someone else will suffer the consequences: the newborn of an unvaccinated pregnant woman whom that child infects. It would seem to take a very high degree of self-centeredness and cynicism to use that as the excuse for not administering a completely safe vaccine against rubella to every child.

A Reminder

         We will soon continue this series of reminders about what the diseases we rarely see anymore, because vaccines prevent them, can actually do. Next up will be the diseases prevented by the DPT vaccine (diphtheria, pertussis, and tetanus). There will be more harrowing stories of children dying or being born with terrible congenital abnormalities. Next time someone tries to tell you that vaccines hurt children, you might show them this commentary and correct that impression: it is the diseases that vaccines prevent that hurt children.

A Statement from Critica

To the extent that we scientists humanize our work, science can take us a bit of the way in helping solve the very long-standing lack of racial justice in the United States. Epidemiology tells us, for instance, that black Americans are disproportionately affected by COVID-19, climate change, and air pollution. Economics research shows how vast income inequality among the races in the U.S. is. Research in neuroscience, psychology, and sociology helps us understand how racial bias forms and the strategies for overcoming it.

If our work at Critica teaches us anything, however, it is that science is never value neutral. Physical laws and chemical processes may be value neutral, but the human beings who study, experiment, and interpret them are never without impact on the work of science.  

         That is why it is imperative that Critica affirm its commitment to the deeper humanization of society in the face of an overwhelming moral wrong. Our Board and Officers are united in condemning what we are learning is widespread police misconduct, something we should have known long before the tragic death of George Floyd. We applaud those who have joined the international protest movement against racial injustice and add our voice to its principles and demands. We pledge to do more within our own organization and ourselves to overcome the crippling biases to which we are all subject, however aware or unaware of them we may be.

         It is of vital importance to us that whenever we are wrong, we acknowledge our mistake and fix it. That policy usually applies to a scientific belief; if we ever back something that is incorrect, we hope we will be ready and willing to correct our error when the evidence steers us in that direction. Overcoming racial prejudice is more difficult because no matter how well-meaning we are, no person is immune to it. Our Board and Officers are entirely composed of white people; we recognize the critical need to be diverse. We commit to understanding how we created a monoracial institution, developing greater awareness of ourselves as racial beings, and becoming a place that is authentically welcoming to people of color who are interested in Critica’s mission and vision.

This organization is founded upon the principles of a book called Denying to the Grave, a title that, with no small irony, could describe so much of white America’s response to racism. The book offers tools to help people use science in their best interest and for society’s greater good. Our skill set is designed to help us overcome specious, unconscious beliefs such as implicit racial bias. Our moral and scientific lives require that we use those tools now.

Is Information Overload Hurting Mental Health?

Endless access to information during COVID-19 might be making matters worse.

Not surprisingly, given the severe nature of the threat of COVID-19 and the economic downturn we are facing, experts are now predicting that the next “epidemic” will be one of mental illness and suicide. There are urgent calls to deal with the potential uptick in suicides. Nearly half (45%) of Americans say that their mental health has already been affected by the crisis. And while we don’t have a lot of information about the effects of something like the nationwide physical distancing measures we have seen, we do know from prior research that large-scale disasters and emergencies have led to high rates of mental health problems across the population. 

It seems pretty clear, and understandable, that this crisis may result in increased rates of mental health problems. But where exactly are these problems arising from? Certainly the economy, most likely the isolation of physical distancing, and general anxiety about a highly contagious pathogen are factors. Practical barriers, such as kids being home from school and thus creating more stressful work situations, are also part of it. But there is another factor that people aren’t addressing much: the notion of “information overload” and how it affects mental health. We can safely assume that people are experiencing a sense of information overload right now, given how much new information is constantly coming out about COVID-19 and how high-stakes most people consider this information to be. 



So what are the mental health consequences of information overload? It turns out that they may be quite significant, going beyond just a few moments of feeling overwhelmed. Information overload can lead to real anxiety, feelings of being overwhelmed and powerless, and mental fatigue. It can also lead to cognitive issues such as difficulty making decisions or making hasty (often bad) decisions. Hasty decision-making comes about because the brain is literally exhausted from trying to process all the information. This is why some researchers prefer the term “cognitive overload” to “information overload.” Processing large amounts of information is often done while multitasking – looking at social media while working, for example. Multitasking in particular has been shown to increase the release of the stress hormone cortisol as well as the hormone adrenaline, both of which are associated with the “fight-or-flight” response. 

What can be done to improve mental health in the midst of all this information (or cognitive) overload? We can certainly expect to continue to see a high volume of new information coming out daily, so it’s not realistic to believe we can have any impact on that side of things. However, there are some simple ways individuals can work on limiting their access to the constant deluge of new information. 

  • Schedule times to look at the news: No matter where you get your news from, it’s not a good idea to have a constant stream of it available throughout the day. This approach is most likely to lead to a lot of multitasking, which can increase your anxiety levels considerably. Schedule a time of day and a certain amount of time to look at the news. Set a timer to hold yourself accountable and ensure that you don’t get carried away. Try to choose a time of day that isn’t inherently anxiety-provoking for you for some other reason. If you’re feeling especially anxious on any given day, skip it and don’t look at the news at all. 
  • Turn off notifications on your phone: A lot of people have “push” notifications on their phones that alert them to new headlines. These are almost always a bad idea. Push services are especially associated with information overload because they give us information when we’re not even looking, causing us to multitask, which also increases anxiety. Stay focused on what you’re doing and turn the notifications off. 
  • Be careful about checking social media: Recognize that social media is a news source, whether you like it or not. You may think you’re just doing something social and catching up on your friends’ lives but nowadays because so many people publish news articles on social media, you can expect to be confronted with a number of headlines. You should be intentional about checking social media the same way you are about checking the general news headlines. Pick a time of day to do it and set a timer. 
  • Don’t look at your phone before bed: This advice is true all the time but it’s especially important now. The light on your phone can keep you awake if you look at it too close to bedtime and so can reading anxiety-provoking news items or updates from friends and family. Try to avoid your phone for at least 30 minutes prior to going to sleep (and preferably longer). If your friends and family need to get in touch with you, they will call you.

It’s always important to take care of yourself but pay special attention to your level of mental fatigue right now. If you’re feeling exhausted and fuzzy, it may be time to try as hard as you can to take a break from the deluge of new information we’re all facing each day. 

The Cochrane Controversy

And the existential crisis for evidence-based medicine

Evidence-based medicine faces a smoldering, slow-motion crisis as every few months new headlines pop up about a controversy between Peter Gøtzsche and the Cochrane Collaboration. There are both personal and political sides to the story, with magazine stories focusing on palace drama and intrigue, but what may seem like an arcane debate is likely to have significant ramifications on how data are analyzed and what makes a scientific fact a fact.

For over 25 years, patients and experts have come to trust the Cochrane Collaboration’s evidence-based systematic reviews to provide authoritative guidance on which medical treatments work and which ones don’t. Cochrane believes so strongly in this type of analysis that its logo is literally a figure from one of its scientific reviews, which concluded that premature babies were safer if their mothers received a medication prior to birth. Its work has reinforced important treatments we now take for granted, like the finding that folic acid can prevent spina bifida. Busy doctors use the summaries routinely, if not daily, and the reviews are widely respected, having “a well‐deserved reputation of excellence,” according to John Ioannidis, an evidence-based medicine expert at Stanford. Of course, despite the best efforts of the reviewers, uncertainties remain, particularly related to the availability of high-quality evidence, but these reviews are, arguably, the closest thing medicine has to a gold standard.

The crisis has a number of angles that border on insider gossip. Briefly, Cochrane has behavior guidelines for members to ensure they do not compromise the perceived neutrality of the collaboration. Gøtzsche, a founding member but a well-known “firebrand” for his rigid views on evidence quality, bias, and transparency, has been accused of repeatedly violating those guidelines and was voted off the board. Gøtzsche and his defenders argue that his efforts are in defense of science: pushing for the highest standards of evidence and trying to ensure, through transparency, that Cochrane reviews lack bias. His methods may be unorthodox or impolite, the argument goes, but he should not be silenced. 

Juxtaposed with other internal debates, like whether Cochrane’s business model should be top-down or grassroots, the disagreement has unfortunately become personal and acrimonious, with lawsuits and factions. (See here, here, or here for more on the controversy.) With the media’s focus on individual narratives, Gøtzsche is portrayed either as a heroic rogue scientist, fighting against the subtle biases that inevitably arise when a growing proportion of our medical data come from capitalistic enterprise, or as an out-of-touch champion of a halcyon era of evidence-based medicine in which rigorously examined numbers and data could tell the whole story, regardless of context. 

These simplistic narratives of the idealist versus the more practical realists only touch on the larger philosophical debates at issue (which I’ll address in more detail below). But, most importantly, there is also a key question that touches the core of science and epistemology, the answer to which is likely to affect how science should be assessed and adjudicated: what constitutes scientific evidence, and how do you weigh it?

The crisis at Cochrane is inseparable from increasing questions about the problems with evidence-based medicine (EBM) in practice. Clearly, EBM has seen a meteoric rise. A reaction against “eminence-based medicine” – a tongue-in-cheek term for medicine practiced by experienced physicians with little evidence behind their decisions – the movement to rely primarily on evidence for clinical decision-making is not even 30 years old.

But, in what could be argued is a mid-life crisis, in 2014 prominent evangelists of EBM noted its unintended foibles and suggested reform. The debate, then, can be framed as one of diagnosing the problem and deciding what to do about it. Cochrane relies on what is widely agreed to be the highest quality evidence, the randomized controlled trial (RCT), published in peer-reviewed journals. But, some argue, those data are often biased, both in individual instances of RCTs and in the fact that most RCT data come from industry-funded sources. 

Specifically, Gøtzsche’s critics contend that, in his views about the safety of certain medications like the HPV vaccine and antidepressants, he “consistently expresses the most extreme views in the most dramatic and misleading way.” For his part, as he told Undark, Gøtzsche argues that industry interests dominate the available evidence, and that Cochrane’s reliance on it makes the organization “a servant to the industry.” With the cost of RCTs inexorably rising, Gøtzsche’s concern seems well placed, as only industry has the financial means to undertake the large, expensive trials that Cochrane uses for its analyses.  

Yet critics also argue that RCTs are stilted: their experimental conditions are so controlled as to be artificial, bearing little resemblance to actual clinical practice. And while “positive” trials can show statistical significance, they may not have much meaningful clinical relevance. That conundrum is neatly captured in Richard Lehman’s whimsical piece about whether to prescribe spironolactone for an elderly patient with many co-morbid diseases, in which the physician agonizes over complicated data from various clinical trials only to realize the patient is most concerned about quality of life rather than living as long as possible. Therefore, reformists argue, RCTs are important but must be weighed not only against other types of evidence but also against the individual circumstances and context a patient may be facing at the time of their illness. These EBM reformists now argue against the unthinking application of guidelines, checklists, or algorithms, and support individualization of care where necessary. This has given rise to a now well-known term: the tyranny of the RCT. There are unintended consequences of relying too heavily on RCT evidence, with scientists’ questioning of decades of dental dogma about flossing a prominent recent example.

As one BMJ article by Jefferson and Jørgensen points out, the process of publishing RCTs can lead to “unfathomable bias” simply because of the massive distillation required to turn the thousands of pages of details needed to run an RCT into a tight 10-page journal article. Such inevitable compressions and distortions lead the authors to argue that Cochrane may need to ignore such journal articles: “By the law of Garbage In Garbage Out, whatever we produce in our reviews will be systematically assembled and synthesised garbage with a nice Cochrane logo on it,” they wrote.

Jefferson and Jørgensen argue we should index all trial information to avoid the bias-through-distillation problem. This labor-intensive project would allow more people to peer into that specialized circle of the clinical trial so we can view them all together in context. This is an important step, but does not seem to get at a major issue of concern shared by them and Gøtzsche: the preponderance of evidence comes from industry-funded studies. 

And that gets to the crux of the debate, but one that scientists seem to be talking around. If work done by the Cochrane Collaboration either to index or review high-quality trials is a public good and worthy of funding, then the data to produce those reviews should also be considered a public good. Having industry take part in clinical trials is not an issue as long as there is a significant proportion of data released with minimal concerns about conflict of interest. Yet, as the proportion of trial data coming from industry has increased, there hasn’t been a concomitant outcry for the production of more publicly funded data. Decades ago, government support for clinical trials was instrumental in the famous cases of polio and influenza vaccines. Rather than super-basic research, should governments fund more applicable science, as suggested in this New Atlantis article on Saving Science?

Moreover, despite Gøtzsche’s efforts to push Cochrane away from bias, the question is not the foolhardy errand to try to do away with all bias, but, after minimizing it to the greatest extent possible, how to be transparent and reflective about whatever biases surface in the data, as Greenhalgh and colleagues note.

RCTs and EBM have revolutionized medicine. But how scientists move past these known flaws will determine how we weigh different types of evidence. It’s unfortunate that such a weighty issue is getting lost in lurid headlines. 


Using This Time to Focus on Your Health in Healthy Ways

Critica COO Catherine DiDesidero posted the note below on her Instagram feed and we felt that everyone should see it:

I’m pretty convinced that the obesity epidemic in America is a large contributor to the vast and rapid spread of Covid-19. This article suggests I’m correct. Indeed, obesity has been shown to be a risk factor for serious complications from infection with the novel coronavirus that causes COVID-19. Obesity is one of those blanket terms that can be applied to various aspects of health.

Another epidemic America was suffering before all of this was the one where we back burner our health and general well-being in favor of other responsibilities – also seconded by this article in relation to blood pressure and hypertension (one doesn’t need to be classified as “obese” to have underlying conditions). One thing that I think this pause has gifted is time– to start creating new and healthy habits. And time to realize that everything else is secondary…HEALTH IS MOST IMPORTANT.

You can’t take care of anyone else until you care for yourself first. Providing for a family is much easier to do from a healthy state of mind and body. So get started now. Learn quick and healthy recipes to integrate into your diet. Find a class or trainer and build your workout into your routine. We can control a lot of these risk factors with a more proactive approach. Taking care of yourself is the best protection you can hope for. Stay home. Stay safe. Be healthy.


The evolution of a connected world

By Peter McKenzie-Brown

Editor’s Note: How did we get to this point in which cell phones are ubiquitous and dominate our lives? Here, Peter McKenzie-Brown reminds us that there are some downsides to constant cell phone use and then reviews for us the fascinating history of how we have become progressively “wired.”

The Canadian city I live in, Calgary, got top marks in the last report from The Economist Intelligence Unit (EIU), which ranked it as the most liveable city in North America, and number five in the world – after Vienna, Austria; Sydney and Melbourne, Australia; and Osaka, Japan. Two other Canadian cities, Vancouver and Toronto, were also in the top ten.

The EIU index ranks the world’s 140 largest cities on 30 factors grouped into five categories: political and economic stability, health care, culture and environment, education, and infrastructure. In the most recent report, Vienna topped the list with a near-perfect 99.1 out of 100, putting it just ahead of Melbourne, Sydney, and Osaka. Then came Calgary. According to the EIU report, “higher crime rates and ropey infrastructure pull some bigger cities like London, New York and Paris down the league table, despite their cultural and culinary attractions.”

Yet as I walk the streets of this city, or get on public transit, I’m always amazed to observe that the great majority of people on sidewalks, on trains and buses, in restaurants, and even in parks seem to spend endless hours on the communication devices that dominate their lives. Personally, I can’t imagine spending a walk in my favourite park staring at an iPhone – in my case, a device I parked some years ago. The more I watched this behaviour, the odder it seemed. I quickly found a great deal of online research expressing worry about our collective online obsession.

Sure, it’s easy to pass a cell phone addiction off as something that comes with the technological advances of the last 20 years; however, with cell phones come real risks. For example, a study at Temple University’s College of Health Professions and Social Work compared the volume of text messages college students sent with the amount of neck and shoulder pain they experienced. The result was no surprise: The more you text, the more pain you are likely to experience.

There’s also the matter of dangerous driving. America’s Insurance Institute for Highway Safety not surprisingly found that drivers who use cell phones while behind the wheel were four times more likely to have an accident than those who did not. What’s more, using a hands-free device instead of a hand-held phone doesn’t improve safety.

 Finally, there are sleep disturbances. Using a cell phone before bed can keep you awake, according to a study conducted by Wayne State University School of Medicine in Detroit and researchers in Sweden. Conducted over an 18-month period, the study involved 35 men and 36 women between the ages of 18 and 45. Their conclusion? The radiation emitted by cell phones disrupts sleep patterns.

I’ve always been interested in how things develop, so I started investigating the origins of today’s intensely interconnected world – one that hosts both promise and risk. The balance of this paper shows how that world took nearly two centuries to evolve.

How did this begin?

The world began to get wired with the invention (in present-day Germany in the 1840s) of the electrical telegraph. These point-to-point systems used coded pulses of electric current to transmit text messages over ever-longer distances.

Albeit brief and inconclusive, the first scientific attempt to illustrate the speed and power of electricity dates back to a 1746 experiment by Jean-Antoine Nollet. A French physicist of the Enlightenment, Nollet had also been a deacon in the Catholic church and was thus able to call on former colleagues to help him with his work.

To test the speed of electrical transmission, Nollet gathered hundreds of lengths of iron wire, roughly two hundred monks, and an array of Leyden jars. These primitive devices, which stored static electricity, were the discovery of a Dutch physicist at the University of Leiden in 1746 – hence the name. (Independently, German inventor Ewald Georg von Kleist had developed a similar device the year before.) The French monks distributed themselves in a circle a mile or so in circumference, each holding a length of wire in each hand to link himself to compatriots on his right and left. Without a word of warning, Nollet discharged the contents of the batteries into the wire, sending an electric shock through the chain of monks.

Nollet was unable to measure the actual speed of electricity with the experiment, since all the monks reacted to the electric shock simultaneously. His notes recorded that the transmission speed of electricity was extremely high: the current appeared to traverse the circle of monks almost instantaneously. To entertain the king of France, he later conducted the same “experiment” on 180 French soldiers.


The Nollet experiment may have planted the seed for the concept of telegraphy – the transmission of data over long lengths of wire using only electrical impulses. However, it has nothing to do with the origin of the word “telegraph,” which originally did not involve wires at all. The term originated with Frenchman Claude Chappe, but the system he developed was mechanical rather than electrical. His invention was a system of semaphores, with operators signaling by means of movable arms from tower to tower. Napoléon used the system to coordinate his empire and army, and other European states copied it.

Today, the word telegraph suggests dots and dashes transmitted by Morse code over long-distance cables, ultimately yielding telegrams. But the word originally referred to Chappe’s semaphore system and involved no electricity at all. Movable arms sat atop the towers, and operators could use telescopes to read these mechanical messages from other towers, so the towers could be quite a distance apart. The system could transmit messages quickly and efficiently, and the French government built a national network. The French word télégraphe was coined from roots meaning télé (at a distance) and graphe (writing) – thus, “far writer.”

Before moving on to electrical message transmission, it is worth noting that the Leyden jar did contribute significantly to serious science. Around the time America was gaining independence, American rebel and diplomat Benjamin Franklin used one to show that lightning is an electrical discharge.

Franklin called a series of linked Leyden jars, which can store greater electric charges, a “battery.” Unlike modern-day batteries, no matter how many of these devices were linked together, they released all their energy in a single burst.

That said, this early electrical storage system did not entirely end up on history’s junk heap. In miniaturized form, a descendent of the Leyden jar is hard at work in most of today’s electronic products. Today, it’s called a capacitor. Charged by an electrical current, these devices still release their charge all at once. Their instant charge/discharge operates the flash attachments on cameras, for example, and tuning dials on radios. They also control loudspeakers, making music audible and measured, rather than an incomprehensible burst of sound.

Electrical Telegraphy

In the 1790s, at the tail end of the Enlightenment, an argument about electricity between two Italian scientists—Luigi Galvani and Alessandro Volta—led to Volta inventing the first true battery. For the first time, electricity could be put to continuous work. This led to experiments using steady electrical currents for message transmission.

As we have seen, the Napoleonic empire desperately needed a new, high-speed communications system – preferably one that used wires and could instantly reach places that were beyond line of sight. These systems did not develop until Volta’s battery became widely known, however – well after the war was over.

Inventors came up with many schemes for encoding information electrically. As is often the case, the most successful approach was the simplest. The telegraph code still bears the last name of its American inventor, Samuel Morse, who developed the system in 1838. It required only a single wire, which made it simpler and less expensive than its rivals. In addition, Morse’s approach reduced the complexity of the technology by putting it – literally – into the hands of the operator, who had to learn to both send and receive Morse code.

In the beginning, there was a widespread view that the dot-and-dash system would be too difficult to learn, but it turned out to be a bit like learning to play a musical instrument: not everybody mastered it, but some became quite skilled. Once they had, they could quickly and easily send and receive messages.
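The scheme itself is simple enough to sketch in a few lines: each letter maps to a short sequence of dots and dashes, keyed one after another down the single wire. Here is a minimal illustration in Python (added purely for illustration; the letter table is the standard International Morse alphabet, and the spacing convention used here is just one common way of writing it out):

```python
# International Morse code table for letters (digits and punctuation omitted).
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def encode(text: str) -> str:
    """Encode a message: letters separated by spaces, words by ' / '."""
    return " / ".join(
        " ".join(MORSE[ch] for ch in word if ch in MORSE)
        for word in text.upper().split()
    )

print(encode("SOS"))  # ... --- ...
```

The operator’s real skill lay in keying and reading these pulse patterns by ear at speed, which is why learning the code resembled learning an instrument.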

By the second half of the nineteenth century, nations across the world had created commercial telegraph networks, with local telegraph offices in most cities and towns. These systems enabled people to send telegrams to anyone, for a fee. Although an 1854 attempt failed, telegraph companies were ultimately successful in laying submarine telegraph cables, which created a system of rapid communication between continents. By 1865, the Morse system was becoming the standard for domestic and international communications in Europe, much of the Americas, and distant parts of the European empires.

These networks permitted people and businesses to transmit messages across continents and oceans almost instantly, with widespread social and economic impacts. Telegraphs are still in use, although teletype networks have been replacing them for a hundred years.

Canada’s Telephone?

Did Canada really invent the telephone? We Canucks think so, and the first long-distance tests certainly took place on Canadian soil. That said, inventor Alexander Graham Bell – a Scot who had migrated to Canada with his family as a child – did his work in Boston, became an American citizen and was one of the founders of media giant American Telephone and Telegraph, now known as AT&T.

It was in Boston that the telephone – it did not yet have a name – first showed signs of life. On March 10, 1876, Bell used the instrument in Boston to call his colleague Thomas Watson, who was in another room and out of earshot. He famously said, “Mr. Watson, come here – I want to see you,” and Watson soon appeared at his side.

Continuing his experiments during a visit to the Bell homestead in Brantford, Bell brought home a working model of the device. On August 3, 1876, from the telegraph office in Brantford, Ontario, Bell sent a tentative telegram to the village of Mount Pleasant six kilometres distant, indicating that he was ready. He then made a telephone call via telegraph wires and heard faint voices replying.

The following night, he amazed guests as well as his family with a call from his parents’ home to the office of the Dominion Telegraph Company in Brantford along an improvised wire strung up along telegraph lines and fences and laid through a tunnel. This time, guests at the household distinctly heard people in Brantford reading and singing. The third test, on August 10, 1876, was made via the telegraph line between Brantford and Paris, a town in Ontario thirteen kilometres distant. Often called the world’s first long-distance call, this test demonstrated that the telephone could work over long distances, and Canada now recognizes the Bell homestead as a national historic site.

Commercialization of the telephone soon began. In the earliest days, instruments were paired for private use between two locations. Users who wanted to communicate with persons at multiple locations had as many telephones as necessary for the purpose.

Later telephones took advantage of the exchange principle which developed for telegraph networks. Each telephone was wired to a telephone exchange established for a town or area. For communications outside this exchange area, trunks were installed between exchanges. Networks were designed in a hierarchical manner until they spanned cities, countries, continents and oceans.

Going Wireless

These developments were soon superseded by other technologies that transformed human connectivity. Known to our grandparents as “the wireless,” the radio carried signals by modulating electromagnetic waves.

In 1895, Italian inventor Guglielmo Marconi became the first person to “cut the cord” of electronic communications, sending wireless signals across the Italian countryside. In 1900 he patented this invention, calling it tuned, or syntonic, telegraphy. We call it the radio, and it quickly broke new ground.

Italian-born Marconi studied physics and became interested in the transmission of radio waves after learning of the experiments of the German physicist Heinrich Hertz. He began his own experiments in Bologna in 1894 and soon succeeded in transmitting a radio signal which he could receive three kilometres away.

Receiving little encouragement for his experiments in Italy, he went to England two years later. There he formed a wireless telegraph company and was soon sending transmissions over distances of 15 kilometres and more. In 1899, he transmitted a signal across the English Channel. That year, he also equipped two U.S. ships to report to New York newspapers on the progress of the America’s Cup yacht race. That successful endeavour aroused widespread interest in Marconi and his wireless company.

To put the wireless in perspective, electrical telegraphy had sped up the spread of information from a few days or weeks or months to a few hours. Reporters could receive the news, write it up, send it to print in a newspaper, and people would read about it, perhaps, half a day later. As the radio developed, numerous people could hear news broadcasts at the same time. As radio networks developed their programming, it became the most powerful medium yet invented for spreading information and shaping public opinion.

Marconi’s greatest achievement came on December 12, 1901, when one of his wireless systems in Cornwall, England, successfully transmitted a message (simply the Morse-code signal for the letter “s”) across the Atlantic to St. John’s, Newfoundland – then a British colony, today part of a Canadian province. That transatlantic transmission won him worldwide fame.

Ironically, detractors of the project had been correct when they declared that radio waves would not follow the curvature of the earth: Marconi’s transatlantic radio signal had indeed been headed into space but bounced off the ionosphere and back toward Earth. Much remained to be learned about the behaviour of radio waves and the role of the atmosphere in radio transmissions, and Marconi played a leading role in radio development and innovations for three more decades.

Experiments in television development began in the 1920s, but the Great Depression and World War II slowed progress. Once the war was over, television ownership exploded.

From Miniaturization to Wi-Fi

In the post-war world, Japan led the miniaturization of electronics, and in the mid-1950s created tiny wireless radios small enough to fit in your hand. Bearing the word “transistorized” on their cases, they were the first electronic devices in North America to also bear a Sony logo.

By an odd series of coincidences, these devices were first exported to Canada in the summer of 1955, and there they created quite a stir. They amazed their new owners, who were accustomed to furniture-sized radios plugged into an outlet. North America learned about them from the excitement of those lucky enough to own one.

From those days on, miniaturization has been the trend for communications devices – a trend that began to accelerate in the 1990s, with the rapid growth of the World Wide Web. Today our iPods and other devices fit easily into our pockets, and they make functions available that once required a telephone, a camera, a movie camera, a television, paper calendars, accounting spreadsheets, books, publishing houses – the list goes on, and on, and on. The social media that are part of this panoply are relatively new phenomena; to have a post go viral is many a player’s ultimate dream.

A family of wireless networking technologies commonly used for local area networking of devices and Internet access, Wi‑Fi is a trademark of the non-profit Wi-Fi Alliance, which restricts the use of the term to products that meet its technical protocol.

From the beginning, the primary goal of this organization was that Wi-Fi devices work across all vendors and, as new devices become available, be “backward compatible” in the sense that they would continue to work with older devices – including the original devices made according to this protocol. In this way, the alliance responded to growing demand for Wi-Fi with new technologies and programs that increase connectivity, enhance roaming, and – the organization’s wording – “improve the user experience.” Members of the Wi-Fi Alliance now produce desktop and laptop computers, smartphones and tablets, smart TVs, printers, digital audio players, automobile scanners, automobiles, monitors, drones, facial recognition cameras and countless other devices that would have been largely unimaginable at the beginning of this millennium.

In the years since the Enlightenment, electrical devices from telegraphy through radio and radar have played key roles in every aspect of our lives, both during times of peace and war. As I write these words, Wi-Fi devices are moving into a fifth generation of development. The lesson from that reality, perhaps, is that electrical devices are playing ever-more-subtle roles in an electrifyingly complex new world.

A Pandemic On Top of an Epidemic

Is There a Loneliness Epidemic and Will COVID-19 Make it Worse?

Former U.S. Surgeon General Vivek H. Murthy declared a loneliness epidemic in 2017, triggering a plethora of headlines. Now, many are asking whether steps like shelter-at-home and social distancing that are necessary to control the COVID-19 pandemic will exacerbate that loneliness epidemic and increase physical and behavioral health disorders and mortality.

         In an interview with Boston public radio station WBUR earlier this year, Murthy noted research using “rigorous scales” has found that “more than 20%…of the adult population in America admits to struggling with loneliness.” According to the federal Health Resources and Services Administration (HRSA), being lonely increases the chance of dying by 45%, making it as dangerous as obesity and cigarette smoking. Studies link loneliness and isolation to problems with the immune system and to increased risks for heart disease, stroke, cancer, and depression. Although living through a pandemic is not officially listed as a qualifying trauma in the official psychiatric diagnostic manual (DSM-5), there is evidence that social isolation and quarantine can provoke many symptoms characteristic of posttraumatic stress disorder (PTSD).

         Neuroscientists have weighed in on possible mechanisms for the adverse effects of feeling chronically lonely on health. A study in mice showed that social isolation led to decreases in the size of neurons in the two brain regions studied, increases in the stress hormone cortisol, and decreases in a brain growth factor called BDNF. In humans, a study using functional magnetic brain imaging (fMRI) showed much less activity in a brain region necessary for experiencing reward, the ventral striatum, in lonely people than in non-lonely people. In a not yet peer-reviewed report, scientists compared the effects of an acute social isolation challenge (10 hours of sitting alone in a nearly empty room) to food deprivation and found that the same region of the brain, the substantia nigra, was activated by both conditions. The substantia nigra sends neural projections to the ventral striatum to trigger the release of the neurotransmitter dopamine, which is associated with both craving and reward. Thus, craving food and craving social contact appear to cause similar brain activity, and social isolation leaves the brain in a state of deprivation reminiscent of extreme hunger.

Social isolation and loneliness may lead to decreased activity in the brain’s reward pathway that goes from the substantia nigra (shown here in the middle of the brain slice) to the ventral striatum (shown here just above the substantia nigra) (source: Shutterstock).

Loneliness May Not Be of Epidemic Proportions

         Not everyone agrees that there is in fact a loneliness epidemic, however. A 2019 article looked at the data about loneliness and concluded that overall they don’t support the notion of an ongoing loneliness epidemic. “There is an epidemic of headlines that claim we are experiencing a ‘loneliness epidemic,’” writes Esteban Ortiz-Ospina, “but there is no empirical support for the fact that loneliness is increasing, let alone spreading at epidemic rates.” Although many authors point to an increase in the last century of the number of Americans who live alone as evidence that loneliness must be increasing, it is also important not to conflate social isolation with loneliness. For example, many people who live alone do not report feeling lonely.

         Regardless of whether loneliness is an epidemic or not, it is clear that more people than perhaps ever in the last century are undergoing enforced social isolation, forced to stay away from school, work, family, and friends. And experts do agree that even if loneliness is not as widespread in the U.S. as the dramatic headlines might suggest, it is a risk factor for multiple poor health outcomes. Hence, the risk that social isolation and loneliness will produce health problems even beyond the ultimate resolution of the COVID-19 epidemic is a reasonable concern. From a behavioral health viewpoint, a recent paper in Lancet Psychiatry states that “A major adverse consequence of the COVID-19 pandemic is likely to be increased social isolation and loneliness…which are strongly associated with anxiety, depression, self-harm and suicide attempts across the lifespan.” A review of previous quarantine situations, like those during outbreaks of H1N1 influenza, SARS, and Ebola, also raised the possibility of increased alcohol abuse and a particularly adverse effect of loneliness on the elderly.

Although neurological signs and symptoms have been noted to be part of the COVID-19 illness for some patients, the paper notes that right now we do not know if and how the novel coronavirus that causes COVID-19 (SARS-CoV-2) gets into the brain. But “post-infectious fatigue and depressive syndromes have been associated with other epidemics and it seems possible that the same will be true of the COVID-19 pandemic.” Indeed, a survey has shown that 84% of Americans believe that if social distancing continues longer than they expect, it will adversely affect their mental health. Most ominous in this regard are studies linking social isolation and loneliness to an increased risk for suicide, leading three authors to call the link between suicide and COVID-19 a “perfect storm” in an article last month in JAMA Psychiatry.

Thus, it is clear that attention must be given to the development of loneliness that will affect an unknown number of people subjected to forced isolation during the pandemic. For these people, the effects of being isolated on their health and well-being may well last for months or even years after the pandemic is officially declared resolved.

How To Combat Loneliness

         Multiple authors have already weighed in on ways to mitigate the loneliness effects of social distancing. For many of these interventions, however, there are limited data about effectiveness. For example, we do not know the extent to which socializing via video platforms like Zoom with friends and family works to relieve some of the adverse effects of loneliness. One study from the University of Pennsylvania actually showed that cutting down on social media reduces loneliness, but that was done before technology became our only way of staying connected. Elderly people who are also lonely may not be comfortable with video conferencing platforms.

         In a Perspective essay in the New England Journal of Medicine, Betty Pfefferbaum and Carol S. North remind us that “After disasters, most people are resilient and do not succumb to psychopathology. Indeed, some people find new strengths.” But they go on to warn that a variety of negative emotions are inevitable the longer social distancing rules remain in effect. “In the current pandemic,” they write, “home confinement of large swaths of the population for indefinite periods, differences among the stay-at-home orders issued by various jurisdictions, and conflicting messages from government and public health authorities will most likely intensify distress.” While we all are likely to suffer some emotional distress during and even after the pandemic, a subset of us will feel profoundly lonely and prone to the physical and emotional disorders known to be associated with loneliness.

It is not known what interventions may help decrease loneliness while practicing social isolation but using the internet to establish social contacts could be helpful (source: Shutterstock).

         What can we do? People vary in how they experience loneliness and there are many interventions that have been tried to ameliorate loneliness. This makes it hard to make firm, evidence-based statements about what might work, especially when we are facing so unique a situation as the current pandemic. From studies that are available we would recommend the following as possibly effective interventions:

1. Attempt to schedule a regular one-on-one meeting with an isolated individual by video conference or telephone. This can be once per week or more frequent but should be on a predefined schedule.

2. Use the internet as much as possible to establish social contacts, but limit its use for acquiring news about the pandemic.

3. Encourage group activities by video conference. There are innumerable opportunities now to join in groups from a diverse range of interests and the isolated person can be encouraged to join one or more activity and discussion groups online, even if he or she only listens.

4. Consider a pet; animal companionship may help ease loneliness.

We cannot vouch that these will ultimately be proven effective if and when high-quality studies are done, but given the state of the evidence they seem among the most promising. We need to make sure people understand that while on the one hand emotional distress is expected at this time and shared with a huge group of people, it is still painful, and everyone’s distress has unique elements. Knowing that the pain is a shared phenomenon may help but dismissing it as “just what everybody is experiencing” will not. When people use telemedicine for any reason, a clinician should inquire about mental health issues and distress and query whether the individual has social support. Telemental health is also now increasingly available for people who need more intense and professional interventions.

   We conclude that there is reason to doubt that loneliness is a true “epidemic” at this time, but it is likely to emerge as a significant comorbidity from this pandemic and pose all kinds of long-term threats to health and well-being. At the very least, we must be certain that we are identifying people who are suffering from loneliness and do our best to relieve their discomfort and establish some social contact for them.

On Climate-change Denial

By Peter McKenzie-Brown

“Let not men say ‘These are their reasons; they are natural;’ for, I believe, they are portentous things unto the climate that they point upon.” – William Shakespeare, Julius Caesar

Some months ago, an American friend sent me a link to an article on climate change from the Washington Post. The compelling story described the impact of a changing climate on one island among Les Îles-de-la-Madeleine – a chain of islands off the shores of Québec, our primarily French-speaking province.

“The more than 12,000 residents of this windswept Canadian archipelago are facing a growing number of gut-wrenching choices, as extreme climate change transforms the land and water around them,” Brady Dennis wrote. “Season after season, storm after storm, it is becoming clearer that the sea, which has always sustained these islands, is now their greatest threat.” He added that, like much of the rest of Canada, the Magdalen Islands, as they are known in English, have warmed 2.3° Celsius since the late 19th century – twice the global average.

The Arctic is an important and fragile ecosystem, he continued, and it’s warming at a faster rate than much of the rest of the world. Scientists are already seeing dramatic reductions in Arctic sea ice cover, particularly in the summertime. This shrinking sea ice disrupts normal ocean circulation and creates changes in climate and weather around the globe.

I live almost 5,000 km away from the Magdalen Islands, in the Texas-sized province of Alberta. I appreciated the item and posted it on Facebook and on Microsoft’s LinkedIn, which is supposedly a social media site aimed at professionals. There were likes on Facebook, but I could never have imagined the outcome on the other social media platform.

An instructor at Canada’s online Athabasca University took the first stab at me. “BS,” he opined. I’ve forgotten his name, but his bio boasted a master’s degree. “Surely that’s academic short-hand for ‘beautiful story,’” I replied.

Then the trolls began to swarm. Over the course of several hours, perhaps 150 chimed in. Most of them essentially said I was full of it. One invited me to visit his website, which “proved” that climate change is nonsense. What he forgot to mention was that his sorry blog asked visitors for donations.

Another person observed that every island undergoes erosion. How stupid, he implied, to suggest that the problem had to do with the loss of the sea ice that used to encase the islands most winters, shielding them from the Atlantic’s (higher) currents. The “Wa-Po” article is dumb, someone else added. The rest – I suspect, without proof, that they were this instructor’s students – mostly wrote mindless things, including jeering at my self-descriptor on LinkedIn as a “writer, author and historian.” After a while I went to bed, but in the morning, the trolls were still at work.

I went online to find out how to stop this nonsense, and remove its inanity from the site and my memory. Removing it from the platform was easy. It didn’t so easily leave my memory.

Puzzling it out

But it was so odd. The polite behaviour of Canadians is well-known. Even more so, however, is the science of climate change, which goes back two centuries. As the industrial revolution charged on, in the early 19th century, science began to speculate on ice ages and other natural changes in paleoclimate. Scientists of those days also began speculating on the possibility of a natural greenhouse effect.

For non-Canadian readers, it is worth putting the western province I live in, Alberta, into context. About the size of Texas, but with one-sixth the population, this province is a big natural gas producer and an important supplier to North American markets of various grades of crude oil. More importantly, it also hosts the Athabasca oil sands – the largest known reservoir of crude bitumen (an ultra-heavy oil) on the planet. In addition, we have huge bitumen deposits at Peace River and Cold Lake – a field that borders the more easterly province of Saskatchewan. Given these realities, you could say that these two oil-producing provinces have skin in the petroleum game. After all, since 1930 the western provinces have been the owners of most of the country’s vast natural resources, and since serious volumes of oil production began in 1947, their governments have taken in substantial royalties. In good times, when oil prices are high, provincial coffers are full. This has become an addiction.

“Climate change is a serious threat to development everywhere,” said Rajendra Pachauri, who served as chair of a 2007 United Nations conference on the topic in the Pacific island paradise of Bali. He said – I take this from a magazine article I wrote at the time – “today, the time for doubt has passed. (We have) unequivocally affirmed the warming of our climate system and linked it directly to human activity.”

To make sure there was no doubting his message, he added that “slowing or even reversing the existing trend of global warming is the defining challenge of our age.” According to Pachauri, global warming would lead to melting ice caps and rising sea levels, the drowning of some island nations, the extinction of species, desertification of tropical forests, and more frequent and deadlier storms. The world’s media soon became focused as never before on greenhouse gases (GHG) – the emissions (mainly carbon dioxide and methane) causing Earth to warm and its climates to change.

The occasion was a United Nations conference meant to negotiate national targets for reducing greenhouse gases. The US, Canada, and Japan became villains in the piece as they argued that the targets of the 1997 Kyoto Protocol were unrealistic. To live up to that agreement would have required Canada, for example, to cut its GHG emissions by perhaps 50 per cent during the next twelve years.

The three villains complained that Kyoto required nothing from emerging economies like China and India, which were big polluters even in those days. They and others also observed that, at the time of the original Kyoto discussions, science had little understanding of the impact on global warming of tropical deforestation. Deforestation amounts to destruction of some of the vital CO2 reservoirs often called “carbon sinks.” Factor in the loss of sinks from rainforest destruction, and Brazil and Indonesia become the world’s third- and fourth-largest GHG emitters.

At the beginning of these comments, I quoted Shakespeare’s famous passage on climate change from Julius Caesar. There are no trolls in that great historical play, but before Caesar’s assassination, Casca spoke of signs he had seen that gave him pause. “All the sway of earth shakes like a thing unfirm,” he reported. “I have seen tempests, when the scolding winds have rived the knotty oaks, and I have seen the ambitious ocean swell and rage and foam, to be exalted with the threatening clouds.”

He continued, “but never till to-night, never till now, did I go through a tempest dropping fire. Either there is a civil strife in heaven, or else the world, too saucy with the gods, incenses them to send destruction.” If that weren’t enough, he saw a common slave hold “up his left hand, which did flame and burn like twenty torches join’d, and yet his hand, not sensible of fire, remain’d unscorch’d.”

For another minute or so, he continues in this vein, describing unworldly horrors he has seen in the streets of Rome. “They are portentous things unto the climate that they point upon,” he concludes. Today, more than ever: think of wildfires and record flooding in many parts of the world and, according to many reports, unprecedented heat.

Don’t End Social Distancing Yet

Theories That We Can End the Coronavirus Pandemic by Reopening Things Now Lack Scientific Support

As the coronavirus pandemic continues in the U.S. and around the world, it is natural that people are becoming increasingly weary and frustrated with stay-at-home and social distancing orders. We are also all fearful about the economic repercussions of the COVID-19 outbreak, which affect the most vulnerable people with the lowest incomes among us. We yearn for the time when we can finally go back to work and to socializing in person with friends and family.

         These understandable emotions make some of us prone to believe a theory that everyone but the elderly and chronically ill should return to work and school right away. According to this notion, isolating the elderly and those with chronic underlying medical illnesses, who are the two groups most likely to develop serious complications from coronavirus infection, while allowing everyone else to get infected will ultimately lead to the development of widespread (also called “herd”) immunity to the virus and it will simply peter out.

         We noted with alarm, then, a widely circulated article titled “Epidemiologist: Coronavirus could be ‘exterminated’ if lockdowns lifted,” which we first saw in an online publication called WND News, a politically conservative news source. The piece, which was picked up by other online media, was written by Knut Wittkowski, Ph.D., who is described as “former head of the Department of Biostatistics, Epidemiology and Research Design at the Rockefeller University in New York City.” He asserts that our current efforts to “flatten the curve” of new coronavirus infections actually widen it and that “the only thing that stops respiratory diseases is herd immunity.” Given Wittkowski’s credentials, his ideas may be harder for some to recognize as faulty.

         Most urgent is the concern that people will read what he says, believe it, and relax social distancing efforts. The result of prematurely relaxing those efforts, scientists tell us, would be an overwhelmed healthcare system and even more deaths than currently predicted.

         We will focus on two questionable aspects of Wittkowski’s proposal: 1.) that we could protect the elderly and medically vulnerable by isolating only them and not the rest of the population and 2.) that herd immunity will develop in a matter of weeks if the virus is allowed to run rampant through the 80% of the population believed to be less vulnerable to serious complications and death.

Isolating Only the Elderly Unlikely to Work

         It is true that except for those with underlying chronic illnesses like diabetes and asthma, most children who get infected with the coronavirus experience either no or mild symptoms. But it is not true that only elderly people can get very sick from coronavirus infection. Adults of all ages, especially those with conditions like diabetes, hypertension, and chronic lung diseases and those who are undergoing chemotherapy for cancer or taking steroids on a long-term basis, can get very sick and die from coronavirus infection. Who will get serious complications from the coronavirus depends on multiple factors and is hard to predict. In some cases, although less common, even young and healthy adults have died.

When children go to school, they risk not only passing the virus among themselves but also to teachers, school administrators, school bus drivers, janitorial and kitchen staff, and so forth. These adults can then pass the virus around among themselves and inevitably to elderly people with whom they come into contact. Instead of flattening the curve and reducing the number of new infections and deaths, doing this would, according to our best mathematical models, prolong the pandemic, cause more people to get infected, and result in more deaths.

         Moreover, flattening the curve is essential to preventing our already strained healthcare system from becoming completely overwhelmed. Critica’s Chief Medical Officer, David Scales, reports that the New York City hospital at which he works had to break down walls to create space for additional ICU beds to accommodate very sick COVID-19 patients. And, he tells us, many of those patients are less than 65 years old. Doctors, nurses, and other healthcare professionals are already stretched to the maximum; allowing the virus to run rampant throughout the population, as would happen under Wittkowski’s proposal, would only make this already intolerable situation worse.
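The effect the models describe can be sketched with a minimal SIR (susceptible-infected-recovered) simulation. The code below is purely illustrative: the population size, contact rates, and recovery rate are hypothetical values chosen only to show the shape of the effect, not estimates fitted to COVID-19 data.

```python
# Illustrative SIR model: a lower contact rate "flattens the curve."
# All parameter values are hypothetical, for demonstration only.

def sir_peak_infected(beta, gamma=0.1, days=400, n=1_000_000, i0=10):
    """Run a simple discrete-time SIR simulation; return the peak number infected."""
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

# Unmitigated spread (higher contact rate) vs. social distancing (lower rate):
unmitigated = sir_peak_infected(beta=0.4)
distanced = sir_peak_infected(beta=0.15)
print(f"peak infected, unmitigated: {unmitigated:,.0f}")
print(f"peak infected, distanced:   {distanced:,.0f}")
```

Even in this toy model, cutting the daily contact rate sharply lowers the peak number of simultaneous infections, which is exactly the load that determines whether ICU capacity is exceeded.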

Herd Immunity Won’t Occur in Just a Few Weeks

         What about his suggestion that if we just allow people considered less vulnerable to serious complications from COVID-19 to become infected, the virus would be exterminated “within weeks”? The problem here is that at present we have very little notion about what kind of immunity people who have recovered from COVID-19 actually develop. Let’s take a look at some basic immunology for a moment.

         One major line of defense against viruses is the development by the immune system of antibodies (aka immunoglobulins) that recognize proteins on the surfaces of a specific virus. These antibodies are produced by a type of immune cell called the B lymphocyte. There are five categories of antibodies, and two classes are particularly important for the fight against a viral invader: IgM and IgG. B cells produce IgM first, but these antibodies tend to be “generalists” and not specific to a particular virus. IgM antibodies typically show up in a person’s blood about ten days after being infected with SARS-CoV-2, the virus that causes COVID-19. A few days later, IgG antibodies, which are more specific than IgM, start to appear and the person usually, but not always, begins to recover.

Antibodies that can neutralize the COVID-19 virus take up to two weeks to develop and are more effective in some patients than others in promoting recovery (source: Shutterstock).

         For some viral illnesses, a second exposure to the virus causes a much more rapid antibody response and the virus is neutralized before it has a chance to set in and cause disease. Before the measles vaccine was available, for example, most people who had measles developed lifelong immunity and reinfections were uncommon. For other infections, however, the virus mutates rapidly, and the IgG antibodies produced the first time around are no longer effective. This is the case for viruses that cause the common cold and the flu.

         Scientists think that people may develop immunity for a year or more after a first encounter with the coronavirus, but this is not yet certain. It is also possible, we are told, that people who have no or mild symptoms the first time may have such a weak initial immune response that they do not develop any form of immunity to repeat exposures to the virus. That means they could be reinfected and contagious again. Indeed, a troubling report from the South Korean CDC identified cases of coronavirus reinfection in people who had recovered from a first infection.

         So Wittkowski’s prediction that allowing the bulk of the population to be exposed to the coronavirus would result in the rapid development of herd immunity is based on no science at all. Right now, the only appropriate public health response to the pandemic is to keep everyone in relative isolation by staying at home, practicing social distancing, and closing schools and other public spaces. There are already data that this approach, rather than allowing unchecked infection, is working. Once the new infection rate has dropped substantially, we can then ease up on those measures, institute a policy of widespread testing for the virus and for antibodies against it, and quarantine only those with active infection. A cogent plan for national surveillance and testing has been advanced. But we are not at that point yet and probably won’t be until this summer at the earliest.
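For a sense of scale, the textbook herd-immunity threshold can be computed from the basic reproduction number R0 as 1 − 1/R0: once that fraction of the population is immune, each infection produces fewer than one new case on average and the outbreak declines. The sketch below uses illustrative R0 values only; published estimates for SARS-CoV-2 varied widely, and, as noted above, the durability of post-infection immunity was unknown.

```python
# Classic herd-immunity threshold: the fraction of the population that
# must be immune before an outbreak declines on its own.
# The R0 values below are illustrative, not estimates for SARS-CoV-2.

def herd_immunity_threshold(r0: float) -> float:
    if r0 <= 1:
        return 0.0  # an outbreak with R0 <= 1 dies out without herd immunity
    return 1 - 1 / r0

for r0 in (1.5, 2.5, 4.0):
    print(f"R0 = {r0}: about {herd_immunity_threshold(r0):.0%} must be immune")
```

The point of the formula is that even under optimistic assumptions, herd immunity requires a large majority of the population to become immune, which cannot happen safely "within weeks" without enormous numbers of simultaneous infections.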

Social distancing appears to be working to reduce the number of new cases of and deaths from COVID-19 (source: Shutterstock).

         Anti-viral drugs will hopefully be developed soon that will lessen the burden of severe complications and death from coronavirus infection. Ultimately, we should have a vaccine against it that will cause protective antibodies to be produced without itself inducing illness. Ordinarily it takes many years to develop a safe and effective vaccine and, in some cases, like HIV infection, even years of research do not produce one. With the focused international effort now underway, and the fact that this coronavirus mutates slowly, it is hoped that a vaccine might be available a year from now, although that is probably a bit optimistic.

         We all want to see our families and friends in person again, to be able to shop and go to the movies, and to go back to school and work. As the pandemic drags on, it is understandable that we are going to find any theory that would allow those things to happen right away very attractive. These theories are unfortunately now not uncommon across the internet. Our fear with denialist ideas like those of Wittkowski and a handful of other doctors and self-proclaimed experts is that they will weaken our resolve to stay at home and practice social distancing. Those are among the most essential ways we have right now to control the pandemic and maintain a functioning healthcare system. The notion of rapidly developing herd immunity to the coronavirus is unsubstantiated by science and at present dangerous to follow. We all can’t wait to be let outside again. Unfortunately, waiting is exactly what we must do.