Health misinformation is no longer just an online nuisance – it’s a public health crisis.
Distorted science is seeping into exam rooms. Conspiracies are eroding trust. Far-reaching falsehoods are putting lives at risk.
Faculty, students and alumni are leading a determined resistance, turning to research, education and advocacy to restore evidence and trust as cornerstones of care.
At her busy allergy clinic inside a London, Ont. hospital, Dr. Samira Jeimy sees a steady stream of patients who have taken over-the-counter “food sensitivity tests.” Marketed online or sold in health food stores, the pinprick blood tests will routinely detect IgG antibodies to a hundred or more foods.
The problem? The tests are clinically useless.
“I can predict with 90 per cent accuracy it will say corn, wheat, dairy, sometimes egg,” says the assistant professor and clinical immunologist at St. Joseph’s Health Care London. But the antibodies simply show that people have been exposed to the foods, not that they’re allergic or sensitive to them.

“We really can’t do it from the sideline. We have to get engaged in the battle zone.” — Dr. Samira Jeimy created @allergies_explained on Instagram to push back against misinformation and offer practical, evidence-based education
“Without a proper assessment, people can end up cutting out dozens of foods unnecessarily, and that can lead to nutritional deficiencies, social isolation or anxiety around the food.”
It’s the sort of misinformation doctors deal with every day, and it’s been getting worse.
In fact, according to the World Health Organization, we’re living through a medical misinformation “infodemic,” and it is a serious threat to people’s health. Doctors and public health professionals are struggling to understand where misinformation comes from, how it spreads and how it can be combatted.
“I can tell you with certainty that the threat of misinformation is uniformly understood and is one of the top priorities of public health agencies and health authorities,” says Maxwell Smith, PhD, a bioethicist in Western University’s Faculty of Health Sciences and instructor in Schulich Medicine & Dentistry’s Master of Public Health program.
In January, the Canadian Medical Association released a survey that found 35 per cent of Canadians had avoided effective health treatments because of health misinformation – up six points from the year before – and 23 per cent had experienced an adverse reaction following health advice they found online.
An obvious example of the problem is the resurgence of measles in Canada and the U.S., which together have reported more than 2,000 cases so far this year.
Once considered eliminated, measles is back, in part because many vaccine-skeptical parents aren’t having their children vaccinated.

“People are just inundated with waves and waves, like a tsunami, of information, and they’re trying to pick out the signal from the noise.” — Dr. Ken Milne is the host of The Skeptics’ Guide to Emergency Medicine, a podcast that helps doctors make sense of new, peer-reviewed research.
New tools, new challenges
Although misinformation can come from anywhere – television, a newspaper, a neighbour – today’s deluge is largely driven by social media. Facebook alone has 3.4 billion users, and X claims another 600 million.
In addition to being widespread, social media is an ecosystem that in many ways favours misinformation. One study found that posts on X containing misinformation received more engagement, spread more rapidly and reached more users than truthful ones.
Belief in misinformation is often more about who you trust than what you know. For instance, researchers who looked at 1,541 Canadians found that people who were more hesitant about vaccines had lower trust in institutions. This might have made them less likely to believe public health authorities, and more likely to trust information shared by people in their own networks.
And now, even as health professionals are struggling to catch up with the problems of social media, some are concerned that artificial intelligence could lead to an “AI infodemic.”
Although large language models (LLMs) like ChatGPT have proven useful for summarizing information in a readable format, they may pass along errors, over-generalize findings, or even “hallucinate” information that never existed. LLMs can also be used by bad actors to intentionally create false but convincing information.
Dr. Benjamin Chin-Yee, assistant professor in pathology and laboratory medicine, is among the growing number of Schulich Medicine & Dentistry experts driving a greater understanding of the impact of AI on health information – and disinformation.
He recently co-authored a study in which LLMs, including ChatGPT and DeepSeek, were asked to summarize the results of more than 500 peer-reviewed health research papers.
The findings? LLMs tended to over-generalize the results, dropping qualifiers and making findings seem stronger and more convincing than they really were.
“Over-generalizations produced by these tools have the potential to distort scientific understanding on a large scale,” Chin-Yee writes in a recent editorial for The Conversation, a piece that has drawn more than 5,000 readers. “This is especially worrisome in high-stakes fields like medicine, where nuances in population, effect size and uncertainty really matter.”
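To make the kind of over-generalization the study describes concrete, here is a minimal illustrative sketch (not the study’s actual methodology) of how one might flag an AI-generated summary that drops hedging language found in the source abstract; the hedge-term list and function names are assumptions made purely for illustration.

```python
# Hypothetical sketch: flag AI summaries that omit hedging language present in the
# source abstract. The hedge-term list and example texts are illustrative only and
# are not drawn from Chin-Yee's study.

HEDGE_TERMS = {
    "may", "might", "could", "suggests", "appears",
    "in this sample", "preliminary", "limited", "uncertain",
}

def dropped_qualifiers(abstract: str, summary: str) -> set[str]:
    """Return hedge terms that appear in the abstract but are missing from the summary."""
    a, s = abstract.lower(), summary.lower()
    return {term for term in HEDGE_TERMS if term in a and term not in s}

abstract = "The drug may reduce symptoms in this sample of 40 adults; results are preliminary."
summary = "The drug reduces symptoms in adults."
print(dropped_qualifiers(abstract, summary))
# e.g. {'may', 'in this sample', 'preliminary'} (set order may vary)
```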
As more and more patients turn to sources like ChatGPT for medical advice, experts are keen to understand how accurate the tool is – and whether it’s a reliable source for guidance on personal health issues.
Dr. Amrit Kirpalani, MD’15, a paediatric nephrologist and assistant professor, is exploring how AI tools can be used in medical education. In his clinical practice, he’s noticed the growing number of patients and their adult caregivers arriving with information that is incomplete, misleading or simply wrong.
Recently, in a study examining how good AI was at diagnosing complex medical conditions, his team explored how ChatGPT responded to 150 written cases designed to test health professionals. The cases are rich in detail, with titles like “A 25-Year-Old Mother With Joint Pain Who Feels Faint,” and “After a Wild Party, a 24-Year-Old Has Intense Abdominal Pain.”
Their findings, which garnered international media attention, suggest you wouldn’t want to count on ChatGPT to give you an accurate diagnosis. Overall, it got only about half of the diagnoses right. It also had a hard time interpreting lab results and diagnostic imaging, and it frequently overlooked important information.
But the AI excelled at one thing, Kirpalani says.
“Its answers were so well-written, clear and confident that they sounded convincing,” he says.

“People who have less access to health care may be more likely to look for health information online.” — Dr. Amrit Kirpalani investigates how AI tools can enhance medical education amid a growing tide of misinformation.
A post-truth era
Researchers have identified some characteristics that make misinformation more likely to spread, and some traits that make certain people more likely to believe certain things.
According to the American Psychological Association, people are more likely to believe information that comes from an “in-group,” such as someone who shares the same political beliefs. And they are more likely to believe something that they have heard repeated often.
People with more education and a greater capacity for abstract reasoning tend to be less susceptible to misinformation. On the other hand, people who are overconfident in their ability to distinguish true from false headlines are also more likely to believe misinformation.
But belief in misinformation isn’t about ignorance of the facts, according to a paper on vaccine hesitancy co-authored by Smith. In fact, people who were vaccine hesitant tended to have an “information surplus.”
The problem is, they don’t know how to prioritize the right information.
“It’s not that they're ignorant or simply impervious to education campaigns. They might just be getting their information from the wrong places,” Smith says.
Kirpalani points out that many people are vulnerable to online misinformation because they don’t have anywhere else to turn. In Canada, where one in five people don’t have a primary care provider, this is a real risk.
“It's been established that people who have less access to health care may be more likely to look for health-care information online,” he says.
And it’s not just patients who may find themselves lost in the deluge of information, says Dr. Ken Milne, an emergency physician and adjunct faculty member in family medicine.
Even doctors can have a hard time keeping up and sorting the good from the bad, he says. “People are just inundated with waves and waves, like a tsunami, of information, and they’re trying to pick out the signal from the noise."
A long-time advocate of getting evidence-based health information out of journals and into the public sphere, Milne has produced and hosted The Skeptics’ Guide to Emergency Medicine podcast since 2012, with the goal of bringing new, peer-reviewed information to doctors in a timely and digestible way.
He started the podcast when he read that it took, on average, 10 years for new scientific information to make its way from peer-reviewed publication into clinical use.
“And I'm like, that's crazy. The world is connected through the internet, through digital media, at the speed of light,” he says.
Smith points out that, beyond a proliferation of information, there are also bad actors who are making a lot of money from medical misinformation.
“It's not simply that misinformation just emerges and then is spread in this passive way,” says Smith. “There are people with many, many millions of dollars, big organizations, that are actively trying to spread misinformation.”

“This requires a whole of society approach... that means the private sector, and that means laws and regulations.” — Bioethicist Maxwell Smith, PhD, is calling for broader, society-wide action to curb the harms of health misinformation.
Countering the chaos
Researchers have tested several strategies to combat online misinformation: debunking false claims by fact checking, “prebunking” by warning people in advance, promoting digital and health literacy, or using subtle nudges to prompt critical thinking.
But even these commonsense tools have drawbacks. One recent Nature Human Behaviour study found that such interventions can heighten skepticism towards all information, true or false. And expert fact-checking can deepen mistrust, especially among individuals with anti-authoritarian views.
And when credible information is scarce or hard to understand, misinformation quickly fills the void.
That’s why the solution often starts with something simple, says Jeimy, like showing up online and filling the gap with evidence-based facts.
“The reality is, social media is one of the best public health tools we have. We really can't do it from the sideline. We have to get engaged in the battle zone,” she says.
After COVID-19 broke out, Jeimy began working with other health-care professionals to distribute accurate information online. She has kept posting ever since, and her Instagram account, @allergies_explained, now has more than 100 posts busting myths and providing basic information on allergies and health generally.
One of her most popular posts debunks the idea that children are getting respiratory illnesses due to an “immunity debt” they acquired from being isolated during the pandemic.
“These babies were not alive in 2020. It doesn't make any sense. Yes, it drives me a little bit nuts,” Jeimy says.
As a master’s student at the University of Toronto, alumna Kayla Benjamin, BMSc’19, realized many of her peers had similar questions about their health and well-being. Because they had access to academic literature, they were able to answer many of these questions accurately on their own.
“But in terms of what was available for folks that weren't turning to academic literature, we found that it was just a flood of misinformation,” Benjamin says.
With fellow students Clara MacKinnon-Cabral and Manvi Bhalla, she launched missINFORMED, an education and advocacy organization that provides health information focused on women and gender-diverse people. Today, the website features articles on topics ranging from breast self-examinations to abortion care in Canada to anti-Black racism in health care.
Dr. Brian Rotenberg, a leading sleep surgeon and professor in otolaryngology – head and neck surgery, recently turned to popular social media channels to debunk a worrisome fad, sharing his research review on mouth-taping, a viral trend influencers have claimed has health benefits ranging from better sleep to sharper focus.
In fact, the claims are not based in science and may even be risky for some, he says in a popular TikTok reel that garnered more than 55,000 views and attracted global news coverage.
“Mouth taping has little to no benefit and in some cases, has risk to people,” he says. “The real harm is for patients who have sleep apnea.”

“Pseudoscience doesn’t enter the brain via logic, it enters by propaganda on repeat.” — Alum and bestselling author Dr. Jennifer Gunter has built a reputation – and massive following – as the internet’s go-to OB/GYN.
Dr. Jen Gunter, a best-selling author, blogger and social media sensation who completed her ob-gyn residency at Schulich Medicine & Dentistry, has a take-no-prisoners approach in her fight against medical misinformation.
Through her blog and her Substack newsletter, she debunks a host of unfounded health claims, most famously some made by Gwyneth Paltrow and her company Goop – for instance, that it is a good idea to “steam” the vagina.
Although Gunter can be funny and even brutal in her takedowns, she’s serious about the dangers of misinformation.
“Pseudoscience is a cult; if you admit one belief is incorrect, you must look at the whole house of cards, so nothing outside the belief system can be acknowledged. It’s also hard to get through to people, because pseudoscience doesn't enter the brain via logic, it enters by propaganda on repeat, so it’s difficult to remove with facts,” she writes in her newsletter, The Vajenda.
So simply getting good information out there may not be enough. Benjamin and others call for broader efforts to increase scientific literacy and critical thinking, in part through better public education.
In a paper co-written with assistant professor Sarah McLean, PhD, Benjamin and her team called for changes in science education so that it emphasizes reasoning and inquiry as much as factual content.
“We need to meet people where they're at. But we also need to build this skill set so they're questioning what is a credible source, they're building science literacy, health literacy skills, or even just the critical thinking skills,” Benjamin says.
Smith thinks that ultimately there will need to be broad efforts that include regulation to help control the spread of misinformation and its effects.
“If you think the health system is going to solve those issues, then you're mistaken. This requires a whole of society approach. And that doesn't just mean government, that means the private sector, and that means laws and regulations,” he says.
Still, the role of health researchers, clinicians and academic experts remains critical.
For Chin-Yee, whose research showed that AI can introduce overgeneralizations, oversimplifications and, ultimately, misinformation when interpreting research, there is as much onus on the humans as there is on the technology.
“Our research reveals a shared tendency in both humans and machines to overgeneralize – to say more than what the data allows,” he writes. “Tackling this tendency means holding both natural and artificial intelligence to higher standards: scrutinizing not only how researchers communicate results, but how we train the tools increasingly shaping that communication.”
He adds, “In medicine, careful language is imperative to ensure the right treatments reach the right patients, backed by evidence that actually applies.”
While Kirpalani sees the pitfalls of AI, he also believes it can help get useful, life-saving information out there.
The key, he says, will be making LLMs better. For instance, there might be a way to train specialized AI systems only on authoritative medical information. Or general AI systems might be programmed with a medical “toggle,” so that when they’re answering health-related questions they reference only authoritative data.
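As a purely hypothetical sketch of the “toggle” Kirpalani envisions (not a description of any existing system), health-related questions could be routed to a small set of vetted sources, with the model abstaining when no vetted answer exists; every name and the keyword heuristic below are illustrative assumptions.

```python
# Hypothetical sketch of a medical "toggle": route health-related questions to a
# curated corpus instead of the model's general knowledge. All names and the
# keyword heuristic are illustrative assumptions, not a real product.

MEDICAL_KEYWORDS = {"symptom", "dose", "vaccine", "diagnosis", "treatment", "allergy"}

CURATED_SOURCES = {
    "vaccine": "Vaccines are reviewed for safety and efficacy before approval. (curated summary)",
}

def is_medical(question: str) -> bool:
    """Very rough heuristic; a production system would use a trained classifier."""
    return any(word in question.lower() for word in MEDICAL_KEYWORDS)

def answer(question: str) -> str:
    if is_medical(question):
        # Medical mode: answer only from vetted material, otherwise abstain.
        for topic, summary in CURATED_SOURCES.items():
            if topic in question.lower():
                return summary
        return "No vetted source found; please consult a health professional."
    return "General mode: answer with the model's usual capabilities."

print(answer("Is the measles vaccine safe for my toddler?"))
```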
“I do see potential dangers,” he says. “But I think there is such a potential good here that, when you're in a situation like this, we can't fight it. We need to embrace it, and we need to work with it and make it work for us in a responsible way.”

