Are chatbots better than humans at fighting vaccine hesitancy?

Could artificial intelligence succeed where humans have failed in helping people overcome their fears about vaccines?

  • 16 August 2021
  • 6 min read
  • by Priya Joi

The debate for and against vaccines has in the past year become more polarised and angry than ever before. But the most vocal anti-vaxxers can distract from the fact that there’s a huge grey area of people who are not necessarily anti-vaccine, but who are hesitant for a variety of reasons.

Exactly how to approach this conversation has escaped even the brightest minds, with research showing that efforts to convince people of the benefits of vaccination can backfire. Artificial intelligence may offer an unexpected answer: chatbots could be better at fighting vaccine hesitancy than people.

Used in the right way, chatbots could be an increasingly important tool in the fight against misinformation.

Reasons for hesitancy

Nowhere have the nuances of deciding whether or not to vaccinate been more evident than in this pandemic. For some, the development of a COVID-19 vaccine in less than a year has been a glorious moment in scientific discovery, but for others the sheer speed of development and the novelty of the vaccines, among other issues, have caused them to hold back even when offered a dose.

Although incentives may have drawn some people to vaccine centres, for many others no amount of frustration from the scientific community or insistence from governments seems to budge them from their refusal to get vaccinated.

The reasons why people may be reluctant to get a vaccine are varied and complex. In routine immunisation, poor communication about potential side effects, lack of transparency on the part of health care providers or governments, as well as political issues might cause people to lose trust in vaccines.

With COVID-19, misinformation has been a huge issue right from the start, with conspiracy theories spreading about where the virus originated and about side effects from the new mRNA vaccines. In India, scammers recently injected thousands of people with fake vaccines containing nothing but saline solution, which has inevitably made people mistrustful.

For others, hesitancy itself is not the issue; instead the barriers are a fear of side effects that might cause them to lose income, confusion over where to get the vaccine, or an inability to access vaccine centres.

And a fact that has surprised many in this pandemic is that health workers have been among those refusing the COVID-19 vaccine. In France, for example, only 57% of caregivers in nursing homes and 64% of hospital staff have been vaccinated, which has led to debates in government about whether the vaccine should be mandatory for those working in health care.

Why AI could work

Yet a survey last year by Steven Taylor at the University of British Columbia and colleagues showed that, for all the influence social media seems to have on personal decision-making, people are far more likely to be convinced by evidence. Among people who said they were unlikely to get the COVID-19 vaccine, the most persuasive factor was assurance that the vaccine had been rigorously tested, with 38% saying this would change their minds. A further 36% said they would take it if they saw enough people being vaccinated without negative side effects. Only 4% said that seeing vaccination promoted on social media would change their minds.

This need for evidence and unbiased information – no matter what pro-vaccine people believe about the ‘other side’ – seems to hold up. A study in France this year showed that interacting with a chatbot can significantly ease vaccine hesitancy and make people feel more positive about vaccines and getting vaccinated.

In the study, 145 of 338 participants held positive attitudes toward the COVID-19 vaccine before interacting with the chatbot; afterwards, this rose by 37% to 199 people. Likewise, 123 of 338 participants initially said they did not want the COVID-19 vaccine; afterwards, this fell by 20% to 99.
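For readers who want to check how those percentages follow from the raw counts, here is a quick back-of-the-envelope calculation (a minimal sketch in Python, using only the counts reported above; rounding to the nearest whole percent is an assumption):

```python
# Counts reported in the French chatbot study (before vs after interacting with the bot).
positive_before, positive_after = 145, 199
unwilling_before, unwilling_after = 123, 99

# Relative change, expressed against the "before" count.
rise = (positive_after - positive_before) / positive_before
fall = (unwilling_before - unwilling_after) / unwilling_before

print(f"Positive attitudes rose by {rise:.0%}")  # ~37%
print(f"Vaccine refusals fell by {fall:.0%}")    # ~20%
```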

Chatbots with personality

Now, health researchers have teamed up with tech researchers to produce chatbots aimed at fighting vaccine hesitancy.

IBM and the Johns Hopkins Bloomberg School of Public Health have devised a chatbot called Vira (for Vaccine Information Resource Assistant) that was trained through discussions with front-line health workers. The bot continually learns as people use it, and feeds the information back to scientists who tweak the responses.

Another chatbot has been developed by Arnaud Gagneur, a neonatologist and professor of pediatrics at the University of Sherbrooke, and Karin Tamerius. Where Vira offers definitive answers, Gagneur and Tamerius favour a more conversational approach that listens to people’s concerns and offers suggestions instead. As Tamerius said, “Giving advice doesn’t work because it triggers a desire to resist. Humans have an innate need for autonomy. When people sense that we’re trying to control them by telling them what to do, it generates distress and anxiety.”
 
As with any communication, different cultural contexts require different approaches. Vira, which is targeted at young Americans, responds to a question about whether COVID-19 vaccines could alter our DNA with “None of the COVID-19 vaccines will change or interact with your DNA. Pinky swear!”. This language may not work, say, for people in Africa and Asia. What’s more, different countries are experiencing vastly different outbreaks and have different resources and guidance.

To help address this cultural diversity, tech company Twilio has developed an open-source chatbot app that has different agent templates – one trained on the American Medical Association’s FAQ for US audiences and one trained on the FAQ of India’s Ministry of Health and Family Welfare for Indian audiences.

Using Twilio, software developers can customise this chatbot or build their own to meet their particular needs. For example, Save the Children created a chatbot for Indonesian audiences, offering a female or male agent to chat to and the option to converse in either English or Bahasa Indonesia. Twilio encourages people to build and train additional agents and contribute to the open-source project so that, as Twilio describes it, they can develop a “global repository of vaccine hesitancy chatbots”.
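Twilio’s actual code isn’t reproduced here, but the idea of an audience-specific agent template built around a vetted FAQ can be sketched in a few lines (a hypothetical illustration in Python; the class, template names, FAQ entries and matching logic are assumptions made for explanation, not Twilio’s implementation):

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class AgentTemplate:
    """A chatbot agent built around one trusted FAQ source (hypothetical structure)."""
    name: str
    language: str
    faq: dict[str, str]  # vetted question -> vetted answer
    fallback: str = "I don't know that one yet; let me put you in touch with a health worker."

    def reply(self, user_question: str) -> str:
        # Pick the FAQ entry whose question most resembles what the user asked.
        best_q, best_score = None, 0.0
        for q in self.faq:
            score = SequenceMatcher(None, user_question.lower(), q.lower()).ratio()
            if score > best_score:
                best_q, best_score = q, score
        # Below a similarity threshold, hand off rather than guess.
        return self.faq[best_q] if best_score > 0.5 else self.fallback

# Two illustrative templates, one per audience (the FAQ text here is invented).
us_agent = AgentTemplate(
    name="us-en", language="en",
    faq={"Will the vaccine change my DNA?":
         "No. COVID-19 vaccines do not change or interact with your DNA."})
india_agent = AgentTemplate(
    name="in-en", language="en",
    faq={"Where can I get vaccinated?":
         "You can find and book a nearby vaccination centre through the official portal."})

print(us_agent.reply("Can the covid vaccine alter my DNA?"))
```

The appeal of the template structure is that swapping in a different FAQ – a different country’s guidance, a different language – changes the agent without changing the surrounding code.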

However, chatbots aren’t foolproof. As we know from experiences with Siri or Alexa, there will always be questions a chatbot can’t answer. There are also people who prefer speaking to another live person, or whose access to technology precludes using a chatbot. For them, there is no replacement for a human who can step in to help, particularly on important issues such as vaccine safety. But, used in the right way, chatbots could be an increasingly important tool in the fight against misinformation.