Before a vaccine or drug is made available to the public it normally needs to undergo clinical trials, some of which involve tens of thousands of people. But, as we have seen with interim results for some COVID-19 vaccines, often only a small proportion, or sample, of these people actually end up exposed to the virus.
Moderna recently reported that its vaccine candidate had an efficacy of 94.5% in phase 3 trials that began in July with 30,000 volunteers. Yet only 95 people in the trial have so far actually contracted COVID-19, and only five of them had been vaccinated. Based on these results, the company has said it will have 20 million doses available by the end of the year.
The Pfizer/BioNTech vaccine, which made it through phase 3 trials with an efficacy of 95%, had 170 cases of COVID-19 (it’s worth noting that sample size is different from trial size, which in this case was around 43,000 participants). But though these sample sizes may seem tiny, many important drugs and vaccines have been approved on the basis of similarly small samples.
From HIV to Ebola
Trials with a very small sample size might not seem to have enough statistical “power” to determine whether a drug or vaccine is effective at treating or preventing disease. Yet, for diseases that are either life-threatening (such as COVID-19) or rare (certain cancers), trying to test a vaccine in more people than are needed to reach a decision on its efficacy could be challenging and even unethical. A key ethical question is how efficacious the intervention needs to be to make using it worthwhile. COVID-19 vaccine trials were actually designed to be able to detect a much lower level of efficacy than was eventually found, because even that lower level would have justified their use.
In 2015, when scientists were trying to find a vaccine to stop the Ebola outbreak in West Africa from spreading, the vaccine was declared 100% efficacious after just 16 cases of disease in a ring vaccination trial. Back in 1994, the drug zidovudine, used to stop mother-to-child transmission of HIV, was scaled up based on a trial that was stopped early after 53 cases of disease.
A key factor is that sample size is only meaningful in the context of the efficacy of the intervention. The more efficacious the intervention, the smaller the number of clinical outcomes needed to demonstrate its efficacy. If all of the negative outcomes are in the control group, the value of the intervention becomes clear very quickly.
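To see how this works with the numbers reported above, here is a minimal sketch of the standard efficacy calculation. It assumes the trial's two arms (vaccine and placebo) are equal in size, so the arm sizes cancel out of the ratio of attack rates; real trial analyses also account for person-time at risk and report confidence intervals.

```python
def vaccine_efficacy(cases_vaccine, cases_placebo):
    """Efficacy = 1 - (attack rate in vaccine arm / attack rate in placebo arm).

    With equal-sized arms, the arm sizes cancel, leaving a simple
    ratio of case counts.
    """
    return 1 - cases_vaccine / cases_placebo

# Moderna's interim figures from the article: 95 cases in total,
# of which only 5 were in the vaccinated group (so 90 were not).
print(f"{vaccine_efficacy(5, 90):.1%}")    # prints "94.4%"
```

Note how close this simple ratio comes to the reported 94.5%: the headline efficacy figure is driven almost entirely by how the small number of cases splits between the two arms, not by the 30,000-person trial size.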
To put this simplistically, think of a randomised controlled trial comparing jumping out of an airplane from a high altitude with or without a parachute. If the parachute works really well, it won’t take many trial participants’ landings before convincing people the parachute is the way to go.
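A back-of-the-envelope calculation shows just how quickly such evidence accumulates. Under the null hypothesis that the intervention does nothing, and again assuming equal-sized arms, each case is equally likely to fall in either arm, so the probability of n cases all landing in the control group is 0.5 to the power n. (This ignores design details such as the ring structure of the Ebola trial; it is only meant to illustrate the scale.)

```python
# Probability, under a "no effect" null hypothesis with equal arms,
# that every one of n cases occurs in the unvaccinated group.
for n in [5, 10, 16]:
    p = 0.5 ** n
    print(f"{n} cases, all in control arm: p = {p:.2g}")
```

With 16 cases, as in the Ebola trial, that probability is already about 1 in 65,000, which is why so few clinical outcomes can suffice when the split between arms is extreme.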
So why are small sample sizes accepted as being valid?
A group of statisticians, writing about small sample sizes in the journal Clinical Investigation in 2012, put it this way: “If one starts from the premise that there is considerable uncertainty regarding this unknown quantity, then data from even small numbers of patients in a well-designed clinical trial will make steps towards reducing that uncertainty.”
In other words, a small sample size in clinical trials doesn’t necessarily reduce the importance of the findings.
Researchers have cautioned, however, that the sample sizes behind some COVID-19 vaccine results may be too small – the interim efficacy of Russia’s Sputnik V vaccine, for example, was based on just 20 cases of COVID-19.