Vaccine efficacy is typically estimated with a standard Cox or Poisson model that assumes the efficacy remains constant over time. When efficacy wanes, however, this approach is inaccurate over short periods and is slow to reveal the declining effectiveness of vaccines.
Researchers from the University of North Carolina, the University of Washington and the US Food and Drug Administration propose adapting the Cox model to use two time scales: event times measured in calendar time from the start of the study, with the log hazard ratio of vaccination modeled as a function of time since vaccination.
The research can be found on the medRxiv* preprint server.
Study: Reliable assessment of the duration of protection of COVID-19 vaccines. Image credit: LookerStudio / Shutterstock
The researchers simulated a clinical trial that mimicked the enrollment pattern of a BNT162b2 study and the incidence trend of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in the United States while the trial was underway.
They assumed that the hazard-ratio-based vaccine efficacy (VEHR) of the hypothetical vaccine, which represents an exponential waning of the vaccine effect over time since vaccination, drops from a peak of 95% at 7 days after the second dose to 70% at six months after vaccination. The mean values of the constant vaccine efficacy estimate (VEConst) over 1,000 replicates are 94.4%, 89.9% and 81.6% over 0-2, 2-4 and 4-6 months, respectively.
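The waning pattern described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: it assumes the log hazard ratio is linear in time since vaccination and anchors it at the two values reported for the simulation (95% VE at about 7 days and 70% at 6 months after the second dose).

```python
import math

# Anchor points assumed from the article: VE of 95% at ~7 days (0.23 months)
# and 70% at 6 months after the second dose.
T0, T1 = 7 / 30, 6.0            # times in months after full vaccination
HR0, HR1 = 1 - 0.95, 1 - 0.70   # corresponding hazard ratios

def log_hr(t):
    """Log hazard ratio, assumed linear in time since vaccination."""
    slope = (math.log(HR1) - math.log(HR0)) / (T1 - T0)
    return math.log(HR0) + slope * (t - T0)

def ve_hr(t):
    """Time-varying vaccine efficacy: VEHR(t) = 1 - HR(t)."""
    return 1 - math.exp(log_hr(t))

def ve_avg(a, b, n=1000):
    """Average VE over [a, b] months with uniform exposure -
    a crude stand-in for a constant-VE (VEConst) estimate."""
    ts = [a + (b - a) * (i + 0.5) / n for i in range(n)]
    return sum(ve_hr(t) for t in ts) / n

print(round(ve_hr(T0), 2))   # 0.95
print(round(ve_hr(6.0), 2))  # 0.7
```

Averaging this curve over 0-2, 2-4 and 4-6 months with `ve_avg` reproduces the qualitative pattern in the article: interval averages that decline but smooth over the continuous waning.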
VEConst understates the true extent of waning even further because vaccinations typically coincide with an early peak in infection incidence, which then declines for several months. As a result, a high proportion of the exposure falls in the first part of each two-month interval, when the true VE is higher than VEConst suggests.
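This exposure-weighting effect can be demonstrated with a toy calculation (hypothetical numbers, not from the paper): averaging a declining VE curve over a two-month window with front-loaded exposure pulls the average toward the early, higher VE values.

```python
import math

def ve(t):
    # Hypothetical log-linear waning: 95% VE at t=0 down to 70% at 6 months.
    lhr = math.log(0.05) + (math.log(0.30) - math.log(0.05)) * t / 6
    return 1 - math.exp(lhr)

def weighted_avg(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# VE evaluated across the 4-6 month window.
ts = [4 + 2 * (i + 0.5) / 100 for i in range(100)]
ves = [ve(t) for t in ts]

uniform = weighted_avg(ves, [1.0] * 100)
# Exposure declining over the window, i.e. most cases occur early.
front_loaded = weighted_avg(ves, [float(100 - i) for i in range(100)])

print(front_loaded > uniform)  # True: front-loaded exposure inflates the average
```

The same mechanism, applied interval by interval, explains why VEConst overstates late-interval protection when infections peak shortly after vaccination.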
Another trial was simulated in which the enrollment period was shifted six months later, so that the period with the strongest vaccine effect coincided with the low point in exposure rates. The mean values of VEConst over 1,000 replicates in this experiment were 94.7%, 89.5% and 78.8%, again over 0-2, 2-4 and 4-6 months, respectively. VEConst is closer to the true VE in this second experiment, but in both cases it tracks the truth considerably less well than the VEHR curve.
It is possible that neutralizing antibodies, which provide short-term protection, decline log-linearly, leading to decreasing VE over several months. However, post-vaccination VE could then be maintained on a plateau for a long time by cell-mediated or memory immune responses that remain roughly constant over time. To investigate this, the researchers simulated two more experiments in which VE was allowed to plateau at 5 and 3.5 months after full vaccination. In the five-month experiment, VE is overestimated by VEConst and underestimated by VEHR. In the second experiment, both estimators give reasonable approximations of the true VE. However, VEHR detects non-linear changes in VE over time faster, whereas VEConst can only reveal them over a longer period.
Estimation of vaccine efficacy against symptomatic COVID-19 based on 6-month follow-up in four simulated clinical trials. In the first two trials, the true VEHR ("truth"), which is linear in the log hazard ratio, drops from a peak of 95% one month after full vaccination to 70% at 6 months after full vaccination. In the experiment depicted in panel A, most participants received dose 2 at a calendar time that coincided with a peak in the infection rate, whereas most participants in the experiment depicted in panel B received dose 2 at a time of low infection rates. In the experiments depicted in panels C and D, the true VEHR plateaus after 5 and 3.5 months, respectively. In each trial, VEConst is estimated over 0-2, 2-4 and 4-6 months after full vaccination, and VEHR is estimated under the Cox model in which the log hazard ratio is a piecewise linear function of time since vaccination, with change points at 0, 2 and 4 months after full vaccination. For each experiment, the mean and standard deviation of each estimate over 1,000 replicates are displayed.
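The piecewise linear log hazard ratio used in the figure can be sketched as follows. This is a minimal illustration, not the authors' implementation: a log HR that is linear in time since vaccination with slope changes at 2 and 4 months, parameterized by hinge terms, with purely illustrative coefficient values.

```python
import math

KNOTS = (2.0, 4.0)  # change points in months after full vaccination

def basis(t):
    """Time-varying covariates for a piecewise linear log HR:
    beta(t) = b0 + b1*t + b2*max(t-2, 0) + b3*max(t-4, 0)."""
    return (1.0, t, max(t - KNOTS[0], 0.0), max(t - KNOTS[1], 0.0))

def log_hr(t, b):
    """Log hazard ratio at time t for coefficients b = (b0, b1, b2, b3)."""
    return sum(x * c for x, c in zip(basis(t), b))

def ve(t, b):
    """VEHR(t) = 1 - exp(log HR(t))."""
    return 1.0 - math.exp(log_hr(t, b))

# Illustrative coefficients only: VE near 95% at t=0, waning toward 70%
# by 6 months, with the slope steepening after each change point.
b = (-3.0, 0.2, 0.1, 0.1)
print([round(ve(t, b), 3) for t in (0, 2, 4, 6)])
```

In an actual Cox fit, each vaccinated participant would carry `basis(t)` as time-varying covariates, and the fitted coefficients would trace out the VEHR curve shown in the figure.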
Phase three trials provide efficacy information only up to about six months after dose 2, because placebo recipients then cross over to the vaccine arm. For information on longer-term vaccine efficacy, observational studies are more useful, and they also allow estimation of VE against severe disease and against different strains, even in different subpopulations. VEHR offers similar advantages over VEConst for assessing waning VE in an observational setting. Declines in VE over calendar time or over time since vaccination may be caused by waning immunity, the emergence of new variants, or other factors. Comparing VE across times since vaccination at the same calendar time better isolates waning due to declining immunity, whereas comparing VE at the same time since vaccination across different calendar times better captures declines due to new variants.
The authors emphasize that their newly proposed approach, based on estimation of VEHR, improves the ability to evaluate the true durability of VE using data from both observational studies and phase three clinical trials, as it allows VE to vary continuously with time since vaccination while adjusting for changes in disease incidence over calendar time. They point out that this approach was used effectively in a VE study in North Carolina and argue that analyses of observational data should adjust for demographics, comorbidities and other factors to reduce confounding bias. This information can be crucial for public health policymakers and epidemiologists modeling the disease and could help inform future policies on controlling its spread and the need for booster shots.
medRxiv publishes preprint articles that have not yet undergone peer review. The information in this article should not be treated as established fact or used to guide research or clinical practice.