Since its emergence in late 2019, SARS-CoV-2, the virus responsible for COVID-19, has spread rapidly, causing significant global morbidity and mortality. Although early outbreaks were concentrated in China and Italy, the United States (US) was the global epicenter for most of 2020, accounting for approximately one fourth of all cases globally by March 2021 . Despite the approval of multiple SARS-CoV-2 vaccines, production, distribution, and uptake hurdles combined with the emergence of novel viral variants indicate that achieving herd immunity remains a distant prospect [2, 3]. Therefore, comprehensive testing, contact tracing, and infectious case isolation remain important interventions to continue slowing the spread of SARS-CoV-2 and maintaining health system integrity .
As testing availability has increased, epidemiological questions have arisen regarding the optimal deployment of different test strategies. Diagnostic tests for SARS-CoV-2, none of which perfectly reflect viral carriage , fall into two broad categories: antigen tests and real-time reverse transcription polymerase chain reaction (RT-PCR) tests. While both are used as diagnostics, antigen tests detect the presence of a specific viral antigen and can return results within 15 minutes, while RT-PCR tests amplify genomic sequences and therefore require longer turnaround times . Substantial attention has been paid to the lower sensitivity of antigen testing compared with that of RT-PCR testing . Early studies of SARS-CoV-2 antigen testing reported relatively low sensitivity with respect to RT-PCR, leading some public health officials to place less confidence in antigen testing than in RT-PCR testing for ending the COVID-19 pandemic. However, when antigen and RT-PCR tests were compared against viral culture, a proxy for transmissibility, one rapid antigen test (Becton Dickinson Veritor) had a negative percent agreement of 96.4% (95% CI: 82.3, 99.4) and a positive percent agreement of 90.0% (CI: 76.3, 97.6), while the positive percent agreement for the RT-PCR assay was 73.7% (CI: 60.8, 85.3) . Another study found that a different rapid antigen test (Quidel SARS Sofia FIA) reflected viral culture to a similar degree as RT-PCR during the infectious period, though RT-PCR performed better at identifying future or past infectiousness . If confirmed through further study, these findings suggest that antigen tests may perform better than RT-PCR tests at discriminating actively infectious infections, complementing reports showing that RT-PCR tests detect low levels of viral nucleic acid that may not indicate current infectiousness [10, 11].
Consequently, the two test types may have different optimal deployment strategies. The more rapid antigen tests may be better deployed when a delay in test results would delay the isolation of an infectious individual (e.g., ), while RT-PCR tests may be better deployed when diagnosing every infection or disease case is critical (e.g., ).
Recent studies have demonstrated that regardless of test sensitivity, widespread, high-frequency testing is sufficient to significantly reduce the burden of COVID-19 [12, 14, 15]. While these findings highlight the importance of surveillance testing, they do not provide an explicit decision-support framework for clinical and public health decision-makers choosing testing strategies in congregate settings. Building on this past work, we explore the population-level epidemiological effects of employing different COVID-19 surveillance testing strategies, including RT-PCR, antigen, and two reflex testing strategies, in two non-acute care congregate living settings that bookend the risk spectrum: a nursing home and a university residence hall system.
The epidemiological model comprises a dynamical model of SARS-CoV-2 transmission with a layered statistical model of hospitalizations, ICU hospitalizations, and deaths.
Transmission dynamics model.
We developed a compartmental model of SARS-CoV-2 transmission dynamics with surveillance testing by modifying a Susceptible-Exposed-Infectious-Removed (SEIR) model and including individual-based accounting of infectious state (Fig 1). There are two simultaneous processes in the model: the disease process and the testing process. In the disease process (Fig 1), individuals start as fully susceptible (S) and become exposed (E) according to a frequency-dependent probability of exposure. This probability is defined as the product of a rate β and the infectious proportion of the population, accounting for quarantine and isolation, as described below (see S1 Equation in S1 File). The value of β is derived from the product of R0, which we assume to be uniformly distributed between 1.2 and 1.5 , and the recovery rate, γ, which we assume to be uniformly distributed between 1/2.6 and 1/6 per day . Exposed individuals have had a successful exposure event but are not yet infectious; they become infectious (I) at a rate σ and are then able to infect others. We assume σ is 1/3 per day . Infectious individuals are removed/recover (R) at a rate γ. The removed/recovered compartment contains individuals who are no longer infectious, including both recovered and dead individuals; we use the classic epidemiologic definition of this term and do not use it to refer to clinical recovery. We seed our simulations at a prevalence of 1.8% in the residence hall setting and 3.6% in the nursing home setting.
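For intuition, the disease process described above can be sketched as a daily discrete-time SEIR update. This is a minimal Python illustration only (the paper's model is individual-based, stochastic, and implemented in R; the random seed, loop structure, and deterministic flows here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Parameter draws as described in the text.
R0 = rng.uniform(1.2, 1.5)        # basic reproduction number
gamma = rng.uniform(1/6, 1/2.6)   # recovery rate (per day)
beta = R0 * gamma                 # transmission rate
sigma = 1/3                       # rate of becoming infectious (per day)

def seir_step(S, E, I, R, beta, sigma, gamma, N):
    """One daily time step of the basic SEIR flows (no testing process)."""
    new_E = beta * S * I / N   # frequency-dependent exposure
    new_I = sigma * E          # E -> I
    new_R = gamma * I          # I -> R (removed/recovered, classic sense)
    return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

# Seed at 1.8% prevalence in a 3150-person residence hall, as in the text.
N = 3150
I0 = round(0.018 * N)
S, E, I, R = float(N - I0), 0.0, float(I0), 0.0
for _ in range(183):  # 183-day simulation duration
    S, E, I, R = seir_step(S, E, I, R, beta, sigma, gamma, N)
```

The full model replaces the population-level recovery rate with a fixed per-individual infectious duration, as described in the implementation section.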
Fig 1. Schematic of the COVID-19 epidemiologic model.
Individuals in the population start as susceptible (S) and become exposed (E) at a rate β. Exposed individuals become infectious (I) at a rate σ, and recover (R) at a rate γ. Simultaneously, surveillance testing occurs at a rate proportional to the population makeup, and individuals awaiting test results are in a “leaky” quarantine. Tested susceptibles (TS) can become exposed (TE), then infectious (TI), and can infect susceptibles (both S and TS). Since tested infectious individuals (TI) are quarantining, their infectiousness is reduced by a factor q. Likewise, since tested susceptibles (TS) are also quarantining, their susceptibility is reduced by a factor q. Individuals who test positive are isolated (Q). We assume isolation is perfect, and thus individuals can only progress through their disease process (QE -> QI -> QR); susceptibles who are isolated cannot become infected, and infected individuals who are isolated cannot cause infections. After the isolation period is over (14 days in the standard condition), individuals are returned to the general population, retaining their current disease state. Antigen and RT-PCR tests have a specified positive (ϕe) and negative (ϕp) percent agreement with viral culture (see text). ϕa is 0.0001, representing imperfect test specificity. Note that we use positive percent agreement for tests of infectious individuals (I) and negative percent agreement for both exposed (E) and recovered (R) individuals, as it has been shown that RT-PCR may detect infection among individuals who are no longer infectious . Here, the color and line type of the arrows indicate whether the test result was “correct” or “incorrect” with respect to infectiousness, with dotted lines indicating incorrect test results (i.e., false negative or false positive), red lines indicating a positive test, blue lines indicating a negative test, and black lines indicating that no test result was returned.
In the testing process (Fig 1), we assume a fixed proportion of the population, τ, is tested each day (surveillance testing). The assumptions and parameter values for our baseline simulation settings are as follows (S1 Table in S1 File). We assume that susceptible individuals who test positive (TS; false positive) do so at a rate of ϕa = 0.0001 to account for imperfect test specificity. For tested exposed (TE) and tested recovered (TR) individuals, we use the negative percent agreement of each test compared to culture reported in published analyses for RT-PCR (ϕp = 95.5%) and for antigen tests (ϕp = 98.7%) . For tested infectious individuals (TI), we use the antigen test positive percent agreement ϕe = 96.4% and RT-PCR positive percent agreement ϕe = 100% reported in the same analysis . We assume that individuals who test positive are isolated (Q) and are returned from isolation after an average of 14 days (ω = 0.07 in Fig 1, though return from isolation is determined as 1/ω days after the start of isolation; see below); we also explore a shorter 10-day isolation (see S1 File). We assume that individuals reduce their mixing while awaiting test results and are therefore both less susceptible and less infectious. As a baseline assumption, we use a 50% reduction in mixing due to this partial quarantine (q). We also explore two additional possible reductions of 25% and 75% (S6, S7 Figs in S1 File). Other than this partial quarantine, tested individuals move through the disease course as normal. Test results for antigen testing are returned on the same day as test administration (θ = 1), while results are returned in 48 hours for RT-PCR (θ = 1/3), an approximation of the average US turnaround time for RT-PCR test results (as in ). We explore two additional possible RT-PCR test turnaround times of 1 and 4 days in the supplement.
We assume isolation resulting from a positive test result is perfect, and thus individuals can only progress through their disease process (QE -> QI -> QR); susceptibles who are isolated cannot become infected and infected individuals who are isolated cannot cause infections. After the isolation period is over (14 days in the standard condition), individuals are returned to the general population, retaining their current disease state.
We selected two settings to bookend the risk spectrum for communal living scenarios: a nursing home, where disease outcomes are more severe, and a university residence hall system, where disease outcomes are less severe. In the nursing home setting, the population was 101, the hospitalization rate was 25% of infections, the ICU admission rate was 35.3% of hospitalizations, and the fatality rate was 5.4% of total infections. In the residence hall system setting, the population was 3150, the hospitalization rate was 3.9% of infections, the ICU admission rate was 23.8% of hospitalizations, and the fatality rate was 0.01% of total infections (values reflect estimates from CDC pandemic planning scenarios and expert guidance; ).
We explore five testing strategies for each of these two settings, assuming asymptomatic screening of 1%, 2%, 5%, or 10% of the population each day. In each setting and for each testing strategy, we ran the simulation for a duration of 183 days. The five testing strategies are as follows:
- Test once with RT-PCR test (“stand-alone PCR”)
- Test once with antigen test (“stand-alone antigen”)
- Test once with antigen test, then retest all negative results with antigen test two days later (“reflex to antigen”)
- Test once with antigen test, then retest all symptomatic negative results with RT-PCR test (“reflex symptomatic to PCR”)
- No surveillance testing (“no testing”)
Testing strategy 4 relies on symptom presentation to flag symptomatic individuals with negative antigen tests for retesting. We assume that 60% of all infections are symptomatic (CDC 2020b). To account for the potential influence of other circulating respiratory infections, we also assume a background rate of influenza-like illness unrelated to SARS-CoV-2: each day, individuals are randomly drawn from the population with a 2% probability of expressing influenza-like illness symptoms that could be confused with COVID-19 [21, 22]. Individuals expressing symptoms of COVID-19 or influenza-like illness are flagged for retesting with RT-PCR under this strategy. Flagged individuals are quarantined in the same way as individuals awaiting test results.
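The daily symptom-flagging logic of the reflex-symptomatic strategy can be sketched as below (an illustrative Python sketch; the boolean-array representation and function name are assumptions, not the paper's implementation):

```python
import numpy as np

P_SYMPTOMATIC = 0.60  # fraction of SARS-CoV-2 infections that are symptomatic
P_ILI = 0.02          # daily probability of unrelated influenza-like illness

def flag_for_pcr_retest(covid_symptomatic, n_people, rng):
    """Daily flags for RT-PCR retesting under 'reflex symptomatic to PCR'.

    covid_symptomatic: boolean array marking antigen-negative individuals who
    are symptomatic COVID-19 cases. ILI symptoms are drawn fresh each day with
    2% probability; anyone symptomatic from either cause is flagged.
    """
    ili = rng.random(n_people) < P_ILI
    return covid_symptomatic | ili

# Example usage on a small population with no current COVID-19 symptoms.
rng = np.random.default_rng(0)
flags = flag_for_pcr_retest(np.zeros(100, dtype=bool), 100, rng)
```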
To determine the relative impact of different testing strategies on the disease course, we compare a suite of metrics across the four testing strategies and against model output in the absence of testing. The metrics reported are the number of infections averted, hospitalizations averted, and deaths averted, and the reduction in infections per test.
Since using test performance with respect to viral culture as a proxy for infectiousness is a novel approach to modeling the effectiveness of infectious disease surveillance testing strategies, we also conducted simulations using antigen test sensitivity and specificity with respect to RT-PCR test results. RT-PCR is capable of detecting levels of viral RNA that do not indicate infectiousness (i.e., “recovered” in our model) . While our main simulations use test positive percent agreement to determine test results exclusively for infectious individuals, in these alternate simulations we use test sensitivity to determine test results for both infectious individuals and recovered individuals less than 54 days after recovery, approximating the median duration of RT-PCR-detectable viral shedding in one analysis of 36 patients . We use test specificity to determine test results for the susceptible and exposed classes and for recovered individuals more than 54 days after recovery. Here, sensitivity and specificity are the positive and negative percent agreement, respectively, with RT-PCR results, considered the testing standard. Therefore, RT-PCR sensitivity and specificity are 100%, whereas antigen test sensitivity was 84.7% and specificity was 99.5%, following published results .
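The detection rules for these alternate (vs.-RT-PCR) simulations reduce to a single lookup by compartment and time since recovery. A Python sketch for illustration (state labels and function name are assumptions):

```python
SHEDDING_WINDOW_DAYS = 54  # approximate median duration of RT-PCR-detectable shedding

# Performance with RT-PCR as the reference standard, per the text.
ANTIGEN_SENS, ANTIGEN_SPEC = 0.847, 0.995
PCR_SENS, PCR_SPEC = 1.0, 1.0

def positive_probability(state, days_since_recovery, sens, spec):
    """Probability of a positive result in the alternate simulations."""
    if state == "I":
        return sens
    if state == "R" and days_since_recovery < SHEDDING_WINDOW_DAYS:
        return sens  # residual viral RNA still detectable post-infectiousness
    # susceptible, exposed, or long-recovered: governed by specificity
    return 1.0 - spec
```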
We constructed the model in the R statistical environment . The discrete-time model includes flows between states that are determined by rates at the population level and by functions at the individual level. The time step is one day. The probability of exposure is frequency dependent and varies at each time step and across individuals. It is the sum of two terms: the product of β and the infectious, unquarantined proportion of the population at each time step, plus the product of β, the quarantine mixing reduction (q), and the infectious, quarantined proportion. β is determined by the product of random draws from uniformly distributed values of R0 and γ. The probability of becoming infectious is determined by the rate σ. Recovery, however, is modeled not with a population-level rate (γ) but with an individual-based approach using a fixed duration of infectiousness; for each infection, a recovery day is designated that is 1/γ time steps from the day of infection. Similarly, tests are returned by defining a day of test return for each test that is 1/θ time steps from the day of testing, and return from isolation is determined by defining an end day of the isolation period as 1/ω days after isolation begins.
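The exposure probability just described can be written out explicitly. A Python sketch of the stated formula (the model itself is in R; here q is treated as the residual mixing multiplier for quarantined individuals, 0.5 at baseline):

```python
def exposure_probability(beta, i_free, i_quarantined, n, q=0.5):
    """Daily probability that a free-mixing susceptible is exposed.

    i_free: infectious individuals mixing freely.
    i_quarantined: infectious individuals in 'leaky' quarantine awaiting
    test results, whose mixing is scaled by q.
    """
    return beta * (i_free / n) + beta * q * (i_quarantined / n)
```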
We conducted 10,000 stochastic simulations for each testing strategy in each setting.
To improve the decision support aspect of our model, we layered a cost-effectiveness analysis on top of the epidemiological model to estimate the costs of each testing strategy relative to the relevant outcomes. We used parameter values estimated from literature sources and expert input (S2 Table in S1 File). All costs are expressed in 2021 US dollars and are from the perspective of the congregate setting decision makers. We estimated total testing and outcome costs per strategy and calculated Cost per Incremental Infection Avoided and Unnecessary Quarantine Costs. We also calculated Incremental Cost Effectiveness Ratios (ICERs) to compare the four testing strategies with the no testing strategy using the standard calculation: ICER = (cost of strategy − cost of no testing) / (effect of strategy − effect of no testing).
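The ICER computation is the standard one; a small worked Python example (the dollar and infection figures below are hypothetical placeholders, not results from this analysis):

```python
def icer(cost_strategy, cost_comparator, effect_strategy, effect_comparator):
    """Incremental cost-effectiveness ratio vs. a comparator (here, no testing).

    With effects measured in infections averted, the ICER is the additional
    cost per additional infection averted.
    """
    return (cost_strategy - cost_comparator) / (effect_strategy - effect_comparator)

# Hypothetical illustration: a strategy costing $50,000 vs. $10,000 with no
# testing, averting 80 vs. 0 infections -> $500 per additional infection averted.
example = icer(50_000, 10_000, 80, 0)
```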
Testing strategy costs
We considered the direct cost of testing to be the number of tests per day multiplied by the price per test, plus personal protective equipment (PPE) costs per day and labor costs per day. For each type of test, the total cost is the sum of these daily costs multiplied by the duration of the simulation. We assume that RT-PCR samples are sent to an external laboratory and therefore incur no direct capital costs to the decision maker, and we assume PPE use is the same for either type of testing. The no testing strategy by definition has no direct testing costs.
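This testing-cost calculation can be sketched as follows (Python for illustration; the daily-cost structure follows the description above, and the example parameter values are hypothetical placeholders, not the S2 Table values):

```python
def total_testing_cost(tests_per_day, price_per_test, ppe_per_day,
                       labor_per_day, duration_days=183):
    """Direct testing cost over the simulation.

    Daily cost = tests/day * price/test + PPE/day + labor/day, summed over
    the simulation duration (183 days in the baseline settings).
    """
    daily = tests_per_day * price_per_test + ppe_per_day + labor_per_day
    return daily * duration_days

# Hypothetical example: 10 tests/day at $25 each, $30/day PPE, $100/day labor.
cost = total_testing_cost(10, 25, 30, 100)
```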
Outcomes cost model
We then consider the direct costs to the decision makers who purchase and conduct testing in each of our two settings: universities and nursing homes. For nursing homes, staff absenteeism due to quarantine incurs costs measured as labor productivity loss, a cost limited to staff and a conservative proxy for true costs, which could include temporary staffing and other costs. Positive test results in either residents or staff incur labor costs for the administrative burden of reporting positives and initiating cleaning protocols. For positive residents, additional PPE and labor costs are incurred due to the assumed need for isolation and additional staff care for those residents. Total outcomes cost is therefore calculated as the sum of these cost components, where labor cost is the product of the hourly wage and the number of hours worked for each healthcare worker type.
For university residence halls, the cost per infection (Oncosts) was provided based on expert input.
See S2 Table in S1 File for cost parameter values and references.
Compared to simulations without testing, all testing strategies reduced the peak and total infections in simulated epidemics (Fig 2). Greater reductions in infections were achieved with higher rates of daily screening. The relative differences in the testing strategies’ performance in reducing infections were largely maintained across both the nursing home and residence hall settings.
In the nursing home setting, no statistically significant differences in the percentage of infections averted were found across testing strategies at low levels of surveillance testing (1%, 2%). At high levels of surveillance testing (5%, 10%), the reflex to antigen strategy (34% and 53% averted) outperformed stand-alone PCR (26% and 44% averted), which outperformed stand-alone antigen (20% and 35% averted) (Figs 2 and 3). The reflex symptomatic to PCR strategy did not statistically significantly improve performance versus stand-alone antigen in any case (Fig 2).