Cost considerations for efficient group testing studies

Shih-Hao Huang, Mong-Na Lo Huang and Kerby Shedden
Statistica Sinica
A group testing study involves collecting samples from multiple individuals, pooling them, and testing them as a group. A realistic cost model for such a study should consider both the costs of collecting the samples and of running the assays. Moreover, an efficient design should accommodate inaccuracies in any prespecified nominal test sensitivity and specificity values, and allow them to vary with group size. In this work, we derive locally optimal designs in this setting, and characterize…


Optimal group testing designs for prevalence estimation combining imperfect and gold standard assays

We consider group testing studies where a relatively inexpensive but imperfect assay and a perfectly accurate but higher-priced assay are both available. The primary goal is to accurately estimate

Group Testing with Consideration of the Dilution Effect

We propose a method of group testing by taking dilution effects into consideration. We estimate the dilution effect based on massively collected RT-PCR threshold cycle data and incorporate them into

Optimal group testing designs for estimating prevalence with uncertain testing errors

It is demonstrated that the proposed locally D- and Ds-optimal designs have high efficiencies even when the prespecified values of the parameters are moderately misspecified.

Regression models for group testing data with pool dilution effects.

The new approach, which exploits the information readily available from underlying continuous biomarker distributions, provides reliable inference in settings where pooling would be most beneficial and does so even for larger pool sizes.

A Two-Stage Adaptive Group-Testing Procedure for Estimating Small Proportions

Abstract A method for adaptively estimating a proportion p using group-testing procedures is presented and analyzed, with emphasis placed on a two-stage procedure. This estimator is compared to the

Optimal designs for dose finding studies with an active control

Dose finding studies often compare several doses of a new compound with a marketed standard treatment as an active control. In the past, however, research has focused mostly on experimental designs

Optimum Experimental Designs, with SAS

This book presents the theory and methods of optimum experimental design, making them available through the use of SAS programs, and stresses the importance of models in the analysis of data and introduces least squares fitting and simple optimum experimental designs.

Informative Dorfman Screening

This article uses individuals’ risk probabilities to formulate new informative decoding algorithms that implement Dorfman retesting in a heterogeneous population, and introduces the concept of “thresholding” to classify individuals as “high” or “low risk,” so that separate, risk‐specific algorithms may be used, while simultaneously identifying pool sizes that minimize the expected number of tests.
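The snippet above refers to choosing pool sizes that minimize the expected number of tests under Dorfman retesting. As a minimal sketch of that idea (classical homogeneous-risk Dorfman testing with a perfect assay, not the article's informative, risk-specific algorithms), the expected tests per individual for prevalence p and pool size n is 1/n + 1 - (1 - p)^n, and the cost-minimizing pool size can be found by a direct search:

```python
def dorfman_expected_tests_per_person(p: float, n: int) -> float:
    """Expected tests per individual under classical Dorfman two-stage
    testing with a perfect assay: one pooled test shared by n people,
    plus n individual retests whenever the pool tests positive."""
    if n == 1:
        return 1.0  # no pooling: everyone is tested once
    return 1.0 / n + 1.0 - (1.0 - p) ** n

def best_pool_size(p: float, max_n: int = 100) -> int:
    """Pool size minimizing expected tests per person for prevalence p."""
    return min(range(1, max_n + 1),
               key=lambda n: dorfman_expected_tests_per_person(p, n))
```

For example, at a prevalence of 1% the search selects a pool size of 11, reducing the expected cost to roughly 0.20 tests per person versus 1.0 for individual testing. The informative algorithms in the article generalize this by letting the pool size depend on individuals' risk probabilities.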

Prevalence estimation subject to misclassification: the mis‐substitution bias and some remedies

This article proposes simple designs and methods for prevalence estimation that do not require known values of assay sensitivity and specificity, and develops methods for estimating parameters and for finding or approximating optimal designs.

Cost analysis in choosing group size when group testing for Potato virus Y in the presence of classification errors

This article analyzes diagnostic test data in which specimens are grouped for batched testing to offset costs, and shows that the Bayesian method can update the prior information to more closely approximate the intrinsic characteristics of the parameters of interest.

Group Testing With Blockers and Synergism

Discovery and development of a new drug can cost hundreds of millions of dollars. Pharmaceutical companies have used group testing methodology routinely as one of the efficient high throughput