The quest for higher sensitivity and lower false positive rates is a constant motivator in clinical NGS development. While computational optimizations can improve one or the other of these objectives, improving one metric generally requires compromise in the other. One tactic currently in use is to optimize detection for a narrowly focused panel, sometimes only a dozen or so clinically important sites, at the expense of (and essentially ignoring) false positives elsewhere in the read set. We have performed a systematic evaluation of the errors introduced by library construction and sequencing methodology, with the goal of identifying and minimizing physical and computational errors in our targeted NGS libraries. The combination of these advances allows us to reliably call mutations at levels far below the typical 1% limit, not just for a small select group of sites but over much larger regions, such as the complete CDS of clinically relevant genes. Critically, false positives are kept at exceptionally low levels through a combination of the enzymology used in library construction, the use of advanced molecular indices, single primer extension (SPE) technology, and the application of a computational model for enzyme-induced errors.
E. Lader, R. Samara, Z. Wu, J. Ning, Y. Wang, J. Dicarlo
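The abstract does not detail how molecular indices suppress false positives, so the following is a generic, minimal sketch of the standard UMI-consensus idea (not the authors' pipeline): reads sharing a unique molecular index (UMI) derive from one original molecule, so PCR and sequencing errors, which affect only a minority of a UMI family, can be voted out, while a true low-frequency variant is present in every read of its family. All function names and the example data are illustrative assumptions.

```python
from collections import Counter

def umi_consensus(reads_by_umi, min_family_size=2):
    """Collapse reads that share a UMI into one consensus sequence per
    original molecule.  Errors introduced after molecular tagging appear
    in only a minority of a family's reads and are removed by majority
    vote; variants present in the source molecule survive."""
    consensus = {}
    for umi, reads in reads_by_umi.items():
        if len(reads) < min_family_size:
            continue  # too few copies to error-correct reliably
        consensus[umi] = "".join(
            Counter(column).most_common(1)[0][0]  # majority base per position
            for column in zip(*reads)
        )
    return consensus

def variant_support(consensus, pos, alt):
    """Count distinct original molecules (UMI families) carrying an alt
    allele at a position; one molecule counts once, however many times
    it was amplified and sequenced."""
    return sum(1 for seq in consensus.values() if seq[pos] == alt)

# Toy read families (UMI -> aligned reads of equal length).
reads_by_umi = {
    "AACG": ["ACGTA", "ACGTA", "ACGCA"],  # third read: polymerase error at pos 3
    "TTGC": ["ACGTA", "ACGTA"],           # wild-type molecule
    "GGAT": ["ACTTA", "ACTTA", "ACTTA"],  # true variant G>T at pos 2, in every read
}

cons = umi_consensus(reads_by_umi)
print(cons["AACG"])                  # error voted out: ACGTA
print(variant_support(cons, 2, "T")) # one molecule supports the variant: 1
```

In practice, only a variant seen in multiple independent UMI families would be called, which is what pushes the reliable detection limit well below 1% allele frequency.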