A conventional paternity test employs 21 short tandem repeat (STR) DNA markers to assess competing hypotheses of kinship. The combined paternity index (CPI) is converted into a probability of paternity using Bayes’ theorem, assuming a prior probability of 50%. Three possible outcomes can be drawn from the CPI: exclusion, inconclusive, or inclusion of the tested hypothesis.
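As a minimal sketch of the calculation above: with a 50% prior, Bayes’ theorem reduces the posterior probability of paternity to W = CPI / (CPI + 1). The CPI cutoffs in `classify` are illustrative assumptions for the three outcomes, not laboratory standards.

```python
def probability_of_paternity(cpi: float, prior: float = 0.5) -> float:
    """Posterior probability of paternity given the CPI and a prior probability."""
    prior_odds = prior / (1.0 - prior)          # 50% prior -> odds of 1
    posterior_odds = cpi * prior_odds           # Bayes' theorem in odds form
    return posterior_odds / (1.0 + posterior_odds)

def classify(cpi: float) -> str:
    """Illustrative three-way call: exclusion, inconclusive, or inclusion."""
    if cpi < 1:      # evidence favors non-paternity
        return "exclusion"
    if cpi < 100:    # assumed cutoff for illustration only
        return "inconclusive"
    return "inclusion"

print(probability_of_paternity(10_000))  # ≈ 0.9999
print(classify(10_000))
```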

In contrast, the KinTouch Paternity+4 Test applies Mendel’s law of segregation to determine the genetic distance between the test subjects. Genetic distance is assessed by comparing DNA sequence similarity through whole-genome hybridization, taking into account 3 million single nucleotide polymorphisms (SNPs) that occur naturally among human genomes. DNA sequence similarities of 50% and 25% correspond directly to first- and second-degree genetic relatedness respectively, revealing a comprehensive range of kinship including paternity, siblings, half siblings, grandparents/grandchildren, and aunts/uncles/nephews/nieces.
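The percentages above follow from the coefficient of relationship, r = (1/2)^degree. A short sketch, with the relationship labels shown only as illustrative examples of each degree:

```python
def expected_sharing(degree: int) -> float:
    """Expected fraction of DNA shared for a given degree of relatedness."""
    return 0.5 ** degree

FIRST_DEGREE = ["parent/child", "full sibling"]            # ~50% shared
SECOND_DEGREE = ["half sibling", "grandparent/grandchild",
                 "aunt/uncle-nephew/niece"]                # ~25% shared

print(expected_sharing(1))  # 0.5
print(expected_sharing(2))  # 0.25
```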

The attributes of the two tests are tabulated below to compare their expediency, accuracy, and reliability.

ATTRIBUTE        CONVENTIONAL STR TEST                      KINTOUCH PATERNITY+4 TEST
SAMPLE           Cheek swab                                 Finger touch
TEST SUBJECT     Mother-child-father trios                  Child-father duos
TARGET ALLELE    21 short tandem repeats (STRs)             3 million single nucleotide polymorphisms (SNPs) in human genomes
PRINCIPLE        Bayes’ theorem for paternity exclusion     Mendel’s law of segregation
TECHNIQUE        PCR amplification and electrophoresis      Genome crossover hybridization
APPROACH         Exclusion probability                      Genetic distance
KINSHIP          Paternity/maternity only                   All-encompassing

*Bayesian statistical methods use Bayes’ theorem to compute and update probabilities after obtaining new data. Bayes’ theorem describes the conditional probability of an event based on data as well as prior information or beliefs about the event or conditions related to the event.
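The update described in this footnote can be sketched in a few lines: a prior probability is revised by a likelihood ratio, which measures how much more probable the new data are under one hypothesis than under the alternative. The numbers used are illustrative.

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Return the posterior probability after observing the new data."""
    numerator = likelihood_ratio * prior
    return numerator / (numerator + (1.0 - prior))

# A neutral 50% prior revised by data 99x more likely under the hypothesis:
print(bayes_update(0.5, 99.0))  # 0.99
```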

Two major issues regarding uncertainty must be addressed in the statistical evaluation of DNA evidence. One is associated with the characteristics of a database, such as its size and whether it is representative of the appropriate population. The other might be called the subpopulation problem. In the first instance, inferences based on values in a database might be uncertain because the database is not compiled from a sample of the most relevant population or the sample is not representative. If the database is small, the values derived from it can be uncertain even if it is compiled from a scientifically drawn sample; this can be addressed by providing confidence intervals on the estimates. The second issue, the subpopulation problem, is broader than the first. Although the formulae might provide good estimates of the match probability for the average member of the population, they might not be appropriate for a member of an unusual subgroup.
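The confidence-interval remedy mentioned for small databases can be sketched with a normal-approximation (Wald) interval on an allele frequency; the z value 1.96 corresponds to ~95% confidence. The sample sizes below are hypothetical, chosen only to show how a small database widens the uncertainty around the same estimated frequency.

```python
import math

def wald_ci(count: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a proportion count/n."""
    p = count / n
    half_width = z * math.sqrt(p * (1.0 - p) / n)
    return (max(0.0, p - half_width), min(1.0, p + half_width))

# Same 10% allele frequency, databases of 100 vs. 10,000 samples:
print(wald_ci(10, 100))      # wide interval
print(wald_ci(1000, 10000))  # much narrower interval
```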