Sensitivity

Last revised by Stefan Tigges on 6 Jan 2024

Sensitivity is one of the 4 basic diagnostic test metrics, along with specificity, positive predictive value and negative predictive value. Sensitivity measures how good a diagnostic test is at detecting disease when it is present and is calculated by dividing the number of true positives (TP) by the total number of people with disease, i.e. the sum of true positives and false negatives (FN):

  • sensitivity = TP/(TP + FN)

The formula shows that a high sensitivity is achieved by maximizing true positives and minimizing false negatives.  
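As a minimal sketch, the snippet below applies this formula to a hypothetical set of confusion-matrix counts; the numbers are chosen purely to show the arithmetic.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Return sensitivity (true positive rate) = TP / (TP + FN)."""
    if tp + fn == 0:
        raise ValueError("no diseased subjects: sensitivity is undefined")
    return tp / (tp + fn)

# Hypothetical counts for illustration only:
# 90 diseased subjects test positive (TP), 10 test negative (FN).
print(sensitivity(tp=90, fn=10))  # 0.9, i.e. 90% sensitivity
```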

Sensitivity is also called the true positive rate and can be expressed as a conditional probability:

  • P(Test positive|Disease positive)
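Read as a conditional probability, sensitivity can be estimated from per-subject data by restricting attention to the subjects who truly have the disease and asking what fraction of them test positive. The sketch below does this with hypothetical boolean lists (one entry per subject) and is illustrative only.

```python
# Hypothetical per-subject data: True = disease present / test positive.
disease = [True, True, True, True, False, False, False, False, False, False]
test    = [True, True, True, False, False, True, False, False, False, False]

# P(test positive | disease positive): among diseased subjects,
# the fraction whose test is positive.
diseased_tests = [t for t, d in zip(test, disease) if d]
sensitivity = sum(diseased_tests) / len(diseased_tests)
print(sensitivity)  # 0.75 -> 3 true positives, 1 false negative
```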

Good screening tests must have high sensitivity for 2 reasons. First, a high sensitivity ensures that test subjects with disease have a high likelihood of testing positive. Second, since a sensitive test has few false negatives, subjects testing negative are likely to be true negatives. These characteristics explain why sensitive tests are good at both picking up disease and, when negative, ruling it out.
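To make the second point concrete, the sketch below works through a hypothetical screening scenario; the prevalence, sensitivity and specificity figures are assumptions chosen purely for illustration. A highly sensitive test leaves very few false negatives, so the negative predictive value is high and a negative result effectively rules out disease.

```python
# Hypothetical screening scenario (all figures assumed for illustration).
n = 10_000            # people screened
prevalence = 0.05     # 5% have the disease
sensitivity = 0.98    # true positive rate
specificity = 0.80    # true negative rate

diseased = n * prevalence            # 500 people with disease
healthy = n - diseased               # 9500 people without disease

tp = sensitivity * diseased          # 490 true positives
fn = diseased - tp                   # 10 false negatives
tn = specificity * healthy           # 7600 true negatives
fp = healthy - tn                    # 1900 false positives

npv = tn / (tn + fn)                 # negative predictive value
print(f"false negatives: {fn:.0f}, NPV: {npv:.3f}")  # 10, ~0.999
```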
