Probability is the likelihood of a random outcome (called an “event”) and is written as P(event) = x, where x is a number ≥0 and ≤1. Using this notation, the probability of rolling a 2 with a fair 6-sided die is written as P(2) = 1/6. A probability of 0 means that an event is impossible, while a probability of 1 means that an event is certain. For example, the probability of rolling a 7 with a fair 6-sided die is 0, and the probability of rolling a whole number ≥1 and ≤6 is 1.
Conditional probabilities refer to the likelihood of an event occurring given that another event has occurred or will occur. For example, if we know that a die roll has resulted in an odd number, i.e. the only possible outcomes are 1, 3, or 5, then we know that P(4) = 0 and that P(1) = 1/3. Conditional probabilities have their own special notation: P(event2|event1) = x. In English, this means: given that event1 has occurred, the probability of event2 occurring is equal to x, where x is again a number ≥0 and ≤1. Our odd-number die-rolling example would be written as P(1|odd roll) = 1/3.
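The die-rolling example above can be checked by enumerating the sample space directly. A minimal Python sketch (the outcomes and probabilities are exactly those in the text): conditioning on "odd roll" simply restricts the sample space to {1, 3, 5} before counting favorable outcomes.

```python
from fractions import Fraction

# Sample space for a fair 6-sided die
outcomes = [1, 2, 3, 4, 5, 6]

# Unconditional probability P(2): one favorable outcome out of six
p_two = Fraction(outcomes.count(2), len(outcomes))

# Conditional probability P(1 | odd roll): restrict the sample space
# to the odd outcomes, then count favorable outcomes as before
odd = [o for o in outcomes if o % 2 == 1]
p_one_given_odd = Fraction(odd.count(1), len(odd))

print(p_two)            # 1/6
print(p_one_given_odd)  # 1/3
```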
Examples
Conditional probabilities play an important role in science and medicine. For example, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), statistical power, alpha (Type I) error and beta (Type II) error can all be written as conditional probabilities.
sensitivity: P(T+|D+) = x, i.e. given that a patient has the disease, the probability of testing positive equals x
specificity: P(T-|D-) = x, i.e. given that a patient does not have the disease, the probability of testing negative equals x
PPV: P(D+|T+) = x, i.e. given that a patient tests positive, the probability of having disease equals x
NPV: P(D-|T-) = x, i.e. given that a patient tests negative, the probability of not having disease equals x
statistical power: P(clinical trial+|effect exists) = x, i.e. given that there is a difference between groups (e.g. screened vs unscreened test subjects), the probability of a clinical trial being positive equals x
alpha error: P(clinical trial+|effect does not exist) = x, i.e. given that there is no difference between groups (e.g. screened vs unscreened test subjects), the probability of a clinical trial being positive equals x
beta error: P(clinical trial-|effect exists) = x, i.e. given that there is a difference between groups (e.g. screened vs unscreened test subjects), the probability of a clinical trial being negative equals x
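The first four quantities above (sensitivity, specificity, PPV, NPV) can be computed directly from a 2×2 table of test results against disease status. A short Python sketch; the counts are hypothetical, chosen only to illustrate the four conditional probabilities:

```python
# Hypothetical 2x2 table of a diagnostic test (counts are illustrative):
#                 Disease+   Disease-
# Test positive      90          40
# Test negative      10         860
tp, fp, fn, tn = 90, 40, 10, 860

sensitivity = tp / (tp + fn)   # P(T+ | D+): condition on the Disease+ column
specificity = tn / (tn + fp)   # P(T- | D-): condition on the Disease- column
ppv = tp / (tp + fp)           # P(D+ | T+): condition on the Test-positive row
npv = tn / (tn + fn)           # P(D- | T-): condition on the Test-negative row

print(f"sensitivity = {sensitivity:.3f}")  # 0.900
print(f"specificity = {specificity:.3f}")  # 0.956
print(f"PPV = {ppv:.3f}")                  # 0.692
print(f"NPV = {npv:.3f}")                  # 0.989
```

Note how the conditioning event determines whether you divide by a column total (sensitivity, specificity) or a row total (PPV, NPV).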
P-values are also conditional probabilities. Recall that p-values are calculated assuming that the null hypothesis (H0), i.e. that there is no difference between diagnostic tests, treatments, outcomes, etc., is true. Expressed as a conditional probability, a p-value can be written as P(data observed|H0 true) = x, where x is the probability of observing a difference at least as large as the one actually observed, given that the null hypothesis of no difference is true.
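To make P(data observed|H0 true) concrete, here is a minimal exact binomial test. The scenario and numbers are illustrative, not from the text: under the null hypothesis that a coin is fair, the two-sided p-value for observing 8 heads in 10 flips is the probability of a result at least that extreme in either direction (≤2 or ≥8 heads).

```python
from math import comb

# H0: the coin is fair, so each of the 2**n flip sequences is equally likely.
n, k = 10, 8  # n flips, k heads observed

# Two-sided p-value: P(at least as extreme a count | H0 true),
# summing binomial probabilities for counts <= n-k or >= k
p_value = sum(comb(n, i) for i in range(n + 1) if i <= n - k or i >= k) / 2**n

print(round(p_value, 4))  # 0.1094
```

Note that this is a probability about the data assuming H0, not the probability that H0 itself is true.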
Bayes' theorem is another conditional probability and in medicine can be used to calculate the probability that a patient has a particular disease once new information, such as a diagnostic test result, becomes available. There are many versions of Bayes' theorem. Its simplest form is:
P(A|B) = (P(A) x P(B|A))/P(B)
For positive predictive value, Bayes’ theorem is:
P(D+|T+)=[P(D+) x P(T+|D+)]/[P(D+) x P(T+|D+) + P(D-) x P(T+|D-)]
And for negative predictive value, Bayes’ theorem is:
P(D-|T-)=[P(D-) x P(T-|D-)]/[P(D-) x P(T-|D-) + P(D+) x P(T-|D+)]
where P(D+) is the disease prevalence, P(D-) is 1-prevalence, P(T+|D+) is sensitivity, P(T+|D-) is 1-specificity, P(T-|D-) is specificity and P(T-|D+) is 1-sensitivity.
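Plugging illustrative numbers into the two formulas above shows how strongly prevalence drives the predictive values. The prevalence, sensitivity, and specificity below are assumed purely for illustration:

```python
# Assumed values for illustration only:
prevalence = 0.01    # P(D+)
sensitivity = 0.90   # P(T+ | D+)
specificity = 0.95   # P(T- | D-)

p_dpos = prevalence       # P(D+)
p_dneg = 1 - prevalence   # P(D-)

# PPV: P(D+|T+) = [P(D+) x P(T+|D+)] / [P(D+) x P(T+|D+) + P(D-) x P(T+|D-)]
ppv = (p_dpos * sensitivity) / (
    p_dpos * sensitivity + p_dneg * (1 - specificity))

# NPV: P(D-|T-) = [P(D-) x P(T-|D-)] / [P(D-) x P(T-|D-) + P(D+) x P(T-|D+)]
npv = (p_dneg * specificity) / (
    p_dneg * specificity + p_dpos * (1 - sensitivity))

print(f"PPV = {ppv:.3f}")   # 0.154
print(f"NPV = {npv:.4f}")   # 0.9989
```

Even with a quite accurate test, a positive result at 1% prevalence leaves the patient with only about a 15% chance of disease, which is why pre-test probability matters so much when interpreting test results.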