Bayes theorem, also known as Bayes rule or Bayes law, is a theorem in statistics that describes the probability of one event or condition as it relates to another known event or condition. Mathematically, the theorem can be expressed as P(A|B) = P(B|A) × P(A) / P(B), where, given that P(B) is not zero, P(A|B) is the conditional probability of A given that B is true, P(B|A) is the conditional probability of B given that A is true, and P(A) and P(B) are the probabilities of observing A and B independently of each other.
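The formula above can be sketched as a small function (a minimal illustration; the function name and example probabilities are arbitrary):

```python
def bayes(p_b_given_a, p_a, p_b):
    """Return P(A|B) = P(B|A) * P(A) / P(B), assuming P(B) > 0."""
    if p_b == 0:
        raise ValueError("P(B) must be non-zero")
    return p_b_given_a * p_a / p_b

# Example: P(B|A) = 0.9, P(A) = 0.1, P(B) = 0.2  ->  P(A|B) = 0.45
print(bayes(0.9, 0.1, 0.2))
```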
Traditionally in medicine, Bayes theorem has often been taught as it applies to diagnostic decision making. For example, if a patient has a certain clinical presentation before a diagnostic imaging test, a radiologist can use the prevalence of a disease to estimate the prior, or pretest, probability of that disease and then interpret the imaging in light of this probability. Positive predictive value and negative predictive value can be derived using the theorem. Bayes theorem also has utility in the diagnosis of entities with similar imaging appearances. As a concrete example, if imaging characteristics of a rare diagnosis may also appear in a common diagnosis, and the relative frequency of the two diagnoses in the population is known, then the probability that these characteristics represent the rare diagnosis can be calculated, although in practical clinical application the clinical context would also weigh in the final assessment.
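The derivation of predictive values from Bayes theorem can be sketched as follows, combining test sensitivity and specificity with disease prevalence (the numbers are illustrative only, not drawn from any particular test):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disease | positive test) via Bayes theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sensitivity, specificity, prevalence):
    """Negative predictive value: P(no disease | negative test)."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# Hypothetical test: sensitivity 0.9, specificity 0.95, prevalence 1%
print(ppv(0.9, 0.95, 0.01))  # low PPV despite a good test, because prevalence is low
print(npv(0.9, 0.95, 0.01))
```

Note how a low pretest probability (prevalence) drives the positive predictive value down even for an accurate test, which is the central lesson of applying Bayes theorem to diagnosis.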
Algorithms that include Bayesian methodologies are at work in some MRI and perfusion-weighted imaging processing software for noise reduction and other technical tasks. Bayes theorem underpins Bayesian inference, which is used in many types of algorithms for radiology-related artificial intelligence, particularly Bayesian network algorithms.
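The Bayesian inference underlying such algorithms can be sketched as sequential updating, where each posterior becomes the prior for the next observation (a toy illustration with arbitrary likelihoods, not any particular vendor's implementation):

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update: posterior P(hypothesis | evidence)."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Start from a neutral prior of 0.5 and fold in two observations in sequence;
# each pair is (P(evidence | hypothesis), P(evidence | not hypothesis)).
p = 0.5
for lt, lf in [(0.8, 0.3), (0.7, 0.4)]:
    p = update(p, lt, lf)
print(p)  # belief rises as evidence favouring the hypothesis accumulates
```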
History and etymology
The theorem was devised by the British statistician, philosopher, and Presbyterian minister Thomas Bayes (1701-1761).