The **p-value** is defined as the probability of observing a value or effect at least as extreme as the one actually observed, assuming the null hypothesis is true. In other words, the p-value is calculated on the assumption that the null hypothesis is true.

By convention, a p-value ≤0.05 is considered statistically significant. It should be noted that 'statistically significant' is not the same as 'highly significant', as this would imply the p-value is a measure of treatment effectiveness, which it is not.

The p-value is used in deciding whether the null hypothesis stated before the start of the study should be rejected (i.e. it is an indication of the likelihood of making a type I error) ^{1}. The p-value gives no information regarding the risk of a type II error.
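As a minimal sketch of how a p-value arises in practice, the following example runs a two-sample t-test on hypothetical measurements (all group sizes, means, and standard deviations here are invented for illustration). Because both groups are drawn from the same distribution, the null hypothesis is true by construction, and the p-value reflects how often a difference this large would arise by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: control and treatment groups drawn from the SAME
# distribution, so the null hypothesis (no true difference) holds here.
control = rng.normal(loc=100, scale=15, size=30)
treatment = rng.normal(loc=100, scale=15, size=30)

# Two-sample t-test: the p-value is the probability of obtaining a
# t-statistic at least this extreme if both groups share one true mean.
t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

Rejecting the null whenever p ≤ 0.05 means that, over many such experiments where the null is actually true, roughly 5% would be wrongly declared significant — which is the type I error rate the cutoff controls.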

#### Practical points

- if a p-value is equal to or below the chosen cutoff, this does not mean that the results of the study are valid or that the study is meaningful; it only tells you that a difference between the mean values of the test group and control group as large as the one observed would occur by chance only ≤5% of the time (assuming a 0.05 cutoff)
- a study with p=0.001 is not necessarily 50x better than one with p=0.05; the difference between the means must also be taken into account
- e.g. a study that shows a 1% increase in enhancement with a new MRI contrast agent at p=0.001 is not nearly as interesting as a new agent that shows a 20% increase in enhancement at p=0.05; however, in the former study there is a considerably lower probability that the observed difference occurred by chance
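The contrast drawn in the points above can be sketched numerically. In this hypothetical illustration (effect sizes, sample sizes, and the baseline are all invented), a tiny 1% effect measured in a very large sample can produce a smaller p-value than a large 20% effect measured in a small sample — the p-value tracks how unlikely the difference is under the null, not how large or clinically interesting the effect is.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
baseline, sd = 100.0, 15.0  # hypothetical enhancement units

# Tiny effect (1% shift) but a very large sample (n = 10,000 per arm)
small_effect_p = stats.ttest_ind(
    rng.normal(baseline, sd, 10_000),
    rng.normal(baseline * 1.01, sd, 10_000),
).pvalue

# Large effect (20% shift) but a small sample (n = 15 per arm)
large_effect_p = stats.ttest_ind(
    rng.normal(baseline, sd, 15),
    rng.normal(baseline * 1.20, sd, 15),
).pvalue

print(f"1% effect, n=10,000 per arm: p = {small_effect_p:.4f}")
print(f"20% effect, n=15 per arm:    p = {large_effect_p:.4f}")
```

The exact p-values depend on the random draw, but the design point stands: sample size and effect size jointly drive the p-value, so a smaller p does not by itself mean a more important finding.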