Items tagged “statistics”
45 results found
Sensitivity is one of the 4 basic diagnostic test metrics in addition to specificity, positive predictive value and negative predictive value. Sensitivity is a measure of how good a diagnostic test is at detecting disease when it is present and is calculated by dividing the number of true positi...
Specificity is one of the 4 basic diagnostic test metrics in addition to sensitivity, positive predictive value and negative predictive value. Specificity is a measure of how good a diagnostic test is at identifying people who are healthy and is calculated by dividing the number of true negative...
Positive predictive value
Positive predictive value (PPV) is one of the 4 basic diagnostic test metrics in addition to sensitivity, specificity and negative predictive value. Positive predictive value is a measure of how often someone who tests positive for disease actually has disease and is calculated by dividing the n...
Negative predictive value
Negative predictive value (NPV) is one of the 4 basic diagnostic test metrics in addition to sensitivity, specificity and positive predictive value. Negative predictive value is a measure of how often someone who tests negative for disease does not have disease and is calculated by dividing the ...
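The four basic metrics described above all come from the same 2×2 table of test outcomes. A minimal Python sketch, using made-up counts (the numbers are illustrative, not from any study):

```python
# Sketch: computing the four basic diagnostic test metrics from
# hypothetical 2x2 counts. tp/fp/fn/tn are invented numbers.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # true positives / all with disease
        "specificity": tn / (tn + fp),  # true negatives / all without disease
        "ppv": tp / (tp + fp),          # true positives / all positive tests
        "npv": tn / (tn + fn),          # true negatives / all negative tests
    }

m = diagnostic_metrics(tp=90, fp=20, fn=10, tn=80)
# sensitivity = 90/100 = 0.9, specificity = 80/100 = 0.8
```

Note that sensitivity and specificity depend only on the test, while PPV and NPV also depend on how common the disease is in the tested population.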
The p-value is defined as the probability of observing a value or effect at least as extreme as the one actually observed, assuming the null hypothesis is true. In other words, the p-value is calculated on the assumption that the null hypothesis is true. By convention, a p-value ≤0.05 is considered statistically si...
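One concrete way to see this definition is a permutation test: shuffle the group labels (which enforces the null hypothesis of no group difference) and count how often the shuffled difference is at least as extreme as the observed one. A minimal sketch, using Python's standard library only; the data passed in are hypothetical:

```python
import random
from statistics import mean

def permutation_p_value(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation-test p-value for a difference in means.

    The p-value is the fraction of label shufflings (i.e. samples drawn
    under the null hypothesis) whose mean difference is at least as
    extreme as the observed difference.
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter
```

Two clearly separated groups (e.g. `[0, 0, 1, 1]` vs `[10, 10, 11, 11]`) yield a small p-value; identical groups yield a p-value of 1.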
Receiver operating characteristic curve
The receiver operating characteristic (ROC) curve is a statistical plot used frequently in radiology, particularly with regard to limits of detection and screening. The curves on the graph demonstrate the inherent trade-off between sensitivity and specificity: y-axis: sensitivity; x-a...
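The curve is traced by sweeping a decision threshold over the test's output and recording sensitivity against 1 − specificity at each threshold. A minimal sketch with invented scores (higher score = more suspicious for disease):

```python
# Sketch: ROC points from hypothetical continuous test scores.
# scores_diseased / scores_healthy are made-up example values.

def roc_points(scores_diseased, scores_healthy, thresholds):
    points = []
    for t in thresholds:
        sens = sum(s >= t for s in scores_diseased) / len(scores_diseased)
        spec = sum(s < t for s in scores_healthy) / len(scores_healthy)
        # x = 1 - specificity, y = sensitivity, as on the ROC plot
        points.append((1 - spec, sens))
    return points
```

Lowering the threshold moves the operating point up and to the right: sensitivity rises at the cost of specificity, which is exactly the trade-off the curve depicts.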
Receiver operating characteristic (ROC) curve
Published 10 Mar 2015
Sensitivity and specificity
Sensitivity and specificity are fundamental characteristics of diagnostic imaging tests. The two characteristics derive from a 2×2 table of basic, mutually exclusive outcomes of a diagnostic test: true positive (TP): an imaging test is positive and the patient has the disease/condition; false ...
Sensitivity and specificity of multiple tests
Sensitivity and specificity of multiple tests is a common statistical problem in radiology because frequently two tests (A and B) with different sensitivities and specificities are combined to diagnose a particular disease or condition. These two tests can be interpreted in an "and" or an "or" ...
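The combined performance of the "and" rule (positive only if both tests are positive) and the "or" rule (positive if either test is positive) can be sketched as below, under the strong simplifying assumption that the two tests are conditionally independent, which real tests often violate:

```python
# Sketch: combining two tests A and B, assuming conditional
# independence of the tests given disease status (a strong assumption).

def combine_and(sens_a, spec_a, sens_b, spec_b):
    """'AND' rule: call positive only if both tests are positive."""
    sens = sens_a * sens_b                   # both must detect the disease
    spec = 1 - (1 - spec_a) * (1 - spec_b)   # false alarm needs both to err
    return sens, spec

def combine_or(sens_a, spec_a, sens_b, spec_b):
    """'OR' rule: call positive if either test is positive."""
    sens = 1 - (1 - sens_a) * (1 - sens_b)   # missed only if both miss
    spec = spec_a * spec_b                   # both must clear the healthy
    return sens, spec
```

The "and" rule trades sensitivity for specificity; the "or" rule does the reverse.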
Lead time bias
Lead time bias is a bias that may be encountered in radiology literature on imaging detection of disease. Lead time is the time between detection of a disease with imaging and its usual clinical presentation. An imaging technique or modality may claim to lengthen survival time by earlier detect...
Length time bias
Length time bias can be encountered in the radiology literature, particularly with regard to imaging screening. With length time bias, screening for a disease (D) appears more effective for a more indolent presentation of the disease (D1) than for a quickly-symptomatic and quickly-fatal presentation ...
The normal distribution (or bell curve or Gaussian distribution) is a type of data spread that is encountered frequently in radiology and in other sciences. Data that are normally distributed can be evaluated using parametric statistics. When data are not normally distributed (e.g. skewed, or m...
Type I error
Type I errors (alpha errors, α) occur when we accept that there is a difference between two experimental groups when, in fact, no difference exists. The threshold for accepting a type I error is the significance level, i.e. the p-value cut-off. The traditionally accepted p-value of 0.05 indicates that the researchers are willing t...
Type II error
Type II errors (beta errors, β) occur when we accept that there is no difference between two experimental groups, when in fact, there is a difference. The p-value does not give a direct indication of the likelihood of a type II error; if the p-value is >0.05, this does not necessarily mean that...
Bias refers to a methodological flaw in a research study that prevents generalization of results from a sample population to the entire population. It is a systematic error. Errors in radiology research studies fall into one of two categories: random error and systematic error (bias). Random error cannot b...
The power of a clinical trial is the probability that the trial will find a difference between groups if there is one. Power can be defined as the probability of a true positive trial result and is often written as: power = (1 - β) where β is the probability of missing a difference between gro...
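A rough normal-approximation power calculation for comparing two group means can be sketched with Python's standard library. This is a simplified sketch (z rather than t statistic, and only the dominant tail is counted), not a replacement for a proper power analysis; the effect size, spread and group size in the example are invented:

```python
from statistics import NormalDist

def power_two_sample(delta, sigma, n, alpha=0.05):
    """Approximate power of a two-sample comparison of means.

    delta: true difference between group means
    sigma: common standard deviation in each group
    n:     subjects per group
    Power = 1 - beta, i.e. the probability of detecting the difference.
    """
    se = sigma * (2 / n) ** 0.5                   # SE of the mean difference
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    # probability the test statistic clears the critical value
    return 1 - NormalDist().cdf(z_crit - delta / se)
```

As expected, power grows with sample size: detecting a half-SD difference is much more likely with 100 subjects per group than with 20.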
Statistics for radiology
Published 24 Mar 2015
Z-scores are a way to express individual data points in terms of standard deviations: Z = (X − X̄) / σ, where X is an individual data point, X̄ is the arithmetic mean and σ is the standard deviation. The purpose of the Z-score is to allow comparison between values in different normal distributions. Two...
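The formula above is a one-liner; the example values below (an exam score and a lab value) are made up purely to show how Z-scores make two different normal distributions comparable:

```python
from statistics import NormalDist

def z_score(x, mean, sd):
    # number of standard deviations x lies above (or below) the mean
    return (x - mean) / sd

# Hypothetical values from two different normal distributions:
z_exam = z_score(82, mean=70, sd=8)      # 1.5 SD above its mean
z_lab = z_score(4.5, mean=3.0, sd=1.0)   # also 1.5 SD above its mean

# A z-score maps to a percentile under the standard normal:
pct = NormalDist().cdf(1.5)              # about the 93rd percentile
```

Both values sit 1.5 standard deviations above their respective means, so they occupy the same relative position despite coming from unrelated scales.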
Standard error of the mean
The standard error of the mean, SE(M), is a fundamental concept in hypothesis testing. When you pick a random sample out of a population (say a 100 data point sample out of a 10,000 data point population), what is the mean value of that sample? It will tend toward the population me...
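This can be sketched by simulation: the SEM estimated from a single sample (sample SD divided by √n) predicts how much the means of repeated samples scatter around the population mean. The population parameters and sizes below are illustrative, not from the text:

```python
import random
from statistics import mean, stdev

rng = random.Random(42)  # fixed seed so the sketch is reproducible

# Hypothetical population: 10,000 points, roughly mean 100, SD 15
population = [rng.gauss(100, 15) for _ in range(10_000)]

# SEM estimated from one sample of 100: s / sqrt(n), about 15/10 = 1.5
sample = rng.sample(population, 100)
sem = stdev(sample) / len(sample) ** 0.5

# Empirically, means of repeated samples spread with an SD close to SEM
sample_means = [mean(rng.sample(population, 100)) for _ in range(500)]
spread = stdev(sample_means)
```

The observed spread of the 500 sample means lands close to the SEM computed from the single sample, which is the point of the concept: one sample is enough to estimate how variable its mean is.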
Confidence intervals are often used in radiology literature to express the variability of an experimental result. They are usually reported as the upper and lower bounds of variability (upper, lower) around the mean value, with x% certainty 1. At 95%, this means that if the study were redone many tim...
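A common normal-approximation construction is mean ± z × SEM, with z ≈ 1.96 for a 95% interval. A minimal sketch (for small samples a t critical value would be more appropriate; the data are made up):

```python
from statistics import NormalDist, mean, stdev

def confidence_interval(data, level=0.95):
    """Normal-approximation confidence interval for the mean.

    Returns (lower, upper) = mean -/+ z * SEM, where z is the standard
    normal critical value (about 1.96 for a 95% interval).
    """
    m = mean(data)
    sem = stdev(data) / len(data) ** 0.5
    z = NormalDist().inv_cdf((1 + level) / 2)
    return m - z * sem, m + z * sem
```

Raising the confidence level widens the interval: a 99% interval for the same data is wider than the 95% interval, reflecting the greater certainty demanded.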