Last month we explained the logic of studies of diagnostic test accuracy. Such studies compare the findings of the index test with those of a reference test. The degree of concordance between the findings of the two tests provides a measure of the accuracy of the index test.
How can the accuracy of a diagnostic test be quantified? Somehow we have to come up with numbers that say something about the concordance between the findings of the index test and the reference test. This task is easiest when both the index test and the reference test can yield only one of two findings: a positive finding or a negative finding. Here we will restrict consideration to such tests, as they are the most common kind of diagnostic test. We say a test is positive when its findings suggest the person who was tested has the condition of interest, and negative when its findings suggest the person who was tested does not have the condition of interest.
The most frequently reported measures of diagnostic test accuracy are sensitivity and specificity. Sensitivity is the probability that a person who has the condition of interest will test positive. We can estimate sensitivity by first identifying all of the people in the study who tested positive with the reference test (i.e., the people who really do have the condition of interest) and then calculating the proportion of those people who tested positive with the index test. Specificity is the probability that a person who does not have the condition of interest will test negative. We can estimate specificity by identifying all of the people in the study who tested negative with the reference test (i.e., the people who really do not have the condition of interest) and then calculating the proportion of those people who tested negative with the index test.
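The two calculations above can be sketched as a few lines of code. This is an illustrative sketch only: the counts are hypothetical, not drawn from any real study, and the function names are our own.

```python
# Counts from a hypothetical 2x2 cross-classification of index test
# findings against reference test findings:
#   tp = true positives  (index positive, reference positive)
#   fn = false negatives (index negative, reference positive)
#   tn = true negatives  (index negative, reference negative)
#   fp = false positives (index positive, reference negative)

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of people with the condition who test positive."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of people without the condition who test negative."""
    return tn / (tn + fp)

# Hypothetical study: 100 people have the condition (80 of whom test
# positive with the index test) and 200 do not (180 of whom test negative).
print(sensitivity(tp=80, fn=20))   # 0.8
print(specificity(tn=180, fp=20))  # 0.9
```

In this hypothetical study the index test correctly identifies 80% of the people who have the condition (sensitivity 0.80) and 90% of the people who do not (specificity 0.90).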
To find out more about how to use the findings of a diagnostic test accuracy paper, visit the DiTA tutorials.