ROC curve

Steve Simon



*Dear Professor Mean, How do I interpret an ROC curve?*

To understand an ROC curve, you first need to understand two percentages: the false negative rate and the false positive rate.

Short explanation

An ROC curve is a graphical representation of the trade-off between the false negative and false positive rates for every possible cutoff. Equivalently, it is a plot of sensitivity (1 minus the false negative rate) against 1 minus specificity (the false positive rate).

By tradition, the false positive rate (1-Sp) is plotted on the X axis and the true positive rate (Sn) on the Y axis.
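As a concrete sketch, here is how the points of an ROC curve can be computed from raw test scores. The scores below are invented purely for illustration, and a result is counted as positive when the score is at or above the cutoff.

```python
healthy = [1, 1, 2, 2, 3, 3, 4]   # hypothetical scores, disease absent
diseased = [2, 3, 4, 4, 5, 5, 5]  # hypothetical scores, disease present

def roc_points(healthy, diseased):
    """One (1 - Sp, Sn) point per cutoff; a result is positive when
    the score is at or above the cutoff."""
    points = []
    for c in sorted(set(healthy + diseased)):
        fpr = sum(x >= c for x in healthy) / len(healthy)    # 1 - Sp
        tpr = sum(y >= c for y in diseased) / len(diseased)  # Sn
        points.append((fpr, tpr))
    points.append((0.0, 0.0))  # cutoff above every score: all negative
    return points

for fpr, tpr in roc_points(healthy, diseased):
    print(f"1-Sp = {fpr:.2f}, Sn = {tpr:.2f}")
```

Note that the lowest cutoff classifies everyone as positive, giving the point (1.0, 1.0), and a cutoff above the largest score classifies everyone as negative, giving (0.0, 0.0); plotting the points in between traces the curve.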

So how can you tell a good ROC curve from a bad one?

All ROC curves are good, but some are better than others.

We are usually happy when the ROC curve climbs rapidly toward the upper left-hand corner of the graph. This means that the true positive rate (1 minus the false negative rate) is high and the false positive rate is low. We are less happy when the ROC curve follows a diagonal path from the lower left-hand corner to the upper right-hand corner. This means that every improvement in the false positive rate is matched by a corresponding worsening of the false negative rate.

You can quantify how quickly the ROC curve rises to the upper left-hand corner by measuring the area under the curve. The larger the area, the better the test: an area of 1.0 represents a perfect test, while an area of 0.5 represents a test that is no better than chance.

Area under the curve does have one direct interpretation. If you take a random healthy patient and get a score of X and a random diseased patient and get a score of Y, then the area under the curve equals the probability that Y is larger than X (counting ties as half).
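That probability interpretation can be checked directly by comparing every healthy/diseased pair of patients. The sketch below uses invented scores and counts ties as half.

```python
def auc_by_pairs(healthy, diseased):
    """AUC as the probability that a random diseased patient's score
    beats a random healthy patient's score (ties count as half)."""
    wins = 0.0
    for x in healthy:
        for y in diseased:
            if y > x:
                wins += 1.0
            elif y == x:
                wins += 0.5
    return wins / (len(healthy) * len(diseased))

# Hypothetical scores: 42 of the 49 pairs favor the diseased patient.
print(auc_by_pairs([1, 1, 2, 2, 3, 3, 4], [2, 3, 4, 4, 5, 5, 5]))  # 42/49, about 0.857
```

For continuous scores with no ties, this pairwise probability equals the trapezoidal area under the empirical ROC curve.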

Show me an example of an ROC curve.

Consider a diagnostic test that can only take on five values.

It’s a bit easier if we convert this table to cumulative percentages.

We add a row (*) to represent the cumulative percentage of 0%, which will end up corresponding to a diagnostic test where all the results are considered positive regardless of the test value. The last row represents the other extreme, a diagnostic test where all the results are considered negative regardless of the test value.

The complementary percentages shown above represent the true positive rate (or Sn) and the false positive rate (or 1-Sp).

This table includes two extreme cases for the sake of completeness. If you always classify a test as positive, the sensitivity is 100% but the specificity is 0%; if you always classify it as negative, the specificity is 100% but the sensitivity is 0%.
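Since the original table is not reproduced here, the sketch below uses invented counts for a five-valued test to show how the cumulative percentages, including the two extreme rows, can be computed.

```python
# Hypothetical frequency tables: test value -> number of patients.
healthy_counts = {1: 30, 2: 25, 3: 20, 4: 15, 5: 10}
diseased_counts = {1: 5, 2: 10, 3: 15, 4: 30, 5: 40}

def cumulative_rows(healthy_counts, diseased_counts):
    """One row per cutoff: the percentage of each group scoring at or
    above that value, i.e. (cutoff, 1 - Sp as %, Sn as %)."""
    n_h = sum(healthy_counts.values())
    n_d = sum(diseased_counts.values())
    rows = []
    for cutoff in sorted(healthy_counts):  # same five values in both tables
        fp = sum(c for v, c in healthy_counts.items() if v >= cutoff)
        tp = sum(c for v, c in diseased_counts.items() if v >= cutoff)
        rows.append((cutoff, 100 * fp / n_h, 100 * tp / n_d))
    # Extreme case: a cutoff above every value calls all results negative.
    rows.append(("none positive", 0.0, 0.0))
    return rows

for label, fpr, tpr in cumulative_rows(healthy_counts, diseased_counts):
    print(f"{label}: 1-Sp = {fpr:.0f}%, Sn = {tpr:.0f}%")
```

The first row (cutoff at the lowest value) is the other extreme, where every result is called positive and both percentages are 100%.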

Here is what the graph of the ROC curve would look like.

Here is information about the Area Under the Curve. This area (0.91) is quite good; it is close to the ideal value of 1.0 and much larger than the worst-case value of 0.5.

Here are the actual values used to draw the ROC curve (I selected the “Coordinate points of the ROC Curve” button in SPSS).

Here is the same ROC curve with annotations added.

Shown below is an artificial ROC curve with an area equal to 0.5. Notice that each gain in sensitivity is balanced by the exact same loss in specificity and vice versa. Also notice that this curve includes the point corresponding to 50% for both sensitivity and specificity. You could achieve this level of diagnostic ability by flipping a coin. Clearly, a test like this provides no diagnostic information.
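The coin-flip claim can be checked by simulation. In the sketch below, both groups' scores are drawn from the same distribution, so the "test" carries no information about disease status; the sample sizes and seed are arbitrary choices for the illustration.

```python
import random

random.seed(1)
# Scores unrelated to disease status: same distribution for both groups.
healthy = [random.random() for _ in range(500)]
diseased = [random.random() for _ in range(500)]

# Pairwise AUC: fraction of healthy/diseased pairs the diseased patient wins.
wins = sum((y > x) + 0.5 * (y == x) for x in healthy for y in diseased)
auc = wins / (len(healthy) * len(diseased))
print(round(auc, 3))  # close to 0.5, matching the diagonal ROC curve
```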

What’s a good value for the area under the curve?

Deciding what counts as a good value for the area under the curve is tricky, and it depends a lot on the context of your individual problem. One way to approach the problem is to examine what some of the likelihood ratios would be for various areas. A good test should have an LR+ of at least 2.0 and an LR- of 0.5 or less; this would correspond to an area of roughly 0.75. A better test would have likelihood ratios of 5 and 0.2, respectively, which would correspond to a still larger area under the curve.
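The likelihood ratios in this guideline follow directly from sensitivity and specificity: LR+ = Sn / (1 - Sp) and LR- = (1 - Sn) / Sp. The sensitivity/specificity pair below is invented, chosen to land right at the "good test" thresholds.

```python
def likelihood_ratios(sn, sp):
    """Positive and negative likelihood ratios from Sn and Sp."""
    return sn / (1 - sp), (1 - sn) / sp

# Hypothetical point on an ROC curve: Sn = 0.8, Sp = 0.6.
lr_pos, lr_neg = likelihood_ratios(0.8, 0.6)
print(lr_pos, lr_neg)  # LR+ = 2.0 and LR- about 0.33, meeting the guideline
```

Geometrically, LR+ at a point on the ROC curve is the slope of the line from the origin to that point, which is why larger areas tend to go with more extreme likelihood ratios.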

These are very rough guidelines; further work on refining these would be appreciated.


The ROC curve plots the false positive rate on the X axis and 1 minus the false negative rate (the true positive rate) on the Y axis. It shows the trade-off between the two rates. If the area under the ROC curve is close to 1, the test discriminates well between diseased and healthy patients; if it is close to 0.5, the test is no better than flipping a coin.

Further reading

  1. Quantifying the information value of clinical assessments with signal detection theory. Richard M. McFall.
  2. The magnificent ROC (Receiver Operating Characteristic curve). Jo van Schalkwyk. Accessed on 2003-09-08.
  3. Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine. MH Zweig.
  4. Accuracy of clinical diagnosis of cirrhosis among alcohol-abusing men. K. J. Hamberg.
  5. Comparing diagnostic tests: a simple graphic using likelihood ratios. B. J. Biggerstaff. Statistics in Medicine 2000: 19(5); 649-63.
  6. Slopes of a receiver operating characteristic curve and likelihood ratios for a diagnostic test. BCK Choi. AJE 1998: 148(11); 1127-32.
  7. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. E. R. De Long.
  8. Analysis of correlated ROC areas in diagnostic testing. H. H. Song. Biometrics 1997: 53(1); 370-82.
  9. Receiver Operating Characteristic (ROC) Literature Research. Kelly H. Zou.

Published examples of ROC curves:

  10. The influence of prostate volume on the ratio of free to total prostate specific antigen in serum of patients with prostate carcinoma and benign prostate hyperplasia. C. Stephan.
  11. Diagnostic Accuracy of Four Assays of Prostatic Acid Phosphatase: Comparison Using Receiver Operating Characteristic Curve Analysis. JL Carson.
  12. The ratio of free to total serum prostate specific antigen and its use in differential diagnosis of prostate carcinoma in Japan. S. Egawa.
  13. Using the Hospital Anxiety and Depression Scale to screen for psychiatric disorders in people presenting with deliberate self-harm. D. Hamer.
  14. Screening for anxiety.
  15. Diagnostic markers of infection: comparison of procalcitonin with C reactive protein and leucocyte count. M. Hatherill.
  16. Using fasting plasma glucose concentrations to screen for gestational diabetes mellitus: prospective population based study. D Perucchini.
  17. Sensitivity and specificity of observer and self-report questionnaires in major and minor depression following myocardial infarction. J. J. Strik.
  18. Diagnosis of Creutzfeldt-Jakob disease by measurement of S100 protein in serum: prospective case-control study. M. Otto.

You can find an earlier version of this page on my original website.