An ROC curve is a graph that plots the true positive rate against the false positive rate for a series of cutoff values; in other words, it graphically displays the trade-off between sensitivity and specificity at each cutoff value. An ideal cutoff would give the test the highest possible sensitivity with the lowest possible false positive rate (i.e., the highest specificity). This is the point lying geometrically closest to the top-left corner of the graph (where an ideal cutoff with 100% sensitivity and 100% specificity would be plotted). Picking the ideal cutoff score depends, to some extent, on the clinical context, that is, the purpose for which the tool will be used. The area under an ROC curve can be used as an overall estimate of the test's discriminating ability and is sometimes expressed as accuracy. The area under the ROC curve is equal to the probability that the test correctly classifies patients as true positives or true negatives. Greater areas under the curve indicate higher accuracy. To further clarify, a discriminating test might have an area under the curve of 0.7, while a nondiscriminating test has an area under the curve of 0.5.

Rosenberg, L., Joseph, L., & Barkun, A. (2000). Surgical Arithmetic: Epidemiological, Statistical and Outcome-Based Approach to Surgical Practice. Georgetown, Texas: Landes Bioscience.
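To make these ideas concrete, the short Python sketch below uses made-up patient scores (the data and variable names are hypothetical) together with scikit-learn's roc_curve and roc_auc_score to trace the curve, estimate the area under it, and select the cutoff lying closest to the top-left corner. The closest-to-corner rule is just one way of operationalizing the "ideal cutoff" described above, and in practice the choice would still be weighed against the clinical context.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical example data: 1 = disease present, 0 = disease absent,
# with a continuous test score for each patient.
y_true = np.array([0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1])
y_score = np.array([0.10, 0.20, 0.30, 0.35, 0.40, 0.45,
                    0.50, 0.60, 0.65, 0.70, 0.80, 0.90])

# ROC curve: true positive rate (sensitivity) versus false positive rate
# (1 - specificity) for every candidate cutoff.
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Area under the curve as an overall measure of discriminating ability.
auc = roc_auc_score(y_true, y_score)

# Cutoff geometrically closest to the top-left corner (FPR = 0, TPR = 1),
# i.e., the point minimizing sqrt(FPR^2 + (1 - TPR)^2).
distances = np.sqrt(fpr**2 + (1 - tpr)**2)
best = np.argmin(distances)

print(f"AUC = {auc:.2f}")
print(f"Closest-to-corner cutoff = {thresholds[best]:.2f} "
      f"(sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f})")
```

With these illustrative data the sketch reports an AUC of about 0.92 and selects a cutoff of 0.50, at which sensitivity and specificity are both roughly 0.83, i.e., the point nearest the top-left corner of the plotted curve.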