Fast Computation of AUC-ROC score

Area under the ROC curve (AUC-ROC) is one of the most common evaluation metrics for binary classification problems. We show here a simple and very efficient way to compute it in Python. Before showing the code, let's briefly describe what an evaluation metric is, and what AUC-ROC is in particular.

An evaluation metric is a way to assess how good a machine learning model is. It computes one or more numbers that summarize how the model's predictions compare to reality. In order to use an evaluation metric, one has to go through these steps:

  1. Start with a set of labelled examples: each example is described by a set of features, and a target value. The goal is to learn how to compute from the features a value as close as possible to the target.

  2. Split the available examples into a training set and a test set.

  3. Build a model using the training set.

  4. Use the model to predict the values for the test set.

  5. Use an evaluation metric to summarize the difference between the predictions on the test set and the target for the test set.
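These steps can be sketched end to end with a toy one-feature dataset and a simple threshold "model" standing in for a real learning algorithm (all data and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Labelled examples: one feature, target is 1 when the feature is large
#    (plus some noise so the problem is not trivially separable).
X = rng.normal(size=200)
y = (X + rng.normal(scale=0.5, size=200) > 0).astype(int)

# 2. Split into a training set and a test set.
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# 3. "Build" a toy model: pick the feature threshold that maximises
#    training accuracy (a stand-in for a real learning algorithm).
candidates = np.sort(X_train)
accs = [np.mean((X_train > t).astype(int) == y_train) for t in candidates]
best_t = candidates[int(np.argmax(accs))]

# 4. Predict on the test set.
y_pred = (X_test > best_t).astype(int)

# 5. Summarize with an evaluation metric; here, accuracy.
accuracy = np.mean(y_pred == y_test)
print(accuracy)
```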

When the target only takes two values, we have a binary classification problem at hand. Examples of binary classification problems are very common. For instance, in fraud detection the examples are credit card transactions; the features are time, location, amount, merchant id, etc.; and the target is fraud or not fraud. Spam detection is also a binary classification problem, where examples are emails, features are the email content as a string of words, and the target is spam or not spam. Without loss of generality we can assume that the target values are 0 and 1: 0 means no fraud or no spam, while 1 means fraud or spam.

For binary classification, predictions are also binary. Therefore, a prediction is either equal to the target, or off the mark. A simple way to evaluate model performance is accuracy: how many predictions are right? For instance, if our test set has 100 examples in it, how many times is the prediction correct? Accuracy seems a logical way to evaluate performance: a higher accuracy obviously means a better model. At least this is what people think when they are exposed for the first time to binary classification problems. The issue is that accuracy can be extremely misleading.

Let's see why. Assume I have a binary classification problem, for instance fraud detection, and that I have a model with 99% accuracy. My model predicts the target correctly for 99 of the 100 examples in the test set. It looks like I have a near perfect model, doesn't it?

Well, what if reality is the following?

  • There is about 1% fraud in general, and in my test set there is exactly one fraudulent transaction.

  • My model predicts that no transaction is a fraud, ever.

If you look at it, my model is correct 99% of the time. Yet it is absolutely useless.
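This accuracy paradox is easy to reproduce; a minimal sketch with a hypothetical 100-transaction test set:

```python
import numpy as np

# Hypothetical test set: 100 transactions, exactly one of which is a fraud.
y_true = np.zeros(100, dtype=int)
y_true[42] = 1  # the single fraudulent transaction (position is arbitrary)

# A useless "model" that predicts "not fraud" for every transaction.
y_pred = np.zeros(100, dtype=int)

accuracy = np.mean(y_pred == y_true)
print(accuracy)  # 0.99
```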

In order to cope with this issue, several alternative metrics have been proposed to replace accuracy, like precision, recall, and the F1 score. But these metrics, as well as accuracy, do not directly apply to many interesting and effective algorithms: those that output a probability rather than a binary value. A probability close to 0 means that the algorithm thinks the target is 0, while a probability close to 1 means that the algorithm thinks the target is 1. Algorithms in this class include logistic regression, gradient boosted trees with log loss, and neural networks with cross entropy loss. One way to use these algorithms is to threshold their output: a probability under 0.5 is transformed into a 0, and a value above 0.5 into a 1. After thresholding, any of the above metrics can be used.

We used 0.5 as the threshold, but we could have used any other value between 0 and 1. A conservative choice would be a threshold close to 0, for instance 0.1. This amounts to classifying as non fraud or non spam only the examples that the algorithm is very confident about. And of course, depending on the threshold you use, the evaluation metric will yield different values.
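For instance, with a handful of hypothetical predicted probabilities, the two thresholds give different binary predictions, and hence different metric values:

```python
import numpy as np

# Hypothetical predicted probabilities from some model.
p = np.array([0.02, 0.10, 0.45, 0.55, 0.90])

pred_05 = (p >= 0.5).astype(int)  # threshold 0.5 -> [0, 0, 0, 1, 1]
pred_01 = (p >= 0.1).astype(int)  # threshold 0.1 -> [0, 1, 1, 1, 1]
```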

It would be nice to be able to evaluate the performance of a model without having to select an arbitrary threshold. This is precisely what AUC-ROC provides. I'll refer to Wikipedia for the classical way of defining that metric; I will use a much simpler way here.

Let's first define some entities.

  • pos is the set of examples with target 1. These are the positive examples.

  • neg is the set of examples with target 0. These are the negative examples.

  • p(i) is the prediction for example i. p(i) is a number between 0 and 1.

  • A pair of examples (i, j) is labelled the right way if i is a positive example, j is a negative example, and the prediction for i is higher than the prediction for j.

  • | s | is the number of elements in set s.

Then AUC-ROC is the count of pairs labelled the right way divided by the number of pairs:

  • AUC-ROC = | {(i, j) : i in pos, j in neg, p(i) > p(j)} | / (| pos | × | neg |)
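This definition translates directly into a quadratic-time baseline; a minimal sketch:

```python
def naive_auc(y_true, y_prob):
    # Direct translation of the definition: count the pairs (i, j) with i
    # positive, j negative and p(i) > p(j), then divide by |pos| * |neg|.
    pos = [p for p, t in zip(y_prob, y_true) if t == 1]
    neg = [p for p, t in zip(y_prob, y_true) if t == 0]
    correct = sum(1 for pi in pos for pj in neg if pi > pj)
    return correct / (len(pos) * len(neg))

print(naive_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```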

A naive way to compute this is to consider each possible pair and count those labelled the right way. A much better way is to sort the predictions first, then visit the examples in increasing order of prediction. Each time we see a positive example, we add to a running count the number of negative examples seen so far. We use the numba compiler to make it run fast:

```python
import numpy as np
from numba import jit

@jit(nopython=True)
def fast_auc(y_true, y_prob):
    # Sort the targets by increasing prediction
    y_true = np.asarray(y_true)
    y_true = y_true[np.argsort(y_prob)]
    nfalse = 0  # number of negative examples seen so far
    auc = 0     # number of pairs labelled the right way
    n = len(y_true)
    for i in range(n):
        y_i = y_true[i]
        nfalse += (1 - y_i)
        # each positive example forms a correctly labelled pair with
        # every negative example seen before it
        auc += y_i * nfalse
    # at this point nfalse == |neg| and n - nfalse == |pos|
    return auc / (nfalse * (n - nfalse))
```

On my MacBook Pro it runs about twice as fast as the corresponding scikit-learn function. A notebook with the code and a benchmark is available on GitHub.
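As a sanity check (and for readers without numba), the same sort-and-count idea can be expressed in vectorized NumPy; this is a sketch with illustrative names, not part of the benchmark:

```python
import numpy as np

def auc_by_sorting(y_true, y_prob):
    # Sort the targets by increasing prediction, as fast_auc does.
    y_sorted = np.asarray(y_true)[np.argsort(y_prob)]
    # nfalse[i] = number of negative examples among the first i + 1 predictions
    nfalse = np.cumsum(y_sorted == 0)
    # each positive example pairs correctly with every negative seen before it
    correct_pairs = np.sum(nfalse[y_sorted == 1])
    n_pos = np.sum(y_sorted == 1)
    return float(correct_pairs) / (nfalse[-1] * n_pos)

print(auc_by_sorting([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```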
