Hi all,
Continuing from my last post about performance analysis of machine learning classification techniques, in this article I will talk about a specific analysis called Discriminatory Power Analysis using Receiver Operating Characteristic curves (ROC curves).
First, let's review and introduce some concepts related to classification problems. When we deal with systems that involve detection, diagnosis or prediction of results, it is really important to check the obtained results in order to validate whether the discriminative power of a given analysis is good or not. However, relying only on the count of hits and misses on a test group does not really tell how good a system is, since the result depends on the quality and distribution of the test group data.
For instance, let's consider a fraud detection system whose output is 1 or 0, indicating whether a transaction has been classified as fraudulent or not. Now suppose we applied the system to a test group of 100 transactions for which we already know which ones were frauds, and the system correctly identified 90% of them. That is a good performance, isn't it?
Actually, it is not. One piece of information is missing: how the test group was distributed. Let's say that 90% of the transactions actually were fraudulent. The system could then have output '1' for every input and still have achieved a 90% accuracy rate, even though it misclassified the 10% of transactions that were legitimate. With only the accuracy number, we cannot tell whether the system is really good or whether it simply acted as a naive classifier flagging everything as fraudulent, without any previous knowledge or calculation.
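To make this concrete, here is a minimal sketch in Python (with made-up labels, not taken from a real system) showing how a classifier that flags every transaction as fraudulent reaches 90% accuracy on such an unbalanced test group:

actual    = [1] * 90 + [0] * 10   # 90 fraudulent and 10 legitimate transactions
predicted = [1] * 100             # a 'classifier' that outputs 1 for every input

hits = sum(1 for a, p in zip(actual, predicted) if a == p)
print(hits / float(len(actual)))  # 0.9 - high accuracy, yet the system learned nothing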
In this case, we have to consider other metrics in order to account for the imbalance in the test group. As I said in a previous post on this blog about the confusion matrix, it acts as the basis for the measures used in the discriminatory power analysis discussed here.
1. Accuracy
It measures the proportion of correct predictions over both positive and negative inputs. It is highly dependent on the data set distribution, which can easily lead to wrong conclusions about the system performance.
ACC = TOTAL HITS / NUMBER OF ENTRIES IN THE SET
= (TP + TN) / (P + N)
2. Sensitivity
It measures the proportion of true positives, that is, the ability of the system to correctly identify the positive cases presented to it.
SENS = POSITIVE HITS / TOTAL POSITIVES
= TP / (TP + FN)
3. Specificity
It measures the proportion of true negatives, that is, the ability of the system to correctly identify the negative cases, those opposite to the condition of interest.
SPEC = NEGATIVE HITS / TOTAL NEGATIVES
= TN / (TN + FP)
4. Efficiency
It is the mean of sensitivity and specificity. The perfect decision would have 100% specificity and 100% sensitivity; however, this situation is rarely achieved, so a balance between the two metrics must be found. This metric is a good indicator of the system's overall tendency to produce false positives and false negatives: generally, when the system is too responsive to positives, it tends to produce many false positives, and vice versa.
EFF = (SENS + SPEC) / 2
5. Positive Predictive Value
This measure indicates how reliable the system is when it makes a positive prediction. However, it is not recommended to use it alone, since it can easily lead to wrong conclusions about system performance. It is the proportion of true positives among all positive predictions.
PPV = POSITIVE HITS / TOTAL POSITIVE PREDICTIONS
= TP / (TP + FP)
6. Negative Predictive Value
This measure indicates how reliable the system is when it makes a negative prediction. However, it is not recommended to use it alone, since it can easily lead to wrong conclusions about system performance. It is the proportion of true negatives among all negative predictions.
NPV = NEGATIVE HITS / TOTAL NEGATIVE PREDICTIONS
= TN / (TN + FN)
7. Phi (φ) Coefficient
It is a less commonly used measure that summarizes the quality of the confusion matrix in a single value which can be compared across classifiers. For instance, if you have two binary classification problems with the same classes but different sizes, it can be used to compare both of them. It returns a value between -1 and +1, where +1 represents a perfect prediction, 0 a random prediction, and -1 a completely inverse prediction.
φ = (TP*TN - FP*FN) / sqrt( (TP+FP) * (TP+FN) * (TN+FP) * (TN+FN) )
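Putting the seven measures together, here is a small self-contained sketch (my own illustration, not the PyROC code or the gist mentioned below) that evaluates them from the four confusion matrix counts:

from math import sqrt

def discriminatory_metrics(tp, tn, fp, fn):
    # Compute the measures defined above from the confusion matrix counts.
    sens = tp / float(tp + fn)                    # sensitivity
    spec = tn / float(tn + fp)                    # specificity
    return {
        'ACC':  (tp + tn) / float(tp + tn + fp + fn),
        'SENS': sens,
        'SPEC': spec,
        'EFF':  (sens + spec) / 2.0,
        'PPV':  tp / float(tp + fp),
        'NPV':  tn / float(tn + fn),
        'PHI':  (tp * tn - fp * fn) /
                sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }

print(discriminatory_metrics(tp=82, tn=82, fp=18, fn=18))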
Source Code
Following the first post of this series, I've updated the confusion matrix script by adding the new metrics presented above.
View the script at Gist.Github
The Receiver Operating Characteristic (ROC) Curve
The ROC curve was developed during World War II and was widely used by engineers to detect enemy objects on battlefields. It became famous and widely used in other areas such as medicine and radiology. More recently, it has been introduced to the areas of machine learning and data mining.
The main idea behind ROC curves is to analyze the output of classification systems, which is generally continuous. Because of that, it is necessary to define a cut-off value (a discriminatory threshold) in order to classify and count the number of positive and negative predictions (such as fraudulent or legitimate transactions in the bank example above). Since this threshold can be arbitrarily determined, the best way to compare the performance of different classifiers is to select several cut-off values over the output data and study their effect.
Considering many thresholds, it is possible to calculate a set of pairs (sensitivity, 1 - specificity) which can be plotted as a curve. This curve is the ROC curve of the system, where the y-axis (ordinate) represents the sensitivity and the x-axis (abscissa) represents the complement of specificity (1 - specificity).
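To make the construction explicit, here is a small sketch (using made-up (label, score) pairs) that computes one such point per cutoff value:

data = [(1, 0.9), (1, 0.8), (0, 0.7), (1, 0.6), (0, 0.4), (1, 0.35), (0, 0.3), (0, 0.1)]

def roc_point(data, threshold):
    # Count the confusion matrix entries for a given cutoff value.
    tp = sum(1 for label, score in data if label == 1 and score >= threshold)
    fn = sum(1 for label, score in data if label == 1 and score < threshold)
    tn = sum(1 for label, score in data if label == 0 and score < threshold)
    fp = sum(1 for label, score in data if label == 0 and score >= threshold)
    sensitivity = tp / float(tp + fn)
    specificity = tn / float(tn + fp)
    return (1.0 - specificity, sensitivity)       # one (x, y) point of the ROC curve

for cutoff in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(roc_point(data, cutoff))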
Examples of ROC curves are presented below.
Examples of ROC curves (from CezarSouza's blog)
As you can see in the figure above, the higher the area under the ROC curve, the better the system. This area is a standard measure for comparing classifiers and is called the Area Under the ROC Curve (AUC). It is evaluated by numerical integration, for example using the trapezoidal rule.
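As a reference for how that integration works, here is a minimal sketch of the trapezoidal rule applied to a list of ROC points (an illustration, not PyROC's own auc() implementation):

def trapezoidal_auc(points):
    # points are (1 - specificity, sensitivity) pairs; sort them by the x coordinate.
    points = sorted(points)
    auc = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        auc += (x2 - x1) * (y1 + y2) / 2.0        # area of one trapezoid
    return auc

print(trapezoidal_auc([(0.0, 0.0), (0.0, 1.0), (1.0, 1.0)]))   # perfect classifier: 1.0
print(trapezoidal_auc([(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]))   # random baseline: 0.5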
Now let's move on to developing a simple ROC curve in Python. During a data mining project I worked on last year, I decided to implement a ROC curve in order to analyze the performance of some classifiers. This work resulted in an open-source library called PyROC, which you can use inside your own applications for the creation, visualization and analysis of ROC curves.
Source Code
The project is hosted in my personal repository at GitHub.
Feel free to download the source code and use it in your applications.
Using the code
To start using the code in your projects, just create a new ROCData object, passing as argument a list of tuples containing the actual data, as measured by the experiment, and the predicted data, as given by the prediction classifier. The actual data must be a dichotomous variable, with only two possible valid values; generally the values 0 and 1 are assumed to represent the two states. The test data given by the classifier must be continuous and lie within the range defined by the highest and lowest values of the actual data. For instance, if the actual values are 0 and 1, the test data values must be inside the [0, 1] range.
from pyroc import *
random_sample = random_mixture_model()  # Generate a custom set randomly
print random_sample
[(1, 0.53543926503331496), (1, 0.50937533997469853), (1, 0.58701681878005862), (1, 0.57043399840000497), (1, 0.56229469766270523), (1, 0.6323079028948545), (1, 0.72283523937059946), (1, 0.55079104791257383), (1, 0.59841921172330748), (1, 0.63361144887035825)]
To compute the area under the curve (AUC) and plot the ROC curve, just call the ROCData object's methods auc() and plot(). You can also pass the desired number of points to use as cutoff values.
# Example instance labels (first index) with the decision function score (second index)
# -- positive class should be +1 and negative 0.
roc = ROCData(random_sample)  # Create the ROC object
roc.auc()  # Get the area under the curve
0.93470000000000053
roc.plot(title='ROC Curve')  # Create a plot of the ROC curve
The metrics evaluated from the confusion matrix, such as accuracy, specificity and sensitivity, are also available.
# threshold passed as parameter - the cutoff value (0.5)
roc.confusion_matrix(0.5)
{'FP': 18, 'TN': 82, 'FN': 18, 'TP': 82}

roc.confusion_matrix(0.5, True)
                     Actual class
                     +(1)    -(0)
Predicted   +(1)      82      18
class       -(0)      18      82
{'FP': 18, 'TN': 82, 'FN': 18, 'TP': 82}

r1.evaluateMetrics(r1.confusion_matrix(0.5))
{'ACC': 0.81000000000000005, 'PHI': 0.62012403721240439, 'PPV': 0.81632653061224492,
 'SENS': 0.80000000000000004, 'NPV': 0.80392156862745101, 'SPEC': 0.81999999999999995,
 'EFF': 0.81000000000000005}
Finally, if you want to plot two curves in the same chart, you can do so by passing a list of ROCData objects. This is useful when you want to compare different classifiers.
x = random_mixture_model()   # generate a first random data set
r1 = ROCData(x)              # create the ROC object for the first classifier
y = random_mixture_model()   # generate a second random data set
r2 = ROCData(y)              # create the ROC object for the second classifier
lista = [r1, r2]
plot_multiple_roc(lista, 'Multiple ROC Curves', include_baseline=True)
There is also another way to use the script: stand-alone. Just run the command below at the terminal to see the available options.
Demonstration
Run at your terminal: python pyroc.py
Let's see some ROC curves plotted in action.
With this, I've finished the second part of the performance analysis of machine learning classifiers. This was a simple introduction, so if you want a better understanding of how ROC curves work and what they mean, please take a look at this website about ROC curves and their applications. It includes many applets for you to experiment with the curves. I hope you have enjoyed this post. The next one in this series will be about the F1-score, also known as F-measure.
References
http://crsouza.blogspot.com/2009/12/performing-discriminant-power-analysis.html
http://crsouza.blogspot.com/2010/01/performing-discriminant-power-analysis.html