How to evaluate a classifier?
A classifier can be evaluated by building a confusion matrix, which shows the total number of correct and wrong predictions. The confusion matrix for the class labels positive (+VE) and negative (-VE) is shown below:
| Predicted Class (Model) | Actual +VE | Actual -VE | |
|---|---|---|---|
| +VE | A (true +VE) | B (false +VE) | +VE predictions = A / (A + B) |
| -VE | C (false -VE) | D (true -VE) | -VE predictions = D / (C + D) |
| | Sensitivity = A / (A + C) | Specificity = D / (B + D) | Accuracy = (A + D) / (A + B + C + D) |
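The four cells A, B, C, D can be tallied directly from a list of actual and predicted labels. A minimal sketch (the `confusion_counts` helper, the "+VE"/"-VE" label strings, and the sample data are assumptions for illustration):

```python
def confusion_counts(actual, predicted, positive="+VE"):
    """Return (A, B, C, D): true +VE, false +VE, false -VE, true -VE."""
    A = B = C = D = 0
    for a, p in zip(actual, predicted):
        if p == positive and a == positive:
            A += 1  # true +VE: predicted +VE, actually +VE
        elif p == positive:
            B += 1  # false +VE: predicted +VE, actually -VE
        elif a == positive:
            C += 1  # false -VE: predicted -VE, actually +VE
        else:
            D += 1  # true -VE: predicted -VE, actually -VE
    return A, B, C, D

actual    = ["+VE", "+VE", "-VE", "-VE", "+VE", "-VE"]
predicted = ["+VE", "-VE", "-VE", "+VE", "+VE", "-VE"]
print(confusion_counts(actual, predicted))  # → (2, 1, 1, 2)
```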
Accuracy is the proportion of all predictions that are correct.
Accuracy = (A + D) / (A + B + C + D)
Error Rate = 1 – Accuracy
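For example, given the four cell counts (the values below are assumed sample counts), accuracy and error rate follow directly:

```python
# Assumed sample counts: A = true +VE, B = false +VE, C = false -VE, D = true -VE
A, B, C, D = 50, 10, 5, 35

accuracy = (A + D) / (A + B + C + D)  # (50 + 35) / 100
error_rate = 1 - accuracy

print(accuracy)               # → 0.85
print(round(error_rate, 2))   # → 0.15
```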
+VE predictions is the proportion of tuples predicted +VE that really are +VE.
+VE predictions = A / (A + B)
-VE predictions is the proportion of tuples predicted -VE that really are -VE.
-VE predictions = D / (C + D)
Precision measures exactness: of the tuples the classifier assigned to a class, the proportion that actually belong to it, i.e. tuples that are
- +VE and the classifier predicted them as +VE
- -VE and the classifier predicted them as -VE

For the +VE class, Precision = A / P, where P = A + B is the total number of +VE predictions.
Recall = A / (A + C), where A + C is the number of real +VE tuples.
Sensitivity is the true +VE rate:
the proportion of the actual positive cases that are correctly identified.
Sensitivity (Recall) = A / (A + C)
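Using the same cell names (the counts below are assumed sample values), precision and recall can be computed as:

```python
# Assumed sample counts: A = true +VE, B = false +VE, C = false -VE, D = true -VE
A, B, C, D = 50, 10, 5, 35

precision = A / (A + B)  # of the tuples predicted +VE, the fraction actually +VE
recall    = A / (A + C)  # of the actual +VE tuples, the fraction found (sensitivity)

print(round(precision, 3))  # → 0.833
print(round(recall, 3))     # → 0.909
```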
F-Measure is the harmonic mean of precision and recall.
F-Measure = 2 * Precision * Recall / (Precision + Recall)
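The harmonic mean penalizes an imbalance between the two scores, so the F-Measure is always pulled toward the smaller of precision and recall. A quick check with assumed example scores:

```python
# Assumed example scores for illustration
precision, recall = 0.8, 0.5

f_measure = 2 * precision * recall / (precision + recall)

# The harmonic mean sits below the arithmetic mean (0.65) and nearer the smaller score
print(round(f_measure, 3))  # → 0.615
```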
Specificity is the true -VE rate:
the proportion of the actual -VE cases that are correctly identified.
Specificity = D / (B + D)
Note: The specificity of one class is the same as the sensitivity of the other class.
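This note can be verified numerically: if -VE is treated as the positive class, its sensitivity uses the same cells D (correctly identified -VE) and B (missed -VE) as the specificity of the +VE class. A short check with assumed sample counts:

```python
# Assumed sample counts: A = true +VE, B = false +VE, C = false -VE, D = true -VE
A, B, C, D = 50, 10, 5, 35

specificity_pos = D / (B + D)  # specificity with +VE as the positive class
sensitivity_neg = D / (D + B)  # sensitivity with -VE as the positive class

print(specificity_pos == sensitivity_neg)  # → True
```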