Supervised Learning
==Metrics==
===Precision and Recall===
[https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall Google ML Crash Course Precision and Recall]
Precision is (# correct) / (# predictions) or (true positive) / (true positive + false positive).

Recall is (# correct) / (# ground truth) or (true positive) / (true positive + false negative).

Precision measures how well your model avoids false positives. A precision of 1.0 means the model did not misidentify any negatives as positives, but it may have missed some positives.

Recall measures how well your model identifies all the positive examples. A recall of 1.0 means your model identified every positive, though it may have also flagged some negatives as positives.
Recall is also known as sensitivity.
F1 = 2 * precision * recall / (precision + recall)
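
The formulas above translate directly into code. Below is a minimal sketch (in Python, which this article does not prescribe) computing all three metrics from raw confusion-matrix counts; the function name and the example counts are illustrative only, not from the source.

<syntaxhighlight lang="python">
def precision_recall_f1(tp, fp, fn):
    """Return (precision, recall, f1) given confusion-matrix counts.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    # Precision = TP / (TP + FP); guard against zero predictions.
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    # Recall = TP / (TP + FN); guard against zero ground-truth positives.
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    # F1 is the harmonic mean of precision and recall.
    if precision + recall == 0:
        f1 = 0.0
    else:
        f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical worked example: 8 positive predictions, 6 of them correct,
# with 10 actual positives in the ground truth (so 4 were missed).
p, r, f1 = precision_recall_f1(tp=6, fp=2, fn=4)
print(p, r, f1)  # 0.75 0.6 0.666...
</syntaxhighlight>

In the example, precision is 6/8 = 0.75 and recall is 6/10 = 0.6, showing how the two metrics can disagree for the same model.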
====ROC Curve====
[https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc Google ML Crash Course ROC and AUC]
True Positive Rate (TPR) = (true positive) / (true positive + false negative). TPR is the same quantity as recall.

False Positive Rate (FPR) = (false positive) / (false positive + true negative).

An ROC curve plots TPR vs FPR as the classification threshold is varied from strict to lenient.
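
As a rough illustration, here is one way to trace the ROC points by sweeping the threshold over a set of classifier scores. The helper and the example scores/labels are hypothetical, not from the source; real workflows typically use a library routine such as sklearn.metrics.roc_curve.

<syntaxhighlight lang="python">
def roc_points(scores, labels):
    """Return a list of (fpr, tpr) points, one per candidate threshold."""
    positives = sum(labels)
    negatives = len(labels) - positives
    points = []
    # Use each unique score as a threshold; predict positive when score >= t.
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        tpr = tp / positives   # TPR = TP / (TP + FN)
        fpr = fp / negatives   # FPR = FP / (FP + TN)
        points.append((fpr, tpr))
    return points

# Illustrative scores and ground-truth labels for six examples.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
for fpr, tpr in roc_points(scores, labels):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
</syntaxhighlight>

Lowering the threshold can only add predicted positives, so both TPR and FPR rise monotonically as the sweep proceeds, which is why the ROC curve runs from (0, 0) to (1, 1).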