Supervised Learning
Some notes on supervised learning.
==Metrics==
===Precision and Recall===
[https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall Google ML Crash Course Precision and Recall]
Precision is (# correct positive predictions) / (# positive predictions), or (true positive) / (true positive + false positive).
Recall is (# correct positive predictions) / (# ground truth positives), or (true positive) / (true positive + false negative).
Precision measures how reliable your model's positive predictions are. 1.0 precision means the model did not misidentify any negatives as positives, but it may have missed some positives.
Recall measures how good your model is at identifying all the positive examples. 1.0 recall means your model identified all the positives.
Recall is also known as sensitivity.
F1 = 2 * precision * recall / (precision + recall)
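A minimal Python sketch of these formulas, assuming binary 0/1 labels and predictions (the function and variable names are just for illustration):
<syntaxhighlight lang="python">
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary 0/1 labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # correct positives / predicted positives
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # correct positives / ground-truth positives
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: 3 ground-truth positives; the model predicts 4 positives, 2 of which are correct.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 1, 0]
print(precision_recall_f1(y_true, y_pred))  # (0.5, 0.666..., 0.571...)
</syntaxhighlight>
In practice, scikit-learn's precision_score, recall_score, and f1_score compute the same quantities.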
====Precision Recall Curve====
# Take all of the predictions and rank them by confidence.
# Go down the ranked list and compute the precision and recall at each cutoff.
#* The recall will go up since you capture more and more of the ground-truth positives as you include more predictions.
#* However the precision will tend to go down since lower-confidence predictions are less likely to be correct.
The area under the precision-recall curve is the average precision (AP).
Mean average precision (mAP) is the average AP over all classes.
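A sketch of this procedure, assuming per-prediction confidence scores and binary ground-truth labels (the names and the simple rectangle-rule AP approximation are illustrative, not a reference implementation):
<syntaxhighlight lang="python">
def precision_recall_curve(scores, labels):
    """Rank predictions by confidence, then record precision and recall at each cutoff."""
    ranked = sorted(zip(scores, labels), key=lambda x: -x[0])  # highest confidence first
    total_positives = sum(labels)
    tp = fp = 0
    precisions, recalls = [], []
    for _, label in ranked:
        if label == 1:
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / total_positives)
    return precisions, recalls

def average_precision(precisions, recalls):
    """Approximate the area under the PR curve as a sum of precision * recall increment."""
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.20]  # model confidences
labels = [1, 1, 0, 1, 0, 0]                    # ground truth for each prediction
p, r = precision_recall_curve(scores, labels)
print(average_precision(p, r))  # ~0.92
</syntaxhighlight>
For a multi-class model you would repeat this per class and average the resulting APs to get mAP.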
====ROC Curve====
[https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc Google ML Crash Course ROC and AUC]
True Positive Rate (TPR) = (true positive) / (true positive + false negative).
False Positive Rate (FPR) = (false positive) / (false positive + true negative).
An ROC curve plots TPR vs FPR as the classification threshold is varied.
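A sketch of how the curve is built, again assuming confidence scores and binary labels (names are illustrative): sweep the threshold from high to low and record the (FPR, TPR) point at each step.
<syntaxhighlight lang="python">
def roc_points(scores, labels):
    """Sweep the decision threshold from high to low and record (FPR, TPR) at each step."""
    ranked = sorted(zip(scores, labels), key=lambda x: -x[0])
    num_pos = sum(labels)
    num_neg = len(labels) - num_pos
    tp = fp = 0
    points = [(0.0, 0.0)]  # threshold above every score: nothing is predicted positive
    for _, label in ranked:
        if label == 1:
            tp += 1  # this prediction now counts as a true positive
        else:
            fp += 1  # ... or as a false positive
        points.append((fp / num_neg, tp / num_pos))
    return points

scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.20]
labels = [1, 1, 0, 1, 0, 0]
print(roc_points(scores, labels))  # list of (FPR, TPR) pairs ending at (1.0, 1.0)
</syntaxhighlight>
The area under this curve is the ROC AUC discussed in the crash-course link above.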