Some notes on supervised learning
==Metrics==
===Precision and Recall===
Precision is (# correct) / (# predictions) or (true positive) / (true positive + false positive).
Recall is (# correct) / (# ground truth) or (true positive) / (true positive + false negative).
Precision measures how well your model avoids false positives. A precision of 1.0 means the model did not misidentify any negatives as positives, but it may have missed some positives.
Recall measures how well your model identifies all of the positive examples. A recall of 1.0 means your model identified all the positives.
F1 = 2 * precision * recall / (precision + recall)
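A minimal sketch of these formulas in Python; the function name and the example counts are made up for illustration:
<syntaxhighlight lang="python">
# Compute precision, recall, and F1 from raw counts.
# tp = true positives, fp = false positives, fn = false negatives (hypothetical counts).
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: 8 true positives, 2 false positives, 4 false negatives.
print(precision_recall_f1(8, 2, 4))  # (0.8, 0.666..., 0.727...)
</syntaxhighlight>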
====Precision Recall Curve====
# Take all of the predictions and rank them by confidence.
# Go down the ranked predictions and compute the precision and recall at each cutoff.
#* The recall will go up since you capture more and more of the positives as you include more predictions.
#* However, the precision will tend to go down since lower-confidence predictions are less likely to be correct.
The area under the precision-recall curve (AUC) is the average precision (AP).
Mean average precision (mAP) is the average of the AP over all classes.
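Below is a rough sketch of this procedure, assuming binary labels and a confidence score per prediction; the function names and example data are illustrative, and the area is computed with a simple rectangular sum rather than any particular library's interpolation:
<syntaxhighlight lang="python">
import numpy as np

def precision_recall_points(y_true, scores):
    """Rank predictions by confidence, then compute precision and recall at each cutoff."""
    order = np.argsort(scores)[::-1]      # highest confidence first
    y_true = np.asarray(y_true)[order]
    tp = np.cumsum(y_true)                # true positives among the top-k predictions
    fp = np.cumsum(1 - y_true)            # false positives among the top-k predictions
    precision = tp / (tp + fp)
    recall = tp / y_true.sum()            # assumes at least one positive example
    return precision, recall

def average_precision(precision, recall):
    """Area under the precision-recall curve via a rectangular sum over recall increments."""
    recall = np.concatenate(([0.0], recall))
    return float(np.sum((recall[1:] - recall[:-1]) * precision))

# Toy example: 3 positives among 5 predictions.
p, r = precision_recall_points([1, 0, 1, 1, 0], [0.9, 0.8, 0.7, 0.4, 0.3])
print(average_precision(p, r))
</syntaxhighlight>
In practice, scikit-learn's <code>precision_recall_curve</code> and <code>average_precision_score</code> compute these directly.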
====ROC Curve====