Supervised Learning


Some notes on supervised learning

Metrics

Precision and Recall

Google ML Crash Course Precision and Recall

Precision is (# correct positive predictions) / (# positive predictions), i.e. (true positives) / (true positives + false positives).
Recall is (# correct positive predictions) / (# ground-truth positives), i.e. (true positives) / (true positives + false negatives).

Precision measures how reliable your model's positive predictions are. A precision of 1.0 means the model did not misidentify any negatives as positives (no false positives), but it may have missed some positives.
Recall measures how good your model is at identifying all the positive examples. A recall of 1.0 means your model identified all the positives (no false negatives).
Recall is also known as sensitivity.

F1 = 2 * precision * recall / (precision + recall)
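Below is a minimal Python sketch of these three formulas, assuming binary 0/1 labels; the function name and the example lists are made up for illustration.

  def precision_recall_f1(y_true, y_pred):
      # Count true positives, false positives, and false negatives.
      tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
      fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
      fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
      precision = tp / (tp + fp) if (tp + fp) else 0.0
      recall = tp / (tp + fn) if (tp + fn) else 0.0
      f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
      return precision, recall, f1

  # Hypothetical example: 3 true positives, 1 false positive, 1 false negative.
  y_true = [1, 0, 1, 1, 0, 1]
  y_pred = [1, 0, 0, 1, 1, 1]
  print(precision_recall_f1(y_true, y_pred))  # (0.75, 0.75, 0.75)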

Precision Recall Curve

  1. Take all of the predictions and rank them by confidence.
  2. Go down the ranked list and, at each cutoff, compute the precision and recall over the predictions kept so far.
    • The recall will go up as you include more predictions, since you capture more and more of the positives.
    • However, the precision will tend to go down, since lower-confidence predictions are less likely to be correct.

The area under the precision-recall curve is the average precision (AP).
Mean average precision (mAP) is the AP averaged over all classes.
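A minimal Python sketch of this ranking procedure, assuming binary labels and per-prediction confidence scores (both lists are made-up example inputs); average precision is computed here as a simple step-wise sum under the curve.

  def precision_recall_curve(labels, scores):
      # Rank predictions by confidence, highest first.
      order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
      total_pos = sum(labels)
      tp = fp = 0
      precisions, recalls = [], []
      for i in order:
          if labels[i] == 1:
              tp += 1
          else:
              fp += 1
          precisions.append(tp / (tp + fp))
          recalls.append(tp / total_pos)
      return precisions, recalls

  def average_precision(precisions, recalls):
      # Step-wise area under the precision-recall curve.
      ap, prev_recall = 0.0, 0.0
      for p, r in zip(precisions, recalls):
          ap += p * (r - prev_recall)
          prev_recall = r
      return ap

  labels = [1, 0, 1, 1, 0]
  scores = [0.9, 0.8, 0.7, 0.4, 0.2]
  p, r = precision_recall_curve(labels, scores)
  print(average_precision(p, r))  # ~0.81 for this toy example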

ROC Curve

Google ML Crash Course ROC and AUC

True Positive Rate (TPR) = (true positive) / (true positive + false negative).
False Positive Rate (FPR) = (false positive) / (false positive + true negative).

An ROC (receiver operating characteristic) curve plots TPR vs FPR as the classification threshold varies.
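A minimal Python sketch, assuming binary labels and confidence scores (made-up example inputs): sweep the decision threshold down the sorted scores and record one (FPR, TPR) point per threshold.

  def roc_curve(labels, scores):
      # Rank predictions by confidence, highest first.
      order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
      total_pos = sum(labels)
      total_neg = len(labels) - total_pos
      tp = fp = 0
      points = [(0.0, 0.0)]  # threshold above every score: nothing predicted positive
      for i in order:
          if labels[i] == 1:
              tp += 1
          else:
              fp += 1
          points.append((fp / total_neg, tp / total_pos))
      return points  # list of (FPR, TPR) pairs ending at (1.0, 1.0)

  labels = [1, 0, 1, 1, 0]
  scores = [0.9, 0.8, 0.7, 0.4, 0.2]
  print(roc_curve(labels, scores))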