Vision Evaluation

Introduction

This repo contains evaluation metric code used in Microsoft Cognitive Services Computer Vision for tasks such as image classification, object detection, image captioning, and image matting.

This repo

  • provides centralized implementations of these evaluation metrics, and
  • defines the contract for metric calculation code in the Evaluator class, so that custom evaluators can be brought under the same interface (a minimal sketch is given below).

This repo is not trying to reinvent the wheel; it provides centralized default implementations of the most common metrics across different vision tasks, so that dev and research teams can compare model performance on the same footing. As expected, many implementations are backed by the well-known sklearn or pycocotools packages.
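
As a rough illustration, a custom evaluator under this contract could look like the sketch below. The import path and the method names add_predictions and get_report are assumptions made for illustration, not a verbatim copy of this repo's API:

    import numpy as np
    from vision_evaluation.evaluators import Evaluator  # assumed import path

    class MeanTop1ConfidenceEvaluator(Evaluator):
        """Hypothetical custom evaluator: reports the mean top-1 confidence."""

        def __init__(self):
            super().__init__()
            self._confidences = []

        def add_predictions(self, predictions, targets):
            # predictions: (n_samples, n_classes) scores; targets: (n_samples,) labels
            self._confidences.extend(np.max(predictions, axis=1).tolist())

        def get_report(self):
            return {'mean_top1_confidence': float(np.mean(self._confidences))}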

Functionalities

This repo currently offers evaluation metrics for the following vision tasks:

  • Image classification:
    • TopKAccuracyEvaluator: computes the top-k accuracy for multiclass classification problems. A prediction is considered correct if the ground-truth label is among the k labels with the highest confidences (see the usage sketch after this list).
    • ThresholdAccuracyEvaluator: computes threshold-based accuracy (mainly for multilabel classification problems), i.e., the accuracy of predictions with confidence above a certain threshold.
    • AveragePrecisionEvaluator: computes the average precision, i.e., precision averaged across different confidence thresholds.
    • PrecisionEvaluator: computes precision.
    • RecallEvaluator: computes recall.
    • BalancedAccuracyScoreEvaluator: computes balanced accuracy, i.e., average recall across classes, for multiclass classification.
    • RocAucEvaluator: computes Area under the Receiver Operating Characteristic Curve.
    • F1ScoreEvaluator: computes the F1 score (precision and recall are reported as well).
    • EceLossEvaluator: computes the ECE loss, i.e., the expected calibration error, given the model confidences and true labels for a set of data points (a small sketch of the computation follows this list).
    • ConfusionMatrixEvaluator: computes the confusion matrix of a classification. By definition, a confusion matrix C is such that C_ij equals the number of observations known to be in group i and predicted to be in group j (https://en.wikipedia.org/wiki/Confusion_matrix).
  • Object detection:
    • CocoMeanAveragePrecisionEvaluator: computes COCO mean average precision (mAP) across different classes, under one or more IoU thresholds.
  • Image caption:
    • Evaluators for the Bleu, METEOR, ROUGE-L, CIDEr, and SPICE scores (see Additional Requirements below).
  • Image matting:
    • MeanIOUEvaluator: computes the mean intersection-over-union score.
    • ForegroundIOUEvaluator: computes the foreground intersection-over-union score.
    • BoundaryMeanIOUEvaluator: computes the boundary mean intersection-over-union score.
    • BoundaryForegroundIOUEvaluator: computes the boundary foreground intersection-over-union score.
    • L1ErrorEvaluator: computes the L1 error.
  • Image regression:
    • MeanLpErrorEvaluator: computes the mean Lp error (e.g. L1 error for p=1, L2 error for p=2, etc.).
  • Image retrieval:
    • RecallAtKEvaluator(k): computes Recall@k, the percentage of all relevant items that appear among the top-k results (a worked example follows the recommendations below).
    • PrecisionAtKEvaluator(k): computes Precision@k, the percentage of the top-k results that are relevant.
    • MeanAveragePrecisionAtK(k): computes Mean Average Precision@k, an information retrieval metric.
    • PrecisionRecallCurveNPointsEvaluator(k): computes a Precision-Recall Curve, interpolated at k points and averaged over all samples.
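
Below is a minimal usage sketch for one of the classification evaluators. The add_predictions/get_report calls follow the Evaluator contract described above; the import path, constructor signature, and report key are assumptions for illustration:

    import numpy as np
    from vision_evaluation.evaluators import TopKAccuracyEvaluator  # assumed import path

    # Scores for 4 samples over 3 classes, plus ground-truth class indices.
    predictions = np.array([[0.7, 0.2, 0.1],   # target 0 is top-1  -> counted
                            [0.1, 0.8, 0.1],   # target 1 is top-1  -> counted
                            [0.3, 0.2, 0.5],   # target 0 is second -> counted for k=2
                            [0.1, 0.2, 0.7]])  # target 0 is third  -> missed for k=2
    targets = np.array([0, 1, 0, 0])

    evaluator = TopKAccuracyEvaluator(2)
    evaluator.add_predictions(predictions, targets)
    print(evaluator.get_report())  # top-2 accuracy = 3/4 = 0.75 (report key may differ)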
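
To make the ECE loss concrete, here is a small self-contained sketch of the standard expected-calibration-error computation with equal-width confidence bins (the textbook definition, not necessarily this repo's exact implementation):

    import numpy as np

    def ece_loss(confidences, correct, n_bins=10):
        # ECE = sum over bins b of (|B_b| / n) * |accuracy(B_b) - mean confidence(B_b)|
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        loss = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
                loss += (in_bin.sum() / len(confidences)) * gap
        return loss

    # Ten predictions at confidence 0.8, of which exactly 8 are correct:
    # perfectly calibrated, so the ECE is 0.0.
    print(ece_loss([0.8] * 10, [1] * 8 + [0] * 2))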

While different machine learning problems/applications prefer different metrics, below are some general recommendations:

  • Multiclass classification: Top-1 Accuracy and Top-5 Accuracy
  • Multilabel classification: Average Precision, and Precision/Recall at k or at a confidence threshold, where k and the threshold can be very problem-specific
  • Object detection: mAP@IoU=30 and mAP@IoU=50
  • Image caption: Bleu, METEOR, ROUGE-L, CIDEr, SPICE
  • Image matting: Mean IOU, Foreground IOU, Boundary mean IOU, Boundary Foreground IOU, L1 Error
  • Image regression: Mean L1 Error, Mean L2 Error
  • Image retrieval: Recall@k, Precision@k, Mean Average Precision@k, Precision-Recall Curve
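
To make the retrieval metrics concrete, here is a tiny worked example of Recall@k and Precision@k computed directly from their definitions, independent of this repo's API:

    # Suppose a query has 4 relevant items in the corpus, and the top-5
    # retrieved results have these relevance flags (1 = relevant):
    top5 = [1, 0, 1, 1, 0]
    total_relevant = 4

    recall_at_5 = sum(top5) / total_relevant    # 3 / 4 = 0.75
    precision_at_5 = sum(top5) / len(top5)      # 3 / 5 = 0.60
    print(recall_at_5, precision_at_5)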

Additional Requirements

The image caption evaluators require the Java Runtime Environment (JRE) (Java 1.8.0) and some extra dependencies, which can be installed with pip install vision-evaluation[caption]. JRE is not required for the other evaluators, e.g., the image classification and object detection evaluation pipelines.