# Evaluation

The evaluation (eval) submodule provides utilities for computing metrics that assess NLP model performance. It includes general evaluation metrics for classification scenarios, such as accuracy, precision, recall, and F1 score, as well as evaluation utilities for specialized tasks like question answering and sentence embedding.
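As a minimal sketch of the kind of classification metrics described above, the snippet below computes accuracy, precision, recall, and F1 with scikit-learn; the variable names and toy labels are illustrative and not part of this submodule's API.

```python
# Illustrative example only: computing standard classification metrics
# with scikit-learn on toy predictions (not this submodule's actual API).
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 1, 0, 1]  # ground-truth labels
y_pred = [0, 1, 0, 0, 1]  # model predictions

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
```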