# DeeBERT: Early Exiting for *BERT

This is the code base for the paper [DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference](https://www.aclweb.org/anthology/2020.acl-main.204), adapted from its original code base.

The original code base also provides instructions for downloading sample models that we have trained in advance.

## Usage

There are three scripts in this folder that can be run directly.

In each script, set the following before running:

- `PATH_TO_DATA`: path to the GLUE dataset.
- `--output_dir`: directory for saving fine-tuned models. Default: `./saved_models`.
- `--plot_data_dir`: directory for saving evaluation results. Default: `./results`. Results are printed to stdout and also saved to `.npy` files in this directory to facilitate plotting figures and further analyses.
- `MODEL_TYPE`: `bert` or `roberta`.
- `MODEL_SIZE`: `base` or `large`.
- `DATASET`: `SST-2`, `MRPC`, `RTE`, `QNLI`, `QQP`, or `MNLI`.
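As a rough sketch, the variables above combine into a model id and data path along these lines (the exact logic in the bundled `.sh` scripts may differ; check them before running):

```shell
PATH_TO_DATA=/path/to/glue   # set to your GLUE download
MODEL_TYPE=bert              # bert or roberta
MODEL_SIZE=base              # base or large
DATASET=MRPC                 # SST-2, MRPC, RTE, QNLI, QQP, or MNLI

# Checkpoint id: bert checkpoints carry an -uncased suffix, roberta does not
MODEL_NAME=${MODEL_TYPE}-${MODEL_SIZE}
if [ "$MODEL_TYPE" = "bert" ]; then
  MODEL_NAME=${MODEL_NAME}-uncased
fi

echo "model: $MODEL_NAME  task: $DATASET  data: $PATH_TO_DATA/$DATASET"
```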

### `train_deebert.sh`

This script fine-tunes DeeBERT models.

### `eval_deebert.sh`

This script evaluates each exit layer of a fine-tuned DeeBERT model.

### `entropy_eval.sh`

This script evaluates fine-tuned DeeBERT models across a range of early-exit entropy thresholds.
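The threshold sweep can be sketched as a simple loop; the flag name `--early_exit_entropy` and the threshold values below are assumptions for illustration, so verify them against `run_glue_deebert.py` and the actual script:

```shell
# Hypothetical sweep over early-exit entropy thresholds, as entropy_eval.sh does.
# A higher threshold lets more examples exit at earlier layers (faster, less accurate).
THRESHOLDS="0.1 0.2 0.3 0.4 0.5 0.6"
for ENTROPY in $THRESHOLDS; do
  echo "would run: python run_glue_deebert.py --early_exit_entropy $ENTROPY ..."
done
```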

## Citation

Please cite our paper if you find the resource useful:

```
@inproceedings{xin-etal-2020-deebert,
    title = "{D}ee{BERT}: Dynamic Early Exiting for Accelerating {BERT} Inference",
    author = "Xin, Ji  and
      Tang, Raphael  and
      Lee, Jaejun  and
      Yu, Yaoliang  and
      Lin, Jimmy",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.204",
    pages = "2246--2251",
}
```