LIMIT-BERT

Code and model for the EMNLP 2020 Findings paper:

LIMIT-BERT: Linguistic Informed Multi-task BERT

Contents

  1. Requirements
  2. Pre-trained Models
  3. How to use
  4. Training
  5. Evaluation
  6. Citation

Requirements

  • Python 3.6 or higher.
  • Cython 0.25.2 or any compatible version.
  • PyTorch 1.0.0+.
  • EVALB. Before starting, run make inside the EVALB/ directory to compile an evalb executable. This will be called from Python for evaluation.
  • pytorch-transformers 1.0.0+ or any compatible version (a setup sketch follows this list).
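
For reference, here is one way to install these dependencies and build EVALB, a sketch assuming pip and a standard C toolchain (the version pins simply mirror the list above):

pip install "cython>=0.25.2" "torch>=1.0" "pytorch-transformers>=1.0"

# Compile the evalb executable that is called during evaluation
cd EVALB && make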

Pre-trained Models (PyTorch)

Pre-trained models are available for download from Google Drive; see the original repo linked below for the download links. The model is also hosted on the Hugging Face model hub and can be loaded directly, as shown in the next section.

How to use

from transformers import AutoTokenizer, AutoModel

# Load the LIMIT-BERT tokenizer and pre-trained encoder from the model hub
tokenizer = AutoTokenizer.from_pretrained("cooelf/limitbert")
model = AutoModel.from_pretrained("cooelf/limitbert")
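
A minimal usage sketch continuing from the snippet above, assuming a reasonably recent transformers version (the example sentence is illustrative; depending on the version, the forward pass returns a tuple or a model-output object, and indexing with [0] works for both):

import torch

# Encode a sentence and extract its contextual representations
inputs = tokenizer("LIMIT-BERT learns syntax and semantics jointly.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

last_hidden_state = outputs[0]  # shape: (batch_size, sequence_length, hidden_size)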

Please see our original repo for the training scripts.

https://github.com/cooelf/LIMIT-BERT

Training

To train LIMIT-BERT, simply run:

sh run_limitbert.sh

Evaluation Instructions

To evaluate a trained model, set the model path in test_bert.sh and run:

sh test_bert.sh

Citation

@article{zhou2019limit,
  title={{LIMIT-BERT}: Linguistic informed multi-task {BERT}},
  author={Zhou, Junru and Zhang, Zhuosheng and Zhao, Hai},
  journal={arXiv preprint arXiv:1910.14296},
  year={2019}
}