# Examples

Version 2.9 of 🤗 Transformers introduced a new `Trainer` class for PyTorch, and its equivalent `TFTrainer` for TF 2.
Running the examples requires PyTorch 1.3.1+ or TensorFlow 2.2+.
Here is the list of all our examples:
- grouped by task (all official examples work for multiple models)
- with information on whether they are built on top of `Trainer`/`TFTrainer` (if not, they still work, they might just lack some features),
- whether or not they leverage the 🤗 Datasets library,
- links to Colab notebooks to walk through the scripts and run them easily,
- links to Cloud deployments to be able to deploy large-scale trainings in the Cloud with little to no setup.
## Important note
To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. Execute the following steps in a new virtual environment:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install -r ./examples/requirements.txt
```
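As an optional sanity check (not part of the official instructions), you can verify that the source install is the one Python picks up by printing the installed version; it should match your checkout rather than the last PyPI release:

```python
# Optional sanity check: confirm the from-source install is being imported.
import transformers

print(transformers.__version__)
```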
Alternatively, you can check out and run the version of the examples that matches your currently installed version of Transformers (for instance, with v3.4.0):

```bash
git checkout tags/v3.4.0
```
## The Big Table of Tasks
| Task | Example datasets | `Trainer` support | `TFTrainer` support | 🤗 Datasets | Colab |
|---|---|---|---|---|---|
| `language-modeling` | Raw text | ✅ | - | ✅ | |
| `text-classification` | GLUE, XNLI | ✅ | ✅ | ✅ | |
| `token-classification` | CoNLL NER | ✅ | ✅ | ✅ | - |
| `multiple-choice` | SWAG, RACE, ARC | ✅ | ✅ | - | |
| `question-answering` | SQuAD | ✅ | ✅ | ✅ | - |
| `text-generation` | - | n/a | n/a | - | |
| `distillation` | All | - | - | - | - |
| `summarization` | CNN/Daily Mail | ✅ | - | - | - |
| `translation` | WMT | ✅ | - | - | - |
| `bertology` | - | - | - | - | - |
| `adversarial` | HANS | ✅ | - | - | - |
## One-click Deploy to Cloud (wip)
Coming soon!
## Running on TPUs
When using TensorFlow, TPUs are supported out of the box as a `tf.distribute.Strategy`.

When using PyTorch, we support TPUs thanks to `pytorch/xla`. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the very detailed `pytorch/xla` README.
In this repo, we provide a very simple launcher script named `xla_spawn.py` that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a `--num_cores` flag to this script, followed by your regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for `torch.distributed`).

Note that this approach does not work for examples that use `pytorch-lightning`.

For example, for `run_glue`:
```bash
python examples/xla_spawn.py --num_cores 8 \
  examples/text-classification/run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mnli \
  --data_dir ./data/glue_data/MNLI \
  --output_dir ./models/tpu \
  --overwrite_output_dir \
  --do_train \
  --do_eval \
  --num_train_epochs 1 \
  --save_steps 20000
```
Feedback and more use cases and benchmarks involving TPUs are welcome; please share with the community.
## Logging & Experiment tracking
You can easily log and monitor your training runs. The following integrations are currently supported:
### Weights & Biases
To use Weights & Biases, install the `wandb` package with:

```bash
pip install wandb
```
Then log in from the command line:

```bash
wandb login
```
If you are in Jupyter or Colab, you should log in with:

```python
import wandb
wandb.login()
```
Whenever you use the `Trainer` or `TFTrainer` classes, your losses, evaluation metrics, model topology and gradients (for `Trainer` only) will automatically be logged.
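For instance, here is a minimal sketch (not one of the official example scripts; the model, dataset slice, and project name are placeholders, and the `WANDB_PROJECT` environment variable is optional) showing that no extra code is needed for the logging itself:

```python
# Minimal sketch: with wandb installed and `wandb login` done, this Trainer run
# is tracked automatically. Model name, dataset slice and project are placeholders.
import os

from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

os.environ["WANDB_PROJECT"] = "transformers-demo"  # optional: name of the W&B project

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny slice of MRPC, tokenized and formatted for the default data collator.
train_dataset = load_dataset("glue", "mrpc", split="train[:1%]")
train_dataset = train_dataset.map(
    lambda batch: tokenizer(
        batch["sentence1"], batch["sentence2"],
        truncation=True, padding="max_length", max_length=128,
    ),
    batched=True,
)
train_dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "label"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./wandb-demo", num_train_epochs=1, logging_steps=10),
    train_dataset=train_dataset,
)
trainer.train()  # losses and metrics show up in the W&B run automatically
```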
When using 🤗 Transformers with PyTorch Lightning, runs can be tracked through `WandbLogger`. Refer to the related documentation & examples.
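A minimal sketch of how this is wired up (not taken from the example scripts; the project name is a placeholder):

```python
# Attach a WandbLogger to a pytorch-lightning Trainer; metrics logged by your
# LightningModule are then sent to Weights & Biases.
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

wandb_logger = WandbLogger(project="transformers-lightning-demo")
trainer = pl.Trainer(logger=wandb_logger, max_epochs=1)
# trainer.fit(model, train_dataloader)  # pass your LightningModule and data as usual
```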
### Comet.ml
To use `comet_ml`, install the Python package with:

```bash
pip install comet_ml
```
or if in a Conda environment:

```bash
conda install -c comet_ml -c anaconda -c conda-forge comet_ml
```
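As a rough sketch of how this ties into the example scripts (the API key value is a placeholder, and the project-name variable is an assumption about the Comet integration), configuring Comet is mostly a matter of environment variables once `comet_ml` is installed:

```python
# Sketch only: with comet_ml installed, Trainer/TFTrainer runs are reported to
# Comet automatically. The values below are placeholders.
import os

os.environ["COMET_API_KEY"] = "<your-comet-api-key>"   # required by comet_ml
os.environ["COMET_PROJECT_NAME"] = "transformers"      # assumed optional project name

# ...then build and run your Trainer/TFTrainer as usual; metrics are sent to Comet.
```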