* Added links to more community notebooks
Added links to 3 more community notebooks from the git repo: https://github.com/abhimishra91/transformers-tutorials
Different Transformers models are fine-tuned on datasets using PyTorch
* Update README.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* [hf_api] Attach all unknown attributes for future-proof compatibility
* [Pipeline] NerPipeline is really a TokenClassificationPipeline
* modelcard.py: I don't think we need to force the download
* Remove config, tokenizer from SUPPORTED_TASKS as we're moving to one model = one weight + one tokenizer
* FillMaskPipeline: also output token in string form
* TextClassificationPipeline: add an option to return all scores, not just the argmax (sketch after this group)
* Update docs/source/main_classes/pipelines.rst
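A minimal sketch of the two pipeline output changes above; the checkpoint names and example inputs are illustrative, and it assumes the all-scores option is exposed as the `return_all_scores` keyword:

```python
from transformers import pipeline

# FillMaskPipeline: each prediction now carries the decoded token string
# ("token_str") alongside the raw token id.
fill_mask = pipeline("fill-mask", model="distilroberta-base")
for pred in fill_mask("Paris is the <mask> of France."):
    print(pred["token"], pred["token_str"], pred["score"])

# TextClassificationPipeline: return the full label/score distribution
# instead of only the argmax label.
classifier = pipeline("sentiment-analysis", return_all_scores=True)
print(classifier("This release is great!"))
```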
* GLUE task cleanup
* Enable writing the cache to cache_dir in case the dataset lives in a read-only filesystem (sketch after this group)
* Differentiate matched vs. mismatched for MNLI metrics
* Style
* Fix pytype
* Fix type
* Use cache_dir in the MNLI mismatched eval dataset
* Small Tweaks
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
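A minimal sketch of the cache_dir behavior above, assuming the `GlueDataset`/`GlueDataTrainingArguments` API of this release; the paths are illustrative:

```python
from transformers import AutoTokenizer, GlueDataset, GlueDataTrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
data_args = GlueDataTrainingArguments(task_name="mnli", data_dir="/mnt/readonly/glue/MNLI")
# Cached features are written under cache_dir instead of next to the
# (possibly read-only) data_dir.
train_dataset = GlueDataset(data_args, tokenizer=tokenizer, cache_dir="/tmp/glue_cache")
```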
* Kill model archive maps
* Fixup
* Also kill model_archive_map for MaskedBertPreTrainedModel
* Unhook config_archive_map
* Tokenizers: align with model id changes
* make style && make quality
* Fix CI
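The archive-map removal is internal: pretrained identifiers now resolve as model ids rather than through per-class hardcoded URL maps, per "one model = one weight + one tokenizer". User code is unchanged, e.g.:

```python
from transformers import AutoConfig, AutoTokenizer

# "bert-base-uncased" is looked up as a model id, not in an archive map.
config = AutoConfig.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```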
* pass on tokenizer to pipeline
* Order input names when converting to ONNX (export sketch after this group)
* update style
* remove unused imports
* Make the ordered inputs list mutable
* Add test for a custom BERT model
* remove unused imports
* better api
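A minimal sketch of the ONNX export these commits touch, assuming the `convert_graph_to_onnx.convert` helper of this release; the checkpoint name and output path are illustrative:

```python
from pathlib import Path
from transformers.convert_graph_to_onnx import convert

# The tokenizer argument is forwarded to the underlying pipeline, and
# graph input names are emitted in the order the model's forward() expects.
convert(
    framework="pt",
    model="bert-base-cased",
    output=Path("onnx/bert-base-cased.onnx"),
    opset=11,
    tokenizer="bert-base-cased",
)
```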
* Improve automatic setting of the global attention mask (usage sketch after this group)
* fix longformer bug
* fix global attention mask in test
* Fix global attention mask flattening
* fix slow tests
* update docstring
* update docs and make more robust
* improve attention mask
* add multiple choice for longformer
* add models to docs
* adapt docstring
* add test to longformer
* Add Longformer for multiple choice to `__init__` and `modeling_auto`
* fix tests
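A minimal sketch of the global attention mask these commits refine; the checkpoint and the choice of which token gets global attention are illustrative:

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("A long document " * 500, return_tensors="pt")
# 0 = local (sliding window) attention, 1 = global attention.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # give the <s> token global attention
outputs = model(**inputs, global_attention_mask=global_attention_mask)
```

The new `LongformerForMultipleChoice` head follows the same pattern, with inputs shaped `(batch, num_choices, seq_len)`.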