huggingface-transformers/notebooks
Jessica Yung 143b564e59
Add pip install update to resolve import error in transformers notebook (#8616)
* Add pip install update to resolve import error

Add `pip install --upgrade tensorflow-gpu` to remove the error below:
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-094fadb93f3f> in <module>()
      1 import torch
----> 2 from transformers import AutoModel, AutoTokenizer, BertTokenizer
      3 
      4 torch.set_grad_enabled(False)

4 frames
/usr/local/lib/python3.6/dist-packages/transformers/__init__.py in <module>()
    133 
    134 # Pipelines
--> 135 from .pipelines import (
    136     Conversation,
    137     ConversationalPipeline,

/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <module>()
     46     import tensorflow as tf
     47 
---> 48     from .modeling_tf_auto import (
     49         TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING,
     50         TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,

/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_auto.py in <module>()
     49 from .configuration_utils import PretrainedConfig
     50 from .file_utils import add_start_docstrings
---> 51 from .modeling_tf_albert import (
     52     TFAlbertForMaskedLM,
     53     TFAlbertForMultipleChoice,

/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_albert.py in <module>()
     22 import tensorflow as tf
     23 
---> 24 from .activations_tf import get_tf_activation
     25 from .configuration_albert import AlbertConfig
     26 from .file_utils import (

/usr/local/lib/python3.6/dist-packages/transformers/activations_tf.py in <module>()
     52     "gelu": tf.keras.layers.Activation(gelu),
     53     "relu": tf.keras.activations.relu,
---> 54     "swish": tf.keras.activations.swish,
     55     "silu": tf.keras.activations.swish,
     56     "gelu_new": tf.keras.layers.Activation(gelu_new),

AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'
```
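
The root cause is that `tf.keras.activations.swish` only exists in newer TensorFlow releases (it was added around TF 2.2), while the preinstalled Colab runtime ships an older version. As a hypothetical diagnostic cell (not part of the notebook itself), you can check this directly:

```python
import tensorflow as tf

# transformers' activations_tf.py references tf.keras.activations.swish,
# which is missing from older TensorFlow releases (added around TF 2.2).
print(tf.__version__)
print(hasattr(tf.keras.activations, "swish"))  # False on the old runtime
```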
I have tried running the Colab notebook after this change and it seems to work fine (all the cells run with no errors).

* Update notebooks/02-transformers.ipynb

We only need to upgrade `tensorflow`, not `tensorflow-gpu`.
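
For reference, a minimal sketch of the resulting setup cell, assuming the notebook's existing install step is a plain `pip install transformers` (the exact cell contents in the notebook may differ):

```python
# Upgrade TensorFlow so that tf.keras.activations.swish is available,
# then install transformers as the notebook already does.
!pip install --upgrade tensorflow
!pip install transformers
```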

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
2020-11-23 09:58:52 -05:00
| File | Last commit | Date |
| --- | --- | --- |
| 01-training-tokenizers.ipynb | Update tokenizers to 0.7.0-rc5 (#3705) | 2020-04-10 14:23:49 -04:00 |
| 02-transformers.ipynb | Add pip install update to resolve import error in transformers notebook (#8616) | 2020-11-23 09:58:52 -05:00 |
| 03-pipelines.ipynb | Fixed open in colab link (#6825) | 2020-08-30 18:21:00 +08:00 |
| 04-onnx-export.ipynb | Update ONNX notebook to include section on quantization. (#6831) | 2020-08-31 21:28:00 +02:00 |
| 05-benchmark.ipynb | [bart] fix config.classif_dropout (#7593) | 2020-10-06 11:33:51 -04:00 |
| README.md | add new notebooks (#8246) | 2020-11-02 20:21:55 +01:00 |


🤗 Transformers Notebooks

You can find here a list of the official notebooks provided by Hugging Face.

Also, we would like to list here interesting content created by the community. If you wrote some notebook(s) leveraging 🤗 Transformers and would like them to be listed here, please open a Pull Request so they can be included under the Community notebooks.

Hugging Face's notebooks 🤗

| Notebook | Description |
| --- | --- |
| Getting Started Tokenizers | How to train and use your very own tokenizer |
| Getting Started Transformers | How to easily start using transformers |
| How to use Pipelines | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers |
| How to train a language model | Highlight all the steps to effectively train a Transformer model on custom data |
| How to generate text | How to use different decoding methods for language generation with transformers |
| How to export model to ONNX | Highlight how to export and run inference workloads through ONNX |
| How to use Benchmarks | How to benchmark models with transformers |
| Reformer | How Reformer pushes the limits of language modeling |

Community notebooks:

| Notebook | Description | Author |
| --- | --- | --- |
| Train T5 in TensorFlow 2 | How to train T5 for any task using TensorFlow 2. This notebook demonstrates a question-answering task implemented in TensorFlow 2 using SQuAD | Muhammad Harris |
| Train T5 on TPU | How to train T5 on SQuAD with Transformers and nlp | Suraj Patil |
| Fine-tune T5 for Classification and Multiple Choice | How to fine-tune T5 for classification and multiple-choice tasks using a text-to-text format with PyTorch Lightning | Suraj Patil |
| Fine-tune DialoGPT on New Datasets and Languages | How to fine-tune the DialoGPT model on a new dataset for open-dialog conversational chatbots | Nathan Cooper |
| Long Sequence Modeling with Reformer | How to train on sequences as long as 500,000 tokens with Reformer | Patrick von Platen |
| Fine-tune BART for Summarization | How to fine-tune BART for summarization with fastai using blurr | Wayde Gilliam |
| Fine-tune a pre-trained Transformer on anyone's tweets | How to generate tweets in the style of your favorite Twitter account by fine-tuning a GPT-2 model | Boris Dayma |
| A Step by Step Guide to Tracking Hugging Face Model Performance | A quick tutorial for training NLP models with Hugging Face and visualizing their performance with Weights & Biases | Jack Morris |
| Pretrain Longformer | How to build a "long" version of existing pretrained models | Iz Beltagy |
| Fine-tune Longformer for QA | How to fine-tune a Longformer model for the QA task | Suraj Patil |
| Evaluate Model with 🤗nlp | How to evaluate Longformer on TriviaQA with nlp | Patrick von Platen |
| Fine-tune T5 for Sentiment Span Extraction | How to fine-tune T5 for sentiment span extraction using a text-to-text format with PyTorch Lightning | Lorenzo Ampil |
| Fine-tune DistilBert for Multiclass Classification | How to fine-tune DistilBert for multiclass classification with PyTorch | Abhishek Kumar Mishra |
| Fine-tune BERT for Multi-label Classification | How to fine-tune BERT for multi-label classification using PyTorch | Abhishek Kumar Mishra |
| Fine-tune T5 for Summarization | How to fine-tune T5 for summarization in PyTorch and track experiments with WandB | Abhishek Kumar Mishra |
| Speed up Fine-Tuning in Transformers with Dynamic Padding / Bucketing | How to speed up fine-tuning by a factor of 2 using dynamic padding / bucketing | Michael Benesty |
| Pretrain Reformer for Masked Language Modeling | How to train a Reformer model with bi-directional self-attention layers | Patrick von Platen |
| Expand and Fine-Tune Sci-BERT | How to increase the vocabulary of a pretrained SciBERT model from AllenAI on the CORD dataset and pipeline it | Tanmay Thakur |
| Fine-tune Electra and interpret with Integrated Gradients | How to fine-tune Electra for sentiment analysis and interpret predictions with Captum Integrated Gradients | Eliza Szczechla |
| Fine-tune a non-English GPT-2 Model with the Trainer class | How to fine-tune a non-English GPT-2 model with the Trainer class | Philipp Schmid |
| Fine-tune a DistilBERT Model for Multi-label Classification | How to fine-tune a DistilBERT model for the multi-label classification task | Dhaval Taunk |
| Fine-tune ALBERT for sentence-pair classification | How to fine-tune an ALBERT model or another BERT-based model for the sentence-pair classification task | Nadir El Manouzi |
| Fine-tune RoBERTa for sentiment analysis | How to fine-tune a RoBERTa model for sentiment analysis | Dhaval Taunk |
| Evaluating Question Generation Models | How accurate are the answers to questions generated by your seq2seq transformer model? | Pascal Zoleko |
| Classify text with DistilBERT and TensorFlow | How to fine-tune DistilBERT for text classification in TensorFlow | Peter Bayerle |
| Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail | How to warm-start an EncoderDecoderModel with a bert-base-uncased checkpoint for summarization on CNN/Dailymail | Patrick von Platen |
| Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum | How to warm-start a shared EncoderDecoderModel with a roberta-base checkpoint for summarization on BBC/XSum | Patrick von Platen |