Added a model card README.md for my pretrained model. (#5325)

* Create README.md

* Removed unnecessary link from README.md

* Update README.md
Pradhy729 authored 2020-06-29 01:29:14 -07:00; committed by GitHub
Parent 7cb52f53ef
Commit 9209d36f93
No known key found for this signature
GPG key ID: 4AEE18F83AFDEB23
1 changed file with 8 additions and 0 deletions

README.md

@@ -0,0 +1,8 @@
This model is pre-trained on blog articles from AWS Blogs.
## Pre-training corpora
The input text consists of around 3000 blog articles from the [AWS Blogs website](https://aws.amazon.com/blogs/) covering technical subject matter, including AWS products, tools, and tutorials.
## Pre-training details
I picked a RoBERTa architecture for masked language modeling (6-layer, 768-hidden, 12-heads, 82M parameters) and its corresponding ByteLevelBPE tokenization strategy. I then followed HuggingFace's Transformers [blog post](https://huggingface.co/blog/how-to-train) to train the model.
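For reference, here is a minimal sketch of what that configuration looks like with the `tokenizers` and `transformers` libraries, in the spirit of the blog post. The corpus file name, output directory, vocabulary size, and special tokens are illustrative assumptions, not the exact values used for this model.

```python
import os

from tokenizers import ByteLevelBPETokenizer
from transformers import RobertaConfig, RobertaForMaskedLM

# Train a byte-level BPE tokenizer on the raw blog-article text.
# File name and vocabulary size are illustrative assumptions.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["aws_blogs.txt"],
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
os.makedirs("aws-roberta", exist_ok=True)
tokenizer.save_model("aws-roberta")

# 6-layer, 768-hidden, 12-head RoBERTa for masked language modeling.
config = RobertaConfig(
    vocab_size=52_000,
    num_hidden_layers=6,
    hidden_size=768,
    num_attention_heads=12,
    max_position_embeddings=514,  # 512 tokens plus 2 special positions
)
model = RobertaForMaskedLM(config=config)
print(f"{model.num_parameters():,} parameters")
```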
I used the following training set-up: 28k training steps with batches of 64 sequences of length 512 and an initial learning rate of 5e-5. The model achieved a training loss of 3.6 on the MLM task over 10 epochs.
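A hedged sketch of that training set-up with the `Trainer` API is shown below; the corpus path, tokenizer directory, and checkpointing settings are assumptions added for illustration, not the exact script used.

```python
from transformers import (
    DataCollatorForLanguageModeling,
    LineByLineTextDataset,
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Same 6-layer configuration as above; tokenizer directory and corpus path are assumptions.
tokenizer = RobertaTokenizerFast.from_pretrained("aws-roberta", model_max_length=512)
model = RobertaForMaskedLM(
    RobertaConfig(
        vocab_size=tokenizer.vocab_size,
        num_hidden_layers=6,
        hidden_size=768,
        num_attention_heads=12,
        max_position_embeddings=514,
    )
)

# Sequences of length 512 with the usual 15% masking rate for the MLM objective.
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="aws_blogs.txt", block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

training_args = TrainingArguments(
    output_dir="./aws-roberta",
    num_train_epochs=10,             # 10 epochs (~28k steps on this corpus)
    per_device_train_batch_size=64,  # batches of 64 sequences
    learning_rate=5e-5,              # initial learning rate
    save_steps=2_000,
    save_total_limit=2,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model("./aws-roberta")
```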