Update README.md (#7491)

The model is now fine-tuned on Transformers 3.1.0; the previous, out-of-date model was fine-tuned on Transformers 2.3.0.

This commit is contained in:
Parent: 6ef7658c0a
Commit: f745f61c99
````diff
@@ -1,71 +1,60 @@
 ## Albert xxlarge version 1 language model fine-tuned on SQuAD2.0
 
-### with the following results:
+### (updated 30Sept2020) with the following results:
 
 ```
-exact: 85.65653162637918
-f1: 89.260458954177
+exact: 86.11134506864315
+f1: 89.35371214945009
 total': 11873
-HasAns_exact': 82.6417004048583
-HasAns_f1': 89.8598902096736
+HasAns_exact': 83.56950067476383
+HasAns_f1': 90.06353312254078
 HasAns_total': 5928
-NoAns_exact': 88.66274179983179
-NoAns_f1': 88.66274179983179
+NoAns_exact': 88.64592094196804
+NoAns_f1': 88.64592094196804
 NoAns_total': 5945
-best_exact': 85.65653162637918
+best_exact': 86.11134506864315
 best_exact_thresh': 0.0
-best_f1': 89.2604589541768
+best_f1': 89.35371214944985
 best_f1_thresh': 0.0
 ```
 
 ### from script:
 
 ```
-python -m torch.distributed.launch --nproc_per_node=2 ${RUN_SQUAD_DIR}/run_squad.py \
-  --model_type albert \
-  --model_name_or_path albert-xxlarge-v1 \
-  --do_train \
-  --train_file ${SQUAD_DIR}/train-v2.0.json \
-  --predict_file ${SQUAD_DIR}/dev-v2.0.json \
-  --version_2_with_negative \
-  --num_train_epochs 3 \
-  --max_steps 8144 \
-  --warmup_steps 814 \
-  --do_lower_case \
-  --learning_rate 3e-5 \
-  --max_seq_length 512 \
-  --doc_stride 128 \
-  --save_steps 2000 \
-  --per_gpu_train_batch_size 1 \
-  --gradient_accumulation_steps 24 \
-  --output_dir ${MODEL_PATH}
-
-CUDA_VISIBLE_DEVICES=0 python ${RUN_SQUAD_DIR}/run_squad.py \
-  --model_type albert \
-  --model_name_or_path ${MODEL_PATH} \
-  --do_eval \
-  --train_file ${SQUAD_DIR}/train-v2.0.json \
-  --predict_file ${SQUAD_DIR}/dev-v2.0.json \
-  --version_2_with_negative \
-  --do_lower_case \
-  --max_seq_length 512 \
-  --per_gpu_eval_batch_size 48 \
-  --output_dir ${MODEL_PATH}
+python ${EXAMPLES}/run_squad.py \
+  --model_type albert \
+  --model_name_or_path albert-xxlarge-v1 \
+  --do_train \
+  --do_eval \
+  --train_file ${SQUAD}/train-v2.0.json \
+  --predict_file ${SQUAD}/dev-v2.0.json \
+  --version_2_with_negative \
+  --do_lower_case \
+  --num_train_epochs 3 \
+  --max_steps 8144 \
+  --warmup_steps 814 \
+  --learning_rate 3e-5 \
+  --max_seq_length 512 \
+  --doc_stride 128 \
+  --per_gpu_train_batch_size 6 \
+  --gradient_accumulation_steps 8 \
+  --per_gpu_eval_batch_size 48 \
+  --fp16 \
+  --fp16_opt_level O1 \
+  --threads 12 \
+  --logging_steps 50 \
+  --save_steps 3000 \
+  --overwrite_output_dir \
+  --output_dir ${MODEL_PATH}
 ```
 
-### using the following system & software:
+### using the following software & system:
 
 ```
-OS/Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
-GPU/CPU: 2 x NVIDIA 1080Ti / Intel i7-8700
-Transformers: 2.3.0
-PyTorch: 1.4.0
-TensorFlow: 2.1.0
-Python: 3.7.6
+Transformers: 3.1.0
+PyTorch: 1.6.0
+TensorFlow: 2.3.1
+Python: 3.8.1
+OS: Linux-5.4.0-48-generic-x86_64-with-glibc2.10
+CPU/GPU: Intel i9-9900K / NVIDIA Titan RTX 24GB
 ```
 
 ### Access this albert_xxlargev1_sqd2_512 fine-tuned model with:
 
 ```python
 tokenizer = AutoTokenizer.from_pretrained("ahotrod/albert_xxlargev1_squad2_512")
 model = AutoModelForQuestionAnswering.from_pretrained("ahotrod/albert_xxlargev1_squad2_512")
 ```
````
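For a quick end-to-end check of the loaded checkpoint, the tokenizer and model can be wrapped in Transformers' `question-answering` pipeline. A minimal sketch, assuming `transformers` 3.1.0 with PyTorch installed; the question and context strings are placeholders, not from the card:

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

# Load the fine-tuned checkpoint (same calls as in the card, plus the imports)
tokenizer = AutoTokenizer.from_pretrained("ahotrod/albert_xxlargev1_squad2_512")
model = AutoModelForQuestionAnswering.from_pretrained("ahotrod/albert_xxlargev1_squad2_512")

# Wrap both in an extractive question-answering pipeline
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

# Placeholder inputs, purely illustrative
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This ALBERT xxlarge v1 model was fine-tuned on SQuAD2.0 "
            "for extractive question answering.",
)
print(result)  # dict with 'score', 'start', 'end', and 'answer' keys
```

Since the model was trained with `--version_2_with_negative`, it can also declare a question unanswerable; the pipeline exposes this through its `handle_impossible_answer` argument.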
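The same behavior can be inspected below the pipeline level. The following sketch runs a raw forward pass and decodes the answer span with a naive argmax (ignoring the no-answer threshold that the SQuAD2.0 evaluation in `run_squad.py` applies); inputs are again placeholders, and Transformers 3.1.0's default tuple outputs are assumed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("ahotrod/albert_xxlargev1_squad2_512")
model = AutoModelForQuestionAnswering.from_pretrained("ahotrod/albert_xxlargev1_squad2_512")
model.eval()

question = "Which dataset was the model fine-tuned on?"   # placeholder
context = "This ALBERT model was fine-tuned on SQuAD2.0." # placeholder

# Encode question and context as a single sequence pair
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    # With Transformers 3.1.0 defaults, the model returns a tuple of
    # (start_logits, end_logits), each of shape (1, seq_len)
    start_logits, end_logits = model(**inputs)[:2]

# Naive decode: independent argmax over start and end positions
start = int(torch.argmax(start_logits))
end = int(torch.argmax(end_logits))
answer_ids = inputs["input_ids"][0][start : end + 1].tolist()
print(tokenizer.decode(answer_ids))
```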