DistributedBERT

DistributedBERT is based on the official TensorFlow BERT, with the following improvements:

  • Higher performance: supports distributed training through Horovod with nearly linear scaling, plus mixed-precision training (see the sketch after this list)
  • Higher accuracy: bug fixes and integration of more advanced techniques such as the LAMB optimizer
  • Easier to use: customized with more configuration options
  • More robust: recovers from preemption and worker failures
  • Easy to leverage: straightforward to apply to other BERT-like models such as RoBERTa, ALBERT, ...
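
The Horovod data parallelism referenced above follows the usual TF 1.x recipe. The sketch below is a minimal illustration of that general pattern, not a quote of this repository's code; the actual wiring lives in run_classifier.py and optimization.py, and the exact behavior of --adjust_lr is an assumption here.

# Minimal sketch of the standard Horovod data-parallel setup for TF 1.x.
# Illustrative only: names below are not taken from this repository.
import horovod.tensorflow as hvd
import tensorflow as tf

hvd.init()  # one process per GPU, launched via mpirun

# Pin each worker process to a single GPU.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Scale the base learning rate by the worker count (presumably what
# --adjust_lr arranges), then wrap the optimizer so gradients are
# averaged across workers via allreduce.
base_lr = 1e-5
optimizer = tf.train.AdamOptimizer(base_lr * hvd.size())
optimizer = hvd.DistributedOptimizer(optimizer)

# Broadcast rank 0's initial variables so all workers start identically.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]

Averaging gradients with allreduce keeps all workers synchronized at every step, which is what makes the nearly linear scaling claimed above possible.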

Requirements

  • NVIDIA CUDA 10.0+
  • Open MPI 3.1.0+
  • TensorFlow 1.13.1+
  • Horovod 0.16.0+

Example Training Command

export CODE_PATH=/your/path/DistributedBERT
export MODEL_PATH=/your/path/uncased_L-24_H-1024_A-16
export OUTPUT_PATH=/your/path/output
export TRAIN_DATA=/your/path/train
export TEST_DATA=/your/path/test

mpirun -np 4 -H localhost:4 -bind-to none -map-by slot \
    -mca pml ob1 -mca btl ^openib -mca btl_tcp_if_include eth0 \
    -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
    python $CODE_PATH/run_classifier.py \
    --data_dir $TRAIN_DATA \
    --test_data_dir $TEST_DATA \
    --output_dir $OUTPUT_PATH \
    --vocab_file $MODEL_PATH/vocab.txt \
    --bert_config_file $MODEL_PATH/bert_config.json \
    --init_checkpoint $MODEL_PATH/bert_model.ckpt \
    --do_train \
    --do_predict \
    --task_name=qk \
    --label_list=0,1,2,3 \
    --max_seq_length=32 \
    --train_batch_size=64 \
    --num_train_epochs=3 \
    --learning_rate=1e-5 \
    --adjust_lr \
    --xla \
    --reduce_log \
    --keep_checkpoint_max=1
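
XLA and mixed precision are typically toggled through the TF 1.x session config. The snippet below is a rough, assumed illustration of that mechanism; the repository's actual handling lives in gpu_environment.py and the --xla flag, and may differ in detail.

# Rough sketch of enabling XLA JIT and automatic mixed precision in TF 1.x.
# Assumption: this approximates what --xla and gpu_environment.py arrange;
# it is not copied from this repository.
import tensorflow as tf

config = tf.ConfigProto()
# Turn on XLA JIT compilation for the whole graph.
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
# Let the graph rewriter insert float16 casts and loss scaling.
config.graph_options.rewrite_options.auto_mixed_precision = 1

Note that the automatic mixed-precision graph rewrite requires TensorFlow 1.14 or newer, slightly above the 1.13.1 minimum listed in the requirements.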