Example for training with Marian

Files and scripts in this folder have been adapted from the Romanian-English example in https://github.com/rsennrich/wmt16-scripts. We also add the back-translated data from http://data.statmt.org/rsennrich/wmt16_backtranslations/ as described in http://www.aclweb.org/anthology/W16-2323. The resulting system should be competitive with, or even slightly better than, the system reported in the Edinburgh WMT2016 paper.

To execute the complete example, type:

./run-me.sh

which downloads the Romanian-English training files and preprocesses them (tokenization, truecasing, segmentation into subword units).
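
Under the hood the preprocessing amounts roughly to the sketch below (a simplified illustration only; the exact file names, truecasing model and BPE settings used by run-me.sh are assumptions, not copied from the script):

# Sketch of the preprocessing pipeline; paths and model names are illustrative.

# Tokenize the Romanian side with the Moses tokenizer
cat data/corpus.ro \
    | ../tools/moses-scripts/scripts/tokenizer/tokenizer.perl -l ro \
    > data/corpus.tok.ro

# Truecase with a truecasing model learned on the training data
../tools/moses-scripts/scripts/recaser/truecase.perl --model model/tc.ro \
    < data/corpus.tok.ro > data/corpus.tc.ro

# Segment into subword units with a previously learned BPE model
subword-nmt apply-bpe -c model/ro-en.bpe \
    < data/corpus.tc.ro > data/corpus.bpe.ro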

To use a GPU other than device 0, or multiple GPUs (here devices 0 1 2 3), pass the device IDs as arguments as shown below. Training time on a single NVIDIA GTX 1080 GPU should be roughly 24 hours.

./run-me.sh 0 1 2 3

Next, the script executes a training run with marian:

../../build/marian \
    --devices $GPUS \
    --type amun \
    --model model/model.npz \
    --train-sets data/corpus.bpe.ro data/corpus.bpe.en \
    --vocabs model/vocab.ro.yml model/vocab.en.yml \
    --dim-vocabs 66000 50000 \
    --mini-batch-fit -w 3000 \
    --layer-normalization --dropout-rnn 0.2 --dropout-src 0.1 --dropout-trg 0.1 \
    --early-stopping 5 \
    --valid-freq 10000 --save-freq 10000 --disp-freq 1000 \
    --valid-metrics cross-entropy translation \
    --valid-sets data/newsdev2016.bpe.ro data/newsdev2016.bpe.en \
    --valid-script-path ./scripts/validate.sh \
    --log model/train.log --valid-log model/valid.log \
    --overwrite --keep-best \
    --seed 1111 --exponential-smoothing \
    --normalize=1 --beam-size 12 --quiet-translation
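
Progress can be followed in the log files passed via --log and --valid-log; for example (these commands are only a convenience, not part of run-me.sh):

# Follow training statistics as they are written
tail -f model/train.log

# Show the validation entries recorded so far (grepping for the metric name
# is an assumption about the log format, but works with marian's valid.log)
grep translation model/valid.log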

After training (training stops automatically once cross-entropy on the validation set no longer improves), the model with the highest translation validation score is used to translate the WMT2016 dev and test sets with marian-decoder:

cat data/newsdev2016.bpe.ro \
    | ../../build/marian-decoder -c model/model.npz.best-translation.npz.decoder.yml -d $GPUS \
      -b 12 -n1 --mini-batch 64 --maxi-batch 10 --maxi-batch-sort src -w 2500 \
    | sed 's/\@\@ //g' \
    | ../tools/moses-scripts/scripts/recaser/detruecase.perl \
    | ../tools/moses-scripts/scripts/tokenizer/detokenizer.perl -l en \
    > data/newsdev2016.ro.output

After translation, BLEU scores for the dev and test sets are reported. Results should be roughly in the following range:

newsdev2016:
BLEU = 35.88, 67.4/42.3/28.8/20.2 (BP=1.000, ratio=1.012, hyp_len=51085, ref_len=50483)

newstest2016:
BLEU = 34.53, 66.0/40.7/27.5/19.2 (BP=1.000, ratio=1.015, hyp_len=49258, ref_len=48531)
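
To score an output file by hand, the same detokenized BLEU script used for validation can be applied directly (a sketch; it assumes data/newsdev2016.en is the plain, untokenized reference downloaded by run-me.sh):

# Compute detokenized BLEU for the dev set translation produced above
../tools/moses-scripts/scripts/generic/multi-bleu-detok.perl data/newsdev2016.en \
    < data/newsdev2016.ro.output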

Custom validation script

The validation script scripts/validate.sh is a quick example of how to write a custom validation script. Training pauses until the validation script finishes executing. A validation script should not print anything to stdout apart from the final score on the last line:

#!/bin/bash

# Undo the BPE segmentation, detruecase and detokenize the translations in the
# file passed as the first argument, score them against the reference with
# detokenized BLEU, and print only the BLEU value as the last line.
cat "$1" \
    | sed 's/\@\@ //g' \
    | ../tools/moses-scripts/scripts/recaser/detruecase.perl \
    | ../tools/moses-scripts/scripts/tokenizer/detokenizer.perl -l en \
    | ../tools/moses-scripts/scripts/generic/multi-bleu-detok.perl data/newsdev2016.en \
    | sed -r 's/BLEU = ([0-9.]+),.*/\1/'
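
During training, marian calls this script with the path of its translation of the validation set as the first argument ($1) and reads the last line printed to stdout as the score. The script can therefore be tried out by hand before starting a long run (the file name below is only a placeholder):

# Hypothetical example: score an already translated, BPE-segmented file
./scripts/validate.sh some-translated-output.bpe.en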