Marian

Fast Neural Machine Translation in C++

Marian (formerly known as AmuNMT) is an efficient Neural Machine Translation framework written in pure C++ with minimal dependencies. It has mainly been developed at the Adam Mickiewicz University in Poznań (AMU) and at the University of Edinburgh.

It is currently being deployed in multiple European projects and is the main translation and training engine behind the neural MT launch at the World Intellectual Property Organization.

Main features:

  • Fast multi-GPU training and translation
  • Compatible with Nematus and DL4MT
  • Efficient pure C++ implementation
  • Permissive open source license (MIT)
  • more details...

If you use this, please cite:

Marcin Junczys-Dowmunt, Tomasz Dwojak, Hieu Hoang (2016). Is Neural Machine Translation Ready for Deployment? A Case Study on 30 Translation Directions (https://arxiv.org/abs/1610.01108)

@InProceedings{junczys2016neural,
  title     = {Is Neural Machine Translation Ready for Deployment? A Case Study
               on 30 Translation Directions},
  author    = {Junczys-Dowmunt, Marcin and Dwojak, Tomasz and Hoang, Hieu},
  booktitle = {Proceedings of the 9th International Workshop on Spoken Language
               Translation (IWSLT)},
  year      = {2016},
  address   = {Seattle, WA},
  url       = {http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_4.pdf}
}

Website:

More information on https://marian-nmt.github.io

GPU version

Ubuntu 16.04 LTS (tested and recommended). The standard packages should work on 16.04. On newer versions of Ubuntu, e.g. 16.10, there may be problems due to incompatibilities between the default g++ compiler and CUDA.

  • CMake 3.5.1 (default)
  • GCC/G++ 5.4 (default)
  • Boost 1.58 (default)
  • CUDA 8.0

Ubuntu 14.04 LTS (tested). A CMake version newer than the default is required and can be installed from source.

  • CMake 3.5.1 (due to CUDA related bugs in earlier versions)
  • GCC/G++ 4.9
  • Boost 1.54
  • CUDA 7.5
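A quick way to verify the toolchain before building is a small version check. The `version_ge` helper below is a hypothetical snippet, not part of Marian; it relies on `sort -V` (GNU coreutils) to compare dotted version strings:

```shell
# version_ge A B: succeed if dotted version A >= version B.
# Hypothetical helper; relies on `sort -V` from GNU coreutils.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: is the installed CMake at least 3.5.1?
cmake_version=$(cmake --version 2>/dev/null | head -n1 | awk '{print $3}')
if version_ge "${cmake_version:-0}" 3.5.1; then
    echo "CMake ${cmake_version} is new enough"
else
    echo "CMake ${cmake_version:-(none)} is too old or missing; need >= 3.5.1"
fi
```

The same helper can be pointed at `g++ --version` or `nvcc --version` to check the other requirements.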

CPU version

The CPU-only version will be compiled automatically if CMake cannot detect CUDA. Only the translator will be compiled; the training framework is strictly GPU-based.

Tested on different machines and distributions:

  • CMake 3.5.1
  • The CPU version should be a lot more forgiving concerning GCC/G++ or Boost versions.

macOS

To build the CPU version on macOS, first install Homebrew and then run:

brew install cmake boost boost-python

Then, proceed to the next section.

Download and Compilation

Clone a fresh copy from github:

git clone https://github.com/amunmt/amunmt

The project is a standard CMake out-of-source build:

cd amunmt
mkdir build
cd build
cmake ..
make -j
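Note that `make -j` with no argument starts an unbounded number of parallel jobs, which can exhaust memory when compiling CUDA code. A common alternative is to cap it at the core count; a sketch, assuming `nproc` from GNU coreutils (on macOS, `sysctl -n hw.ncpu` instead):

```shell
# Pick a parallelism level equal to the number of available cores and
# print the corresponding make invocation (run it in the build directory).
jobs=$(nproc)
echo "make -j${jobs}"
```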

On the first run, this will also download Marian -- the training framework -- as a git submodule. If that download fails, the submodules can usually be fetched manually with `git submodule update --init --recursive`.

Compile Python bindings

To compile the Python bindings, first run make as described in the previous section, then:

make python

This will generate a libamunmt.dylib or libamunmt.so in your build/src/ directory, which can be imported from Python.

Running Marian

Training

Assuming corpus.en and corpus.ro are the corresponding, preprocessed halves of an English-Romanian parallel corpus, the following command will create a Nematus-compatible neural machine translation model.

./marian/build/marian \
  --train-sets corpus.en corpus.ro \
  --vocabs vocab.en vocab.ro \
  --model model.npz

See the documentation for a full list of command line options or the examples for a full example of how to train a WMT-grade model.

Translating

If a trained model is available, run:

./marian/build/amun -m model.npz -s vocab.en -t vocab.ro <<< "This is a test ."
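Because amun reads one source sentence per line on standard input and writes one translation per line on standard output, whole files can be translated with ordinary shell redirection. In the sketch below, `cat` stands in for the amun invocation so the pipeline can be tried without a trained model:

```shell
# Prepare a small input file (one preprocessed sentence per line).
printf 'This is a test .\nAnother sentence .\n' > input.en

# Replace `cat` with the real command, e.g.:
#   ./marian/build/amun -m model.npz -s vocab.en -t vocab.ro
cat < input.en > output.ro

wc -l < output.ro   # one output line per input line
```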

See the documentation for a full list of command line options or the examples for a full example of how to use Edinburgh's WMT models for translation.

Acknowledgements

The development of Marian received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreements 688139 (SUMMA; 2016-2019) and 645487 (Modern MT; 2015-2017), the Amazon Academic Research Awards program, and the World Intellectual Property Organization.