Microsoft Open Source Code of Conduct
This project has adopted the Microsoft Open Source Code of Conduct.
Resources:
- Microsoft Open Source Code of Conduct
- Microsoft Code of Conduct FAQ
- Contact opencode@microsoft.com with questions or concerns
Transformer-XH
Source code for the paper "Transformer-XH: Multi-evidence Reasoning with Extra Hop Attention" (ICLR 2020).
Dependency Installation
First, run `python setup.py develop` to install the required dependencies for Transformer-XH. Also install apex (used for fp16 and distributed training) by following the official documentation here.
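The install steps above can be sketched as the following command sequence (a minimal sketch; it assumes a working Python environment and that apex is built from the NVIDIA repository, which is only needed for fp16/distributed training):

```shell
# Install Transformer-XH dependencies in development mode
# (run from the repository root, where setup.py lives)
python setup.py develop

# Optional: apex for fp16 / distributed training.
# Follow the official apex documentation; a typical source install looks like:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir .
```

`python setup.py develop` installs the package in-place, so local edits to the source take effect without reinstalling.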
Data and Trained Model Download
You can run the download script with `bash download.sh`.
For HotpotQA, we provide the processed graph (Transformer-XH) input here; after downloading, unzip it and put it into the `./data` folder. We also provide a trained model here; unzip the downloaded model and put it into the `./experiments` folder.
Similarly, we provide the processed graph for FEVER here, and the trained model here.
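The expected directory layout after downloading and unzipping can be sketched as follows (folder names are from the instructions above; the exact archive contents are an assumption):

```shell
# Create the folders the README expects, then verify they exist.
# After running download.sh and unzipping:
#   ./data         <- processed Transformer-XH graph inputs (HotpotQA, FEVER)
#   ./experiments  <- trained model checkpoints
mkdir -p data experiments
ls -d data experiments
```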
Run Your Models
Use `hotpot_train.sh` for training on the HotpotQA task and `hotpot_eval.sh` for evaluation (fp16 training by default).
Similarly, use `fever_train.sh` for training on the FEVER task and `fever_eval.sh` for evaluation (fp16 training by default).
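A typical train-then-evaluate workflow with the scripts above (a sketch; it assumes the data and model folders from the download step are already in place):

```shell
# HotpotQA: train, then evaluate (fp16 by default)
bash hotpot_train.sh
bash hotpot_eval.sh

# FEVER: train, then evaluate (fp16 by default)
bash fever_train.sh
bash fever_eval.sh
```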
Contact
If you have questions, suggestions, or bug reports, please email chenz@cs.umd.edu and/or Chenyan.Xiong@microsoft.com.