<img src="demo/logo.png" width="500" align="center"><br>
# Verseagility - NLP Toolkit
Verseagility is a Python-based toolkit to ramp up your custom natural language processing (NLP) task, allowing you to bring your own data and take models into production. It is a central component of the Microsoft Data Science Toolkit.
## Why Verseagility?
Building NLP solutions that cover components ranging from text classification and named entity recognition to answer suggestion requires significant testing and integration effort. We developed this toolkit to minimize the setup time of an end-to-end solution and to maximize the time available for use case-specific enhancements and adjustments. This way, first results become available quickly when you bring your own pre-labeled text documents, leaving more time for iterative improvements.
See the [documentation section](./docs/README.md) for detailed instructions on how to get started with the toolkit.
## Supported Use Cases
Verseagility is a modular toolkit that can be extended with further use cases as needed. The following use cases are already implemented and ready to use:
- Binary, multi-class & multi-label classification
- Named entity recognition
- Question answering
## Live Demo
The toolkit paves the way to build consumable REST APIs, for example in Azure Container Instances. These APIs may be used by the application of your choice: a website, a business process, or just for testing purposes.
A web-based live demo of models resulting from Verseagility is hosted at the Microsoft Technology Center Germany (MTC):
> [Verseagility Demo](https://verseagility.azurewebsites.net)
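
To give an idea of how such an endpoint might be consumed, here is a minimal sketch using Python `requests`; the scoring URL and the payload shape are hypothetical placeholders that depend on your own deployment and project configuration:

```python
import requests

# Hypothetical scoring endpoint, e.g. as returned by an ACI deployment.
scoring_uri = "http://<your-service>.azurecontainer.io/score"

# Hypothetical payload shape: a list of documents to be scored by the deployed model.
payload = [{
    "subject": "Laptop does not start",
    "body": "After the latest update the device no longer boots."
}]

response = requests.post(scoring_uri, json=payload, timeout=30)
response.raise_for_status()
print(response.json())  # e.g. predicted labels, entities or answer suggestions
```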
## Repository Structure
The repository is built in the following structure:
├── /assets            <- Version controlled assets, such as stopword lists. Max size per file:
│                         10 MB. Training data should be stored in a local data directory,
│                         outside of the repository or within .gitignore.
│
├── /demo              <- Demo environment that can be deployed as is, or customized.
│
├── /deploy            <- Scripts used for deploying training or test service (see the AML sketch below)
│   ├── training.py    <- Deploy your training to a remote compute instance, via AML
│   │
│   ├── hyperdrive.py  <- Deploy a hyperparameter sweep on a remote compute instance, via AML
│   │
│   └── service.py     <- Deploy a service (endpoint) to ACI or AKS, via AML
│
├── /docs              <- Detailed documentation.
│
├── /notebook          <- Jupyter notebooks. Naming convention is <[Task] - [Short Description]>,
│                         for example: 'Data - Exploration.ipynb'
│
├── /pipeline          <- Document processing pipeline components, including document cracker.
│
├── /project           <- Project configuration files, detailing the tasks to be completed.
│
├── /scraper           <- Website scraper used to fetch sample data.
│                         Can be reused for similarly structured forum websites.
│
├── /src               <- Source code for use in this project.
│   ├── infer.py       <- Inference file, for scoring the model
│   │
│   ├── data.py        <- Use case agnostic utils file, for data management incl. upload/download
│   │
│   └── helper.py      <- Use case agnostic utils file, with common functions incl. secret handling
│
├── /tests             <- Unit tests (using pytest)
│
├── README.md          <- The top-level README for developers using this project.
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment.
│                         Can be generated using `pip freeze > requirements.txt`
│
└── config.ini         <- Configuration and secrets used while developing locally.
                          Secrets in production should be stored in the Azure KeyVault.
--------
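The deployment scripts under `/deploy` hand training and serving off to Azure Machine Learning. The following is a minimal, illustrative sketch of submitting a remote training run with the `azureml-core` SDK, not the toolkit's actual `training.py`; the workspace configuration, compute target name and entry script are assumptions:

```python
from azureml.core import Workspace, Experiment, Environment, ScriptRunConfig

# Assumes a config.json downloaded from your Azure ML workspace is present locally.
ws = Workspace.from_config()

# Build the run environment from the repository's requirements file.
env = Environment.from_pip_requirements("verseagility-env", "requirements.txt")

# Hypothetical entry script and compute target names.
run_config = ScriptRunConfig(
    source_directory="src",
    script="train.py",            # placeholder: use your actual training entry point
    compute_target="gpu-cluster", # placeholder: an existing AML compute target
    environment=env,
)

run = Experiment(ws, "verseagility-training").submit(run_config)
run.wait_for_completion(show_output=True)
```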
## Acknowledgements
Verseagility is built in part using the following frameworks:
- [PyTorch](https://pytorch.org/)
- [Transformers](https://github.com/huggingface/pytorch-transformers) by Hugging Face
- [FARM](https://github.com/deepset-ai/FARM/) by deepset
- [spaCy](https://github.com/explosion/spaCy/) by Explosion AI
- [flairNLP](https://github.com/flairNLP/flair/) by Humboldt University of Berlin
- [gensim](https://radimrehurek.com/gensim/)
## Maintainers
- [Timm Walz](mailto:timm.walz@microsoft.com)
- [Christian Vorhemus](mailto:christian.vorhemus@microsoft.com)
- [Martin Kayser](https://www.linkedin.com/in/mkayser/)
## Current Updates
This section contains a list of possible new features and enhancements. Feel free to contribute.
### Classification
- [ ] integrate handling for long vs. short documents
- [ ] integrate explicit handling for unbalanced datasets
- [ ] ONNX support
### NER
- [ ] improve duplicate handling
### Question Answering
- [ ] apply advanced IR methods
### Summarization
- [ ] **(IP)** full test of integration
### Deployment
- [ ] deploy service to Azure Functions (without AzureML)
- [ ] setup GitHub actions
### Notebook Templates
- [ ] **(IP)** review model results (auto generate after each training step)
- [ ] review model bias (auto generate after each training step)
- [ ] **(IP)** available models benchmark (incl AutoML)
### Tests
- [ ] unit tests (pytest)
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.