Add paper link for UniSpeech-SAT

Sanyuan Chen (陈三元) 2021-10-13 12:53:29 +08:00 committed by GitHub
Parent 2ed5b61168
Commit 2247223e02
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
1 changed file with 2 additions and 2 deletions


@@ -1,6 +1,6 @@
 # UniSpeech-SAT
-This is the official implementation of paper "UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training"(```ICASSP 2022 Submission```). The implementation mainly based on [fairseq](https://github.com/pytorch/fairseq) codebase.
+This is the official implementation of paper "[UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training](https://arxiv.org/abs/2110.05752)"(```ICASSP 2022 Submission```). The implementation mainly based on [fairseq](https://github.com/pytorch/fairseq) codebase.
 ## Requirements and Installation
@@ -42,4 +42,4 @@ f = model.feature_extractor(wav_input_16khz)
 ![alt text](SUPERB_Results.png)
 ## Citation
-If you find our work useful, please cite [our paper]().
+If you find our work useful, please cite [our paper](https://arxiv.org/abs/2110.05752).