Merge pull request #62 from motus/motus/dns3/download

Add new downloader scripts and update the README files
This commit is contained in:
Vishak Gopal 2021-11-30 17:24:26 -08:00 committed by GitHub
Parents: c526fbfa9f fb4b074a0e
Commit: bb9eda12c0
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
9 changed files: 1138 additions and 253 deletions

244
README-DNS3.md Normal file

@@ -0,0 +1,244 @@
# Deep Noise Suppression (DNS) Challenge 3 - INTERSPEECH 2021
**NOTE:** This README describes the **PAST** DNS Challenge!
The data for it is still available, and is described below. If you are interested in the latest DNS
Challenge, please refer to the main [README.md](README.md) file.
## In this repository
This repository contains the datasets and scripts required for INTERSPEECH 2021 DNS Challenge, AKA
DNS Challenge 3, or DNS3. For more details about the challenge, please see our
[paper](https://arxiv.org/pdf/2101.01902.pdf) and the challenge
[website](https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-interspeech-2021/).
For more details on the testing framework, please visit [P.835](https://github.com/microsoft/P.808).
## Details
* The **datasets** directory is a placeholder for the wideband datasets. That is, by default our
data downloader script will place the downloaded audio data here. After the download, this
directory will contain the clean speech, noise, and room impulse responses required for creating
the training data for the wideband scenario. The script will also download the test set that
participants can use during the development stages.
* The **datasets_fullband** directory is a placeholder for the fullband audio data. The downloader
script will place here the datasets that contain the clean speech and noise audio clips required
for creating the training data for the fullband scenario.
* The **NSNet2-baseline** directory contains the inference scripts and the ONNX model for the
baseline Speech Enhancement method for wideband.
* **download-dns-challenge-3.sh** - the script to download the data. By default, the data
will be placed into the `datasets/` and `datasets_fullband/` directories. Please take a look at the
script and uncomment the preferred download method. Unmodified, the script performs a dry
run and retrieves only the HTTP headers for each archive.
* **noisyspeech_synthesizer_singleprocess.py** - is used to synthesize noisy-clean speech pairs for
training purposes.
* **noisyspeech_synthesizer.cfg** - the configuration file used to synthesize the data. Users are
required to specify the various parameters accurately and provide the correct paths to the
datasets needed to synthesize noisy speech.
* **audiolib.py** - contains modules required to synthesize datasets.
* **utils.py** - contains some utility functions required to synthesize the data.
* **unit_tests_synthesizer.py** - contains the unit tests to ensure sanity of the data.
* **requirements.txt** - contains all the libraries required for synthesizing the data.
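
At its core, synthesizing a noisy-clean training pair means scaling a noise clip so that adding it to the clean clip produces a chosen signal-to-noise ratio. The sketch below illustrates that mixing step; the function name `mix_at_snr` and its details are hypothetical, not the repository's actual `audiolib.py` implementation.

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float):
    """Scale `noise` so that clean + scaled noise has the target SNR in dB.

    Hypothetical helper illustrating the kind of mixing the synthesizer
    performs; returns the noisy mixture and the scaled noise.
    """
    # Tile or trim the noise to match the clean clip's length
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[: len(clean)]

    rms_clean = np.sqrt(np.mean(clean ** 2))
    rms_noise = np.sqrt(np.mean(noise ** 2))
    # Gain such that 20*log10(rms_clean / (gain * rms_noise)) == snr_db
    gain = rms_clean / (rms_noise * 10 ** (snr_db / 20))
    scaled_noise = gain * noise
    return clean + scaled_noise, scaled_noise

# Example with white-noise stand-ins at 5 dB SNR
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
noise = rng.standard_normal(8000)
noisy, scaled_noise = mix_at_snr(clean, noise, snr_db=5.0)
```

The configuration file's SNR range (see **noisyspeech_synthesizer.cfg**) would drive the `snr_db` values drawn for each pair.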
## Datasets
The default directory structure and the sizes of the datasets available for DNS Challenge are:
```
datasets 229G
├── clean 204G
│   ├── emotional_speech 403M
│   ├── french_data 21G
│   ├── german_speech 66G
│   ├── italian_speech 14G
│   ├── mandarin_speech 21G
│   ├── read_speech 61G
│   ├── russian_speech 5.1G
│   ├── singing_voice 979M
│   └── spanish_speech 17G
├── dev_testset 211M
├── impulse_responses 4.3G
│   ├── SLR26 2.1G
│   └── SLR28 2.3G
└── noise 20G
```
And, for the fullband data,
```
datasets_fullband 600G
├── clean_fullband 542G
│   ├── VocalSet_48kHz_mono 974M
│   ├── emotional_speech 1.2G
│   ├── french_data 62G
│   ├── german_speech 194G
│   ├── italian_speech 42G
│   ├── read_speech 182G
│   ├── russian_speech 12G
│   └── spanish_speech 50G
├── dev_testset_fullband 630M
└── noise_fullband 58G
```
## Code prerequisites
- Python 3.6 and above
- Python libraries: soundfile, librosa
**NOTE:** git LFS is *no longer required* for DNS Challenge. Please use the
`download-dns-challenge-3.sh` script in this repo to download the data.
## Usage:
1. Install Python libraries
```bash
pip3 install soundfile librosa
```
2. Clone the repository.
```bash
git clone https://github.com/microsoft/DNS-Challenge
```
3. Edit **noisyspeech_synthesizer.cfg** to specify the required parameters described in the file and
include the paths to the clean speech, noise, and impulse response CSV files. Also, specify
the paths to the destination directories and for storing the logs.
4. Create dataset
```bash
python3 noisyspeech_synthesizer_singleprocess.py
```
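
The configuration edited in step 3 is a standard INI-style file, so it can be read with Python's built-in `configparser`. The section and key names below are illustrative placeholders only; consult the actual **noisyspeech_synthesizer.cfg** for the real parameter names.

```python
import configparser

# Hypothetical excerpt; the real noisyspeech_synthesizer.cfg defines its own keys.
cfg_text = """
[noisy_speech]
sampling_rate: 16000
total_hours: 100
snr_lower: 0
snr_upper: 40
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)

params = cfg["noisy_speech"]
sampling_rate = params.getint("sampling_rate")          # typed accessors
snr_range = (params.getfloat("snr_lower"), params.getfloat("snr_upper"))
```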
## Citation:
If you use this dataset in a publication, please cite the following paper:<br />
```BibTex
@inproceedings{reddy2021interspeech,
title={INTERSPEECH 2021 Deep Noise Suppression Challenge},
author={Reddy, Chandan KA and Dubey, Harishchandra and Koishida, Kazuhito and Nair, Arun and Gopal, Vishak and Cutler, Ross and Braun, Sebastian and Gamper, Hannes and Aichner, Robert and Srinivasan, Sriram},
booktitle={INTERSPEECH},
year={2021}
}
```
The baseline NSNet noise suppression:<br />
```BibTex
@inproceedings{9054254,
author={Y. {Xia} and S. {Braun} and C. K. A. {Reddy} and H. {Dubey} and R. {Cutler} and I. {Tashev}},
booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP)},
title={Weighted Speech Distortion Losses for Neural-Network-Based Real-Time Speech Enhancement},
year={2020}, volume={}, number={}, pages={871-875},}
```
```BibTex
@misc{braun2020data,
title={Data augmentation and loss normalization for deep noise suppression},
author={Sebastian Braun and Ivan Tashev},
year={2020},
eprint={2008.06412},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
The P.835 test framework:<br />
```BibTex
@inproceedings{naderi2021crowdsourcing,
title={Subjective Evaluation of Noise Suppression Algorithms in Crowdsourcing},
author={Naderi, Babak and Cutler, Ross},
booktitle={INTERSPEECH},
year={2021}
}
```
DNSMOS API: <br />
```BibTex
@inproceedings{reddy2020dnsmos,
title={DNSMOS: A Non-Intrusive Perceptual Objective Speech Quality metric to evaluate Noise Suppressors},
author={Reddy, Chandan KA and Gopal, Vishak and Cutler, Ross},
booktitle={ICASSP},
year={2020}
}
```
# Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a
CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
# Legal Notices
Microsoft and any contributors grant you a license to the Microsoft documentation and other content
in this repository under the [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode),
see the [LICENSE](LICENSE) file, and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT), see the
[LICENSE-CODE](LICENSE-CODE) file.
Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the
documentation may be either trademarks or registered trademarks of Microsoft in the United States
and/or other countries. The licenses for this project do not grant you rights to use any Microsoft
names, logos, or trademarks. Microsoft's general trademark guidelines can be found at
http://go.microsoft.com/fwlink/?LinkID=254653.
Privacy information can be found at https://privacy.microsoft.com/en-us/
Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents,
or trademarks, whether by implication, estoppel or otherwise.
## Dataset licenses
MICROSOFT PROVIDES THE DATASETS ON AN "AS IS" BASIS. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, GUARANTEES OR CONDITIONS WITH RESPECT TO YOUR USE OF THE DATASETS. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAW, MICROSOFT DISCLAIMS ALL LIABILITY FOR ANY DAMAGES OR LOSSES, INCLUDING DIRECT, CONSEQUENTIAL, SPECIAL, INDIRECT, INCIDENTAL OR PUNITIVE, RESULTING FROM YOUR USE OF THE DATASETS.
The datasets are provided under the original terms that Microsoft received such datasets. See below for more information about each dataset.
The datasets used in this project are licensed as follows:
1. Clean speech:
* https://librivox.org/; License: https://librivox.org/pages/public-domain/
* PTDB-TUG: Pitch Tracking Database from Graz University of Technology https://www.spsc.tugraz.at/databases-and-tools/ptdb-tug-pitch-tracking-database-from-graz-university-of-technology.html; License: http://opendatacommons.org/licenses/odbl/1.0/
* Edinburgh 56 speaker dataset: https://datashare.is.ed.ac.uk/handle/10283/2791; License: https://datashare.is.ed.ac.uk/bitstream/handle/10283/2791/license_text?sequence=11&isAllowed=y
* VocalSet: A Singing Voice Dataset https://zenodo.org/record/1193957#.X1hkxYtlCHs; License: Creative Commons Attribution 4.0 International
* Emotion data corpus: CREMA-D (Crowd-sourced Emotional Multimodal Actors Dataset)
https://github.com/CheyneyComputerScience/CREMA-D; License: http://opendatacommons.org/licenses/dbcl/1.0/
* The VoxCeleb2 Dataset http://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html; License: http://www.robots.ox.ac.uk/~vgg/data/voxceleb/
The VoxCeleb dataset is available to download for commercial/research purposes under a Creative Commons Attribution 4.0 International License. The copyright remains with the original owners of the video. A complete version of the license can be found here.
* VCTK Dataset: https://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html; License: This corpus is licensed under Open Data Commons Attribution License (ODC-By) v1.0.
http://opendatacommons.org/licenses/by/1.0/
2. Noise:
* Audioset: https://research.google.com/audioset/index.html; License: https://creativecommons.org/licenses/by/4.0/
* Freesound: https://freesound.org/ Only files with CC0 licenses were selected; License: https://creativecommons.org/publicdomain/zero/1.0/
* Demand: https://zenodo.org/record/1227121#.XRKKxYhKiUk; License: https://creativecommons.org/licenses/by-sa/3.0/deed.en_CA
3. RIR datasets: OpenSLR26 and OpenSLR28:
* http://www.openslr.org/26/
* http://www.openslr.org/28/
* License: Apache 2.0
## Code license
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

457
README.md

@@ -1,233 +1,224 @@
# Deep Noise Suppression (DNS) Challenge - INTERSPEECH 2021
This repository contains the datasets and scripts required for the DNS challenge. For more details
about the challenge, please see our [paper](https://arxiv.org/pdf/2101.01902.pdf) and the challenge
[website](https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-interspeech-2021/).
For more details on the testing framework, please visit [P.835](https://github.com/microsoft/P.808).
## Repo details:
* The **datasets** directory is a placeholder for the wideband datasets. That is, by default our
data downloader script will place the downloaded audio data here. After the download, this
directory will contain the clean speech, noise, and room impulse responses required for creating
the training data for the wideband scenario. The script will also download the test set that
participants can use during the development stages.
* The **datasets_fullband** directory is a placeholder for the fullband audio data. The downloader
script will place here the datasets that contain the clean speech, noise, and room impulse
responses required for creating the training data for the fullband scenario.
* The **NSNet2-baseline** directory contains the inference scripts and the ONNX model for the
baseline Speech Enhancement method for wideband.
* **dns_challenge_data_downloader.py** - this is the script to download the data. By default, the
data will be placed into `datasets/` and `datasets_fullband/` directories. Please send us an email
requesting the SAS_URL to be used in the script.
* **noisyspeech_synthesizer_singleprocess.py** - is used to synthesize noisy-clean speech pairs for
training purposes.
* **noisyspeech_synthesizer.cfg** - the configuration file used to synthesize the data. Users are
required to specify the various parameters accurately and provide the correct paths to the
datasets needed to synthesize noisy speech.
* **audiolib.py** - contains modules required to synthesize datasets.
* **utils.py** - contains some utility functions required to synthesize the data.
* **unit_tests_synthesizer.py** - contains the unit tests to ensure sanity of the data.
* **requirements.txt** - contains all the libraries required for synthesizing the data.
## Datasets
The default directory structure and the sizes of the datasets available for DNS Challenge are:
```
datasets 229G
├── clean 204G
│   ├── emotional_speech 403M
│   ├── french_data 21G
│   ├── german_speech 66G
│   ├── italian_speech 14G
│   ├── mandarin_speech 21G
│   ├── read_speech 61G
│   ├── russian_speech 5.1G
│   ├── singing_voice 979M
│   └── spanish_speech 17G
├── dev_testset 211M
├── impulse_responses 4.3G
│   ├── SLR26 2.1G
│   └── SLR28 2.3G
└── noise 20G
```
And, for the fullband data,
```
datasets_fullband 600G
├── clean_fullband 542G
│   ├── VocalSet_48kHz_mono 974M
│   ├── emotional_speech 1.2G
│   ├── french_data 62G
│   ├── german_speech 194G
│   ├── italian_speech 42G
│   ├── read_speech 182G
│   ├── russian_speech 12G
│   └── spanish_speech 50G
├── dev_testset_fullband 630M
└── noise_fullband 58G
```
## Code prerequisites
- Python 3.6 and above
- Python libraries: soundfile, librosa
**NOTE:** git LFS is *no longer required* for DNS Challenge. Please use the
`dns_challenge_data_downloader.py` script in this repo to download the data.
## Usage:
1. Install Python libraries
```bash
pip3 install soundfile librosa
```
2. Clone the repository.
```bash
git clone https://github.com/microsoft/DNS-Challenge
```
3. Edit **noisyspeech_synthesizer.cfg** to specify the required parameters described in the file and
include the paths to the clean speech, noise, and impulse response CSV files. Also, specify
the paths to the destination directories and for storing the logs.
4. Create dataset
```bash
python3 noisyspeech_synthesizer_singleprocess.py
```
## Citation:
If you use this dataset in a publication, please cite the following paper:<br />
```BibTex
@inproceedings{reddy2021interspeech,
title={INTERSPEECH 2021 Deep Noise Suppression Challenge},
author={Reddy, Chandan KA and Dubey, Harishchandra and Koishida, Kazuhito and Nair, Arun and Gopal, Vishak and Cutler, Ross and Braun, Sebastian and Gamper, Hannes and Aichner, Robert and Srinivasan, Sriram},
booktitle={INTERSPEECH},
year={2021}
}
```
The baseline NSNet noise suppression:<br />
```BibTex
@inproceedings{9054254,
author={Y. {Xia} and S. {Braun} and C. K. A. {Reddy} and H. {Dubey} and R. {Cutler} and I. {Tashev}},
booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP)},
title={Weighted Speech Distortion Losses for Neural-Network-Based Real-Time Speech Enhancement},
year={2020}, volume={}, number={}, pages={871-875},}
```
```BibTex
@misc{braun2020data,
title={Data augmentation and loss normalization for deep noise suppression},
author={Sebastian Braun and Ivan Tashev},
year={2020},
eprint={2008.06412},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
The P.835 test framework:<br />
```BibTex
@inproceedings{naderi2021crowdsourcing,
title={Subjective Evaluation of Noise Suppression Algorithms in Crowdsourcing},
author={Naderi, Babak and Cutler, Ross},
booktitle={INTERSPEECH},
year={2021}
}
```
DNSMOS API: <br />
```BibTex
@inproceedings{reddy2020dnsmos,
title={DNSMOS: A Non-Intrusive Perceptual Objective Speech Quality metric to evaluate Noise Suppressors},
author={Reddy, Chandan KA and Gopal, Vishak and Cutler, Ross},
booktitle={ICASSP},
year={2020}
}
```
# Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a
CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
# Legal Notices
Microsoft and any contributors grant you a license to the Microsoft documentation and other content
in this repository under the [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode),
see the [LICENSE](LICENSE) file, and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT), see the
[LICENSE-CODE](LICENSE-CODE) file.
Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the
documentation may be either trademarks or registered trademarks of Microsoft in the United States
and/or other countries. The licenses for this project do not grant you rights to use any Microsoft
names, logos, or trademarks. Microsoft's general trademark guidelines can be found at
http://go.microsoft.com/fwlink/?LinkID=254653.
Privacy information can be found at https://privacy.microsoft.com/en-us/
Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents,
or trademarks, whether by implication, estoppel or otherwise.
## Dataset licenses
MICROSOFT PROVIDES THE DATASETS ON AN "AS IS" BASIS. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, GUARANTEES OR CONDITIONS WITH RESPECT TO YOUR USE OF THE DATASETS. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAW, MICROSOFT DISCLAIMS ALL LIABILITY FOR ANY DAMAGES OR LOSSES, INCLUDING DIRECT, CONSEQUENTIAL, SPECIAL, INDIRECT, INCIDENTAL OR PUNITIVE, RESULTING FROM YOUR USE OF THE DATASETS.
The datasets are provided under the original terms that Microsoft received such datasets. See below for more information about each dataset.
The datasets used in this project are licensed as follows:
1. Clean speech:
* https://librivox.org/; License: https://librivox.org/pages/public-domain/
* PTDB-TUG: Pitch Tracking Database from Graz University of Technology https://www.spsc.tugraz.at/databases-and-tools/ptdb-tug-pitch-tracking-database-from-graz-university-of-technology.html; License: http://opendatacommons.org/licenses/odbl/1.0/
* Edinburgh 56 speaker dataset: https://datashare.is.ed.ac.uk/handle/10283/2791; License: https://datashare.is.ed.ac.uk/bitstream/handle/10283/2791/license_text?sequence=11&isAllowed=y
* VocalSet: A Singing Voice Dataset https://zenodo.org/record/1193957#.X1hkxYtlCHs; License: Creative Commons Attribution 4.0 International
* Emotion data corpus: CREMA-D (Crowd-sourced Emotional Multimodal Actors Dataset)
https://github.com/CheyneyComputerScience/CREMA-D; License: http://opendatacommons.org/licenses/dbcl/1.0/
* The VoxCeleb2 Dataset http://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html; License: http://www.robots.ox.ac.uk/~vgg/data/voxceleb/
The VoxCeleb dataset is available to download for commercial/research purposes under a Creative Commons Attribution 4.0 International License. The copyright remains with the original owners of the video. A complete version of the license can be found here.
* VCTK Dataset: https://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html; License: This corpus is licensed under Open Data Commons Attribution License (ODC-By) v1.0.
http://opendatacommons.org/licenses/by/1.0/
2. Noise:
* Audioset: https://research.google.com/audioset/index.html; License: https://creativecommons.org/licenses/by/4.0/
* Freesound: https://freesound.org/ Only files with CC0 licenses were selected; License: https://creativecommons.org/publicdomain/zero/1.0/
* Demand: https://zenodo.org/record/1227121#.XRKKxYhKiUk; License: https://creativecommons.org/licenses/by-sa/3.0/deed.en_CA
3. RIR datasets: OpenSLR26 and OpenSLR28:
* http://www.openslr.org/26/
* http://www.openslr.org/28/
* License: Apache 2.0
## Code license
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# Deep Noise Suppression (DNS) Challenge 4 - ICASSP 2022
## In this repository
This repository contains the datasets and scripts required for ICASSP 2022 DNS Challenge, AKA
DNS Challenge 4, or DNS4. For more details about the challenge, please see our
[paper](https://arxiv.org/pdf/2101.01902.pdf) and the challenge
[website](https://www.microsoft.com/en-us/research/academic-program/deep-noise-suppression-challenge-icassp-2022/).
For more details on the testing framework, please visit [P.835](https://github.com/microsoft/P.808).
## Details
* The **datasets** and **datasets_fullband** folders are placeholders for the datasets. That is, by
default our data downloader script will place the downloaded audio data there. After the download,
these directories will contain the clean speech, noise, and room impulse responses required for
creating the training data for the wideband scenario. The script will also download the test set
that participants can use during the development stages.
* The **NSNet2-baseline** directory contains the inference scripts and the ONNX model for the
baseline Speech Enhancement method for wideband.
* **download-dns-challenge-4.sh** - the script to download the data. By default, the data
will be placed into the `./datasets/` and `./datasets_fullband/` directories. Please take a look at
the script and uncomment the preferred download method. Unmodified, the script performs a dry run
and retrieves only the HTTP headers for each archive.
* **noisyspeech_synthesizer_singleprocess.py** - is used to synthesize noisy-clean speech pairs for
training purposes.
* **noisyspeech_synthesizer.cfg** - the configuration file used to synthesize the data. Users are
required to specify the various parameters accurately and provide the correct paths to the
datasets needed to synthesize noisy speech.
* **audiolib.py** - contains modules required to synthesize datasets.
* **utils.py** - contains some utility functions required to synthesize the data.
* **unit_tests_synthesizer.py** - contains the unit tests to ensure sanity of the data.
* **requirements.txt** - contains all the libraries required for synthesizing the data.
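
The room impulse responses mentioned above are used to simulate reverberant speech: conceptually, a clean clip is convolved with an RIR before noise is added. The sketch below is illustrative only and is not the repository's actual code; the function name `add_reverb` is hypothetical.

```python
import numpy as np

def add_reverb(clean: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve a clean clip with a room impulse response (RIR),
    truncating the result back to the clip's length. Illustrative only."""
    return np.convolve(clean, rir)[: len(clean)]

# Sanity check: a unit-impulse RIR leaves the signal unchanged
clean = np.linspace(-1.0, 1.0, 1000)
identity_rir = np.array([1.0])
out = add_reverb(clean, identity_rir)
```

In practice the RIRs come from the SLR26/SLR28 archives under `impulse_responses`, loaded with a library such as `soundfile`.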
## Datasets
The default directory structure and the sizes of the datasets available for DNS Challenge are:
```
. 855G
+-- datasets 4.3G
| \-- impulse_responses 4.3G
\-- datasets_fullband 850G
+-- emotional_speech 2.3G
+-- french_speech 63G
+-- german_speech 263G
+-- italian_speech 39G
+-- read_speech 300G
+-- russian_speech 12G
+-- spanish_speech 66G
+-- vctk_wav48_silence_trimmed 39G
+-- VocalSet_48kHz_mono 1G
+-- dev_testset 3G
| +-- enrollment_data 644M
| \-- noisy_testclips 2.4G
\-- noise_fullband 60G
```
In all, you will need at least 855GB to store the UNPACKED data. Archived, the same data takes about
510GB total.
## Code prerequisites
- Python 3.6 and above
- Python libraries: soundfile, librosa
**NOTE:** git LFS is *no longer required* for DNS Challenge. Please use the
`download-dns-challenge-4.sh` script in this repo to download the data.
## Usage:
1. Install Python libraries
```bash
pip3 install soundfile librosa
```
2. Clone the repository.
```bash
git clone https://github.com/microsoft/DNS-Challenge
```
3. Edit **noisyspeech_synthesizer.cfg** to specify the required parameters described in the file and
include the paths to the clean speech, noise, and impulse response CSV files. Also, specify
the paths to the destination directories and for storing the logs.
4. Create dataset
```bash
python3 noisyspeech_synthesizer_singleprocess.py
```
## Citation:
If you use this dataset in a publication, please cite the following paper:<br />
```BibTex
@inproceedings{reddy2021interspeech,
title={INTERSPEECH 2021 Deep Noise Suppression Challenge},
author={Reddy, Chandan KA and Dubey, Harishchandra and Koishida, Kazuhito and Nair, Arun and Gopal, Vishak and Cutler, Ross and Braun, Sebastian and Gamper, Hannes and Aichner, Robert and Srinivasan, Sriram},
booktitle={INTERSPEECH},
year={2021}
}
```
The baseline NSNet noise suppression:<br />
```BibTex
@inproceedings{9054254,
author={Y. {Xia} and S. {Braun} and C. K. A. {Reddy} and H. {Dubey} and R. {Cutler} and I. {Tashev}},
booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP)},
title={Weighted Speech Distortion Losses for Neural-Network-Based Real-Time Speech Enhancement},
year={2020}, volume={}, number={}, pages={871-875},}
```
```BibTex
@misc{braun2020data,
title={Data augmentation and loss normalization for deep noise suppression},
author={Sebastian Braun and Ivan Tashev},
year={2020},
eprint={2008.06412},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
The P.835 test framework:<br />
```BibTex
@inproceedings{naderi2021crowdsourcing,
title={Subjective Evaluation of Noise Suppression Algorithms in Crowdsourcing},
author={Naderi, Babak and Cutler, Ross},
booktitle={INTERSPEECH},
year={2021}
}
```
DNSMOS API: <br />
```BibTex
@inproceedings{reddy2020dnsmos,
title={DNSMOS: A Non-Intrusive Perceptual Objective Speech Quality metric to evaluate Noise Suppressors},
author={Reddy, Chandan KA and Gopal, Vishak and Cutler, Ross},
booktitle={ICASSP},
year={2020}
}
```
# Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a
CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
# Legal Notices
Microsoft and any contributors grant you a license to the Microsoft documentation and other content
in this repository under the [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode),
see the [LICENSE](LICENSE) file, and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT), see the
[LICENSE-CODE](LICENSE-CODE) file.
Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the
documentation may be either trademarks or registered trademarks of Microsoft in the United States
and/or other countries. The licenses for this project do not grant you rights to use any Microsoft
names, logos, or trademarks. Microsoft's general trademark guidelines can be found at
http://go.microsoft.com/fwlink/?LinkID=254653.
Privacy information can be found at https://privacy.microsoft.com/en-us/
Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents,
or trademarks, whether by implication, estoppel or otherwise.
## Dataset licenses
MICROSOFT PROVIDES THE DATASETS ON AN "AS IS" BASIS. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, GUARANTEES OR CONDITIONS WITH RESPECT TO YOUR USE OF THE DATASETS. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAW, MICROSOFT DISCLAIMS ALL LIABILITY FOR ANY DAMAGES OR LOSSES, INCLUDING DIRECT, CONSEQUENTIAL, SPECIAL, INDIRECT, INCIDENTAL OR PUNITIVE, RESULTING FROM YOUR USE OF THE DATASETS.
The datasets are provided under the original terms that Microsoft received such datasets. See below for more information about each dataset.
The datasets used in this project are licensed as follows:
1. Clean speech:
* https://librivox.org/; License: https://librivox.org/pages/public-domain/
* PTDB-TUG: Pitch Tracking Database from Graz University of Technology https://www.spsc.tugraz.at/databases-and-tools/ptdb-tug-pitch-tracking-database-from-graz-university-of-technology.html; License: http://opendatacommons.org/licenses/odbl/1.0/
* Edinburgh 56 speaker dataset: https://datashare.is.ed.ac.uk/handle/10283/2791; License: https://datashare.is.ed.ac.uk/bitstream/handle/10283/2791/license_text?sequence=11&isAllowed=y
* VocalSet: A Singing Voice Dataset https://zenodo.org/record/1193957#.X1hkxYtlCHs; License: Creative Commons Attribution 4.0 International
* Emotion data corpus: CREMA-D (Crowd-sourced Emotional Multimodal Actors Dataset)
https://github.com/CheyneyComputerScience/CREMA-D; License: http://opendatacommons.org/licenses/dbcl/1.0/
* The VoxCeleb2 Dataset http://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html; License: http://www.robots.ox.ac.uk/~vgg/data/voxceleb/
The VoxCeleb dataset is available to download for commercial/research purposes under a Creative Commons Attribution 4.0 International License. The copyright remains with the original owners of the video. A complete version of the license can be found at the VoxCeleb website linked above.
* VCTK Dataset: https://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html; License: This corpus is licensed under Open Data Commons Attribution License (ODC-By) v1.0.
http://opendatacommons.org/licenses/by/1.0/
2. Noise:
* Audioset: https://research.google.com/audioset/index.html; License: https://creativecommons.org/licenses/by/4.0/
* Freesound: https://freesound.org/ Only files with CC0 licenses were selected; License: https://creativecommons.org/publicdomain/zero/1.0/
* Demand: https://zenodo.org/record/1227121#.XRKKxYhKiUk; License: https://creativecommons.org/licenses/by-sa/3.0/deed.en_CA
3. RIR datasets: OpenSLR26 and OpenSLR28:
* http://www.openslr.org/26/
* http://www.openslr.org/28/
* License: Apache 2.0
## Code license
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

datasets/.gitignore

@@ -1,4 +1,6 @@
/clean/
/dev_testset/
/impulse_responses/
/noise/
*.tar.bz2
*.zip


@@ -1,4 +1,11 @@
# Wideband Datasets
# Deep Noise Suppression (DNS) Challenge 3 - INTERSPEECH 2021
**NOTE:** This README describes the **PAST** DNS Challenge!
The data for it is still available, and is described below. If you are interested in the latest DNS Challenge, please refer to the main [README.md](/README.md) file.
## Wideband Datasets
This directory is the default location where the **wideband** datasets will be downloaded to and
stored. After the download, you will see the following directory structure:
```
@@ -22,7 +29,7 @@ datasets 229G
## Downloading the data
Datasets will be downloaded when you run the `dns_challenge_downloader.py` script. Note that the
Datasets will be downloaded when you run the `download-dns-challenge-3.sh` script. Note that the
data is no longer part of this git repository and git LFS is not required.
## Datasets for training

datasets_fullband/.gitignore

@@ -1,3 +1,5 @@
/clean_fullband/
/dev_testset_fullband/
/noise_fullband/
*.tar.bz2
*.zip


@@ -1,3 +1,10 @@
# Deep Noise Suppression (DNS) Challenge 3 - INTERSPEECH 2021
**NOTE:** This README describes the **PAST** DNS Challenge!
The data for it is still available, and is described below. If you are interested in the latest DNS Challenge, please refer to the main [README.md](/README.md) file.
# Fullband datasets
This directory is the default location where the **fullband** datasets will be downloaded to and
stored. After the download, you will see the following directory structure:
@@ -17,7 +24,7 @@ datasets_fullband 600G
```
## Downloading the data
Datasets will be downloaded when you run the `dns_challenge_downloader.py` script. Note that the
Datasets will be downloaded when you run the `download-dns-challenge-3.sh` script. Note that the
data is no longer part of this git repository and git LFS is not required.
## Datasets for training
@@ -64,7 +71,7 @@ The branch and the file do not exist in git history. Is that the right URL?
* The chosen noise types are more relevant to VOIP applications.
### Room Impulse Responses (RIR)
Please use the impulse responses in the wideband dataset, as described in the [datasets/README.md](/datasets/README.md) file.
Please use the impulse responses in the wideband dataset, as described in the [datasets/README-DNS3.md](/datasets/README-DNS3.md) file.
### Acoustic Parameters
Acoustic parameters' data is available in git at


@@ -1,14 +0,0 @@
import subprocess
azcopyexe = r"<Insert your path to azcopy.exe>"
#azcopyexe = r"C:\Users\chkarada\Downloads\Softwares\azcopy_windows_amd64_10.8.0\azcopy.exe"
# For wideband data - Uncomment the below line and comment line 10 if you want wideband data
SAS_URL = "<Send an email to dns_challenge@microsoft.com for the SAS URL >"
# Insert the path to your local directory where you want to save the data
local_dir = r"<Insert your path to the destination directory>"
#local_dir = r"C:\Downloads\"
command_azcopy = "{0} cp {1} {2} --recursive".format(azcopyexe, SAS_URL, local_dir)
subprocess.call(command_azcopy)

download-dns-challenge-3.sh Normal file

@@ -0,0 +1,149 @@
#!/usr/bin/bash
# ***** Datasets for INTERSPEECH 2021 DNS Challenge 3 *****
# NOTE: This data is for the *PAST* challenge!
# Current DNS Challenge is ICASSP 2022 DNS Challenge 4, which
# has its own download script, `download-dns-challenge-4.sh`
# NOTE: Before downloading, make sure you have enough space
# on your local storage!
# In all, you will need at least 830GB to store UNPACKED data.
# Archived, the same data takes 512GB total.
# Please comment out the files you don't need before launching
# the script.
# NOTE: By default, the script *DOES NOT* DOWNLOAD ANY FILES!
# Please scroll down and edit this script to pick the
# downloading method that works best for you.
# -------------------------------------------------------------
# The directory structure of the unpacked data is:
# *** Wideband data: ***
# datasets 229G
# +-- clean 204G
# | +-- emotional_speech 403M
# | +-- french_data 21G
# | +-- german_speech 66G
# | +-- italian_speech 14G
# | +-- mandarin_speech 21G
# | +-- read_speech 61G
# | +-- russian_speech 5.1G
# | +-- singing_voice 979M
# | \-- spanish_speech 17G
# +-- dev_testset 211M
# +-- impulse_responses 4.3G
# | +-- SLR26 2.1G
# | \-- SLR28 2.3G
# \-- noise 20G
# *** Fullband data: ***
# datasets_fullband 600G
# +-- clean_fullband 542G
# | +-- VocalSet_48kHz_mono 974M
# | +-- emotional_speech 1.2G
# | +-- french_data 62G
# | +-- german_speech 194G
# | +-- italian_speech 42G
# | +-- read_speech 182G
# | +-- russian_speech 12G
# | \-- spanish_speech 50G
# +-- dev_testset_fullband 630M
# \-- noise_fullband 58G
BLOB_NAMES=(
# DEMAND dataset
DEMAND.tar.bz2
# Wideband clean speech
datasets/datasets.clean.read_speech.tar.bz2
# Wideband emotional speech
datasets/datasets.clean.emotional_speech.tar.bz2
# Wideband non-English clean speech
datasets/datasets.clean.french_data.tar.bz2
datasets/datasets.clean.german_speech.tar.bz2
datasets/datasets.clean.italian_speech.tar.bz2
datasets/datasets.clean.mandarin_speech.tar.bz2
datasets/datasets.clean.russian_speech.tar.bz2
datasets/datasets.clean.singing_voice.tar.bz2
datasets/datasets.clean.spanish_speech.tar.bz2
# Wideband noise, IR, and test data
datasets/datasets.impulse_responses.tar.bz2
datasets/datasets.noise.tar.bz2
datasets/datasets.dev_testset.tar.bz2
# ---------------------------------------------------------
# Fullband clean speech
datasets_fullband/datasets_fullband.clean_fullband.read_speech.0.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.read_speech.1.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.read_speech.2.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.read_speech.3.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.VocalSet_48kHz_mono.tar.bz2
# Fullband emotional speech
datasets_fullband/datasets_fullband.clean_fullband.emotional_speech.tar.bz2
# Fullband non-English clean speech
datasets_fullband/datasets_fullband.clean_fullband.french_data.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.0.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.1.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.2.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.3.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.4.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.5.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.6.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.7.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.8.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.9.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.10.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.11.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.12.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.13.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.14.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.15.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.16.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.17.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.18.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.german_speech.19.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.italian_speech.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.russian_speech.tar.bz2
datasets_fullband/datasets_fullband.clean_fullband.spanish_speech.tar.bz2
# Fullband noise and test data
datasets_fullband/datasets_fullband.noise_fullband.tar.bz2
datasets_fullband/datasets_fullband.dev_testset_fullband.tar.bz2
)
###############################################################
AZURE_URL="https://dns3public.blob.core.windows.net/dns3archive"
mkdir -p ./datasets ./datasets_fullband
for BLOB in "${BLOB_NAMES[@]}"
do
URL="$AZURE_URL/$BLOB"
echo "Download: $BLOB"
# DRY RUN: print HTTP headers WITHOUT downloading the files
curl -s -I "$URL" | head -n 1
# Actually download the files - UNCOMMENT it when ready to download
# curl "$URL" -o "$BLOB"
# Same as above, but using wget
# wget "$URL" -O "$BLOB"
# Same, + unpack files on the fly
# curl "$URL" | tar -xjv -f -
done
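Large archives like these often fail partway through. A resume-friendly variant of the download step above can be sketched as follows; the `blob_out_path` helper is hypothetical (not part of the official script), and `curl -C -` resumes a partial download from the size of the existing local file:

```shell
#!/usr/bin/env bash
# Sketch only: resume-capable download for a single blob.
# blob_out_path is a hypothetical helper, not part of the DNS scripts.
blob_out_path() {
    # Join the output root and the blob's relative path.
    printf '%s/%s\n' "$1" "$2"
}

AZURE_URL="https://dns3public.blob.core.windows.net/dns3archive"
OUT="$(blob_out_path . datasets/datasets.noise.tar.bz2)"
echo "$OUT"
# Uncomment when ready to download; -C - resumes, --create-dirs
# creates the datasets/ subdirectory if needed:
# curl -C - --create-dirs -o "$OUT" "$AZURE_URL/datasets/datasets.noise.tar.bz2"
```

This keeps the same per-blob layout as the loop above, so a resumed run picks up exactly where the original script left off.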

download-dns-challenge-4.sh Normal file

@@ -0,0 +1,497 @@
#!/usr/bin/bash
# ***** Datasets for ICASSP 2022 DNS Challenge 4 *****
# NOTE: Before downloading, make sure you have enough space
# on your local storage!
# In all, you will need at least 855GB to store the UNPACKED data.
# Archived, the same data takes 510GB total.
# Please comment out the files you don't need before launching
# the script.
# NOTE: By default, the script *DOES NOT* DOWNLOAD ANY FILES!
# Please scroll down and edit this script to pick the
# downloading method that works best for you.
# -------------------------------------------------------------
# The directory structure of the unpacked data is:
# . 855G
# +-- datasets 4.3G
# | \-- impulse_responses 4.3G
# \-- datasets_fullband 850G
# +-- emotional_speech 2.3G
# +-- french_speech 63G
# +-- german_speech 263G
# +-- italian_speech 39G
# +-- read_speech 300G
# +-- russian_speech 12G
# +-- spanish_speech 66G
# +-- vctk_wav48_silence_trimmed 39G
# +-- VocalSet_48kHz_mono 1G
# +-- dev_testset 3G
# | +-- enrollment_data 644M
# | \-- noisy_testclips 2.4G
# \-- noise_fullband 60G
BLOB_NAMES=(
datasets_fullband/read_speech_000_0.00_3.72.tar.bz2
datasets_fullband/read_speech_001_3.72_3.85.tar.bz2
datasets_fullband/read_speech_002_3.85_3.93.tar.bz2
datasets_fullband/read_speech_003_3.93_3.98.tar.bz2
datasets_fullband/read_speech_004_3.98_4.02.tar.bz2
datasets_fullband/read_speech_005_4.02_4.06.tar.bz2
datasets_fullband/read_speech_006_4.06_4.09.tar.bz2
datasets_fullband/read_speech_007_4.09_4.12.tar.bz2
datasets_fullband/read_speech_008_4.12_4.15.tar.bz2
datasets_fullband/read_speech_009_4.15_4.17.tar.bz2
datasets_fullband/read_speech_010_4.17_4.19.tar.bz2
datasets_fullband/read_speech_011_4.19_4.22.tar.bz2
datasets_fullband/read_speech_012_4.22_4.24.tar.bz2
datasets_fullband/read_speech_013_4.24_4.25.tar.bz2
datasets_fullband/read_speech_014_4.25_4.27.tar.bz2
datasets_fullband/read_speech_015_4.27_4.29.tar.bz2
datasets_fullband/read_speech_016_4.29_4.31.tar.bz2
datasets_fullband/read_speech_017_4.31_4.33.tar.bz2
datasets_fullband/read_speech_018_4.33_4.35.tar.bz2
datasets_fullband/read_speech_019_4.35_4.37.tar.bz2
datasets_fullband/read_speech_020_4.37_4.39.tar.bz2
datasets_fullband/read_speech_021_4.39_4.41.tar.bz2
datasets_fullband/read_speech_022_4.41_4.43.tar.bz2
datasets_fullband/read_speech_023_4.43_4.45.tar.bz2
datasets_fullband/read_speech_024_4.45_4.48.tar.bz2
datasets_fullband/read_speech_025_4.48_4.51.tar.bz2
datasets_fullband/read_speech_026_4.51_4.54.tar.bz2
datasets_fullband/read_speech_027_4.54_4.59.tar.bz2
datasets_fullband/read_speech_028_4.59_4.71.tar.bz2
datasets_fullband/read_speech_029_4.71_NA.tar.bz2
datasets_fullband/read_speech_030_NA_NA.tar.bz2
datasets_fullband/read_speech_031_NA_NA.tar.bz2
datasets_fullband/read_speech_032_NA_NA.tar.bz2
datasets_fullband/read_speech_033_NA_NA.tar.bz2
datasets_fullband/read_speech_034_NA_NA.tar.bz2
datasets_fullband/read_speech_035_NA_NA.tar.bz2
datasets_fullband/read_speech_036_NA_NA.tar.bz2
datasets_fullband/read_speech_037_NA_NA.tar.bz2
datasets_fullband/read_speech_038_NA_NA.tar.bz2
datasets_fullband/read_speech_039_NA_NA.tar.bz2
datasets_fullband/read_speech_040_NA_NA.tar.bz2
datasets_fullband/read_speech_041_NA_NA.tar.bz2
datasets_fullband/read_speech_042_NA_NA.tar.bz2
datasets_fullband/read_speech_043_NA_NA.tar.bz2
datasets_fullband/read_speech_044_NA_NA.tar.bz2
datasets_fullband/read_speech_045_NA_NA.tar.bz2
datasets_fullband/read_speech_046_NA_NA.tar.bz2
datasets_fullband/read_speech_047_NA_NA.tar.bz2
datasets_fullband/read_speech_048_NA_NA.tar.bz2
datasets_fullband/read_speech_049_NA_NA.tar.bz2
datasets_fullband/read_speech_050_NA_NA.tar.bz2
datasets_fullband/read_speech_051_NA_NA.tar.bz2
datasets_fullband/read_speech_052_NA_NA.tar.bz2
datasets_fullband/read_speech_053_NA_NA.tar.bz2
datasets_fullband/read_speech_054_NA_NA.tar.bz2
datasets_fullband/read_speech_055_NA_NA.tar.bz2
datasets_fullband/read_speech_056_NA_NA.tar.bz2
datasets_fullband/read_speech_057_NA_NA.tar.bz2
datasets_fullband/read_speech_058_NA_NA.tar.bz2
datasets_fullband/read_speech_059_NA_NA.tar.bz2
datasets_fullband/read_speech_060_NA_NA.tar.bz2
datasets_fullband/read_speech_061_NA_NA.tar.bz2
datasets_fullband/read_speech_062_NA_NA.tar.bz2
datasets_fullband/read_speech_063_NA_NA.tar.bz2
datasets_fullband/read_speech_064_NA_NA.tar.bz2
datasets_fullband/read_speech_065_NA_NA.tar.bz2
datasets_fullband/read_speech_066_NA_NA.tar.bz2
datasets_fullband/read_speech_067_NA_NA.tar.bz2
datasets_fullband/read_speech_068_NA_NA.tar.bz2
datasets_fullband/read_speech_069_NA_NA.tar.bz2
datasets_fullband/read_speech_070_NA_NA.tar.bz2
datasets_fullband/read_speech_071_NA_NA.tar.bz2
datasets_fullband/read_speech_072_NA_NA.tar.bz2
datasets_fullband/read_speech_073_NA_NA.tar.bz2
datasets_fullband/read_speech_074_NA_NA.tar.bz2
datasets_fullband/read_speech_075_NA_NA.tar.bz2
datasets_fullband/read_speech_076_NA_NA.tar.bz2
datasets_fullband/read_speech_077_NA_NA.tar.bz2
datasets_fullband/read_speech_078_NA_NA.tar.bz2
datasets_fullband/read_speech_079_NA_NA.tar.bz2
datasets_fullband/read_speech_080_NA_NA.tar.bz2
datasets_fullband/read_speech_081_NA_NA.tar.bz2
datasets_fullband/read_speech_082_NA_NA.tar.bz2
datasets_fullband/read_speech_083_NA_NA.tar.bz2
datasets_fullband/read_speech_084_NA_NA.tar.bz2
datasets_fullband/read_speech_085_NA_NA.tar.bz2
datasets_fullband/read_speech_086_NA_NA.tar.bz2
datasets_fullband/read_speech_087_NA_NA.tar.bz2
datasets_fullband/read_speech_088_NA_NA.tar.bz2
datasets_fullband/read_speech_089_NA_NA.tar.bz2
datasets_fullband/read_speech_090_NA_NA.tar.bz2
datasets_fullband/read_speech_091_NA_NA.tar.bz2
datasets_fullband/read_speech_092_NA_NA.tar.bz2
datasets_fullband/read_speech_093_NA_NA.tar.bz2
datasets_fullband/read_speech_094_NA_NA.tar.bz2
datasets_fullband/read_speech_095_NA_NA.tar.bz2
datasets_fullband/read_speech_096_NA_NA.tar.bz2
datasets_fullband/read_speech_097_NA_NA.tar.bz2
datasets_fullband/read_speech_098_NA_NA.tar.bz2
datasets_fullband/read_speech_099_NA_NA.tar.bz2
datasets_fullband/read_speech_100_NA_NA.tar.bz2
datasets_fullband/read_speech_101_NA_NA.tar.bz2
datasets_fullband/read_speech_102_NA_NA.tar.bz2
datasets_fullband/read_speech_103_NA_NA.tar.bz2
datasets_fullband/read_speech_104_NA_NA.tar.bz2
datasets_fullband/read_speech_105_NA_NA.tar.bz2
datasets_fullband/read_speech_106_NA_NA.tar.bz2
datasets_fullband/read_speech_107_NA_NA.tar.bz2
datasets_fullband/read_speech_108_NA_NA.tar.bz2
datasets_fullband/read_speech_109_NA_NA.tar.bz2
datasets_fullband/read_speech_110_NA_NA.tar.bz2
datasets_fullband/read_speech_111_NA_NA.tar.bz2
datasets_fullband/read_speech_112_NA_NA.tar.bz2
datasets_fullband/read_speech_113_NA_NA.tar.bz2
datasets_fullband/read_speech_114_NA_NA.tar.bz2
datasets_fullband/read_speech_115_NA_NA.tar.bz2
datasets_fullband/read_speech_116_NA_NA.tar.bz2
datasets_fullband/read_speech_117_NA_NA.tar.bz2
datasets_fullband/read_speech_118_NA_NA.tar.bz2
datasets_fullband/read_speech_119_NA_NA.tar.bz2
datasets_fullband/read_speech_120_NA_NA.tar.bz2
datasets_fullband/read_speech_121_NA_NA.tar.bz2
datasets_fullband/read_speech_122_NA_NA.tar.bz2
datasets_fullband/read_speech_123_NA_NA.tar.bz2
datasets_fullband/read_speech_124_NA_NA.tar.bz2
datasets_fullband/read_speech_125_NA_NA.tar.bz2
datasets_fullband/read_speech_126_NA_NA.tar.bz2
datasets_fullband/read_speech_127_NA_NA.tar.bz2
datasets_fullband/read_speech_128_NA_NA.tar.bz2
datasets_fullband/read_speech_129_NA_NA.tar.bz2
datasets_fullband/read_speech_130_NA_NA.tar.bz2
datasets_fullband/read_speech_131_NA_NA.tar.bz2
datasets_fullband/read_speech_132_NA_NA.tar.bz2
datasets_fullband/read_speech_133_NA_NA.tar.bz2
datasets_fullband/read_speech_134_NA_NA.tar.bz2
datasets_fullband/read_speech_135_NA_NA.tar.bz2
datasets_fullband/read_speech_136_NA_NA.tar.bz2
datasets_fullband/read_speech_137_NA_NA.tar.bz2
datasets_fullband/read_speech_138_NA_NA.tar.bz2
datasets_fullband/read_speech_139_NA_NA.tar.bz2
datasets_fullband/read_speech_140_NA_NA.tar.bz2
datasets_fullband/read_speech_141_NA_NA.tar.bz2
datasets_fullband/read_speech_142_NA_NA.tar.bz2
datasets_fullband/read_speech_143_NA_NA.tar.bz2
datasets_fullband/read_speech_144_NA_NA.tar.bz2
datasets_fullband/read_speech_145_NA_NA.tar.bz2
datasets_fullband/read_speech_146_NA_NA.tar.bz2
datasets_fullband/read_speech_147_NA_NA.tar.bz2
datasets_fullband/read_speech_148_NA_NA.tar.bz2
datasets_fullband/read_speech_149_NA_NA.tar.bz2
datasets_fullband/read_speech_150_NA_NA.tar.bz2
datasets_fullband/read_speech_151_NA_NA.tar.bz2
datasets_fullband/read_speech_152_NA_NA.tar.bz2
datasets_fullband/read_speech_153_NA_NA.tar.bz2
datasets_fullband/read_speech_154_NA_NA.tar.bz2
datasets_fullband/read_speech_155_NA_NA.tar.bz2
datasets_fullband/read_speech_156_NA_NA.tar.bz2
datasets_fullband/read_speech_157_NA_NA.tar.bz2
datasets_fullband/read_speech_158_NA_NA.tar.bz2
datasets_fullband/read_speech_159_NA_NA.tar.bz2
datasets_fullband/french_speech_000_NA_NA.tar.bz2
datasets_fullband/french_speech_001_NA_NA.tar.bz2
datasets_fullband/french_speech_002_NA_NA.tar.bz2
datasets_fullband/french_speech_003_NA_NA.tar.bz2
datasets_fullband/french_speech_004_NA_NA.tar.bz2
datasets_fullband/french_speech_005_NA_NA.tar.bz2
datasets_fullband/french_speech_006_NA_NA.tar.bz2
datasets_fullband/french_speech_007_NA_NA.tar.bz2
datasets_fullband/french_speech_008_NA_NA.tar.bz2
datasets_fullband/french_speech_009_NA_NA.tar.bz2
datasets_fullband/french_speech_010_NA_NA.tar.bz2
datasets_fullband/french_speech_011_NA_NA.tar.bz2
datasets_fullband/french_speech_012_NA_NA.tar.bz2
datasets_fullband/french_speech_013_NA_NA.tar.bz2
datasets_fullband/french_speech_014_NA_NA.tar.bz2
datasets_fullband/french_speech_015_NA_NA.tar.bz2
datasets_fullband/french_speech_016_NA_NA.tar.bz2
datasets_fullband/french_speech_017_NA_NA.tar.bz2
datasets_fullband/french_speech_018_NA_NA.tar.bz2
datasets_fullband/french_speech_019_NA_NA.tar.bz2
datasets_fullband/french_speech_020_NA_NA.tar.bz2
datasets_fullband/french_speech_021_NA_NA.tar.bz2
datasets_fullband/french_speech_022_NA_NA.tar.bz2
datasets_fullband/french_speech_023_NA_NA.tar.bz2
datasets_fullband/french_speech_024_NA_NA.tar.bz2
datasets_fullband/french_speech_025_NA_NA.tar.bz2
datasets_fullband/french_speech_026_NA_NA.tar.bz2
datasets_fullband/french_speech_027_NA_NA.tar.bz2
datasets_fullband/french_speech_028_NA_NA.tar.bz2
datasets_fullband/french_speech_029_NA_NA.tar.bz2
datasets_fullband/french_speech_030_NA_NA.tar.bz2
datasets_fullband/french_speech_031_NA_NA.tar.bz2
datasets_fullband/french_speech_032_NA_NA.tar.bz2
datasets_fullband/german_speech_000_0.00_3.56.tar.bz2
datasets_fullband/german_speech_001_3.56_3.73.tar.bz2
datasets_fullband/german_speech_002_3.73_3.84.tar.bz2
datasets_fullband/german_speech_003_3.84_3.91.tar.bz2
datasets_fullband/german_speech_004_3.91_3.98.tar.bz2
datasets_fullband/german_speech_005_3.98_4.04.tar.bz2
datasets_fullband/german_speech_006_4.04_4.10.tar.bz2
datasets_fullband/german_speech_007_4.10_4.17.tar.bz2
datasets_fullband/german_speech_008_4.17_4.25.tar.bz2
datasets_fullband/german_speech_009_4.25_4.35.tar.bz2
datasets_fullband/german_speech_010_4.35_NA.tar.bz2
datasets_fullband/german_speech_011_NA_NA.tar.bz2
datasets_fullband/german_speech_012_NA_NA.tar.bz2
datasets_fullband/german_speech_013_NA_NA.tar.bz2
datasets_fullband/german_speech_014_NA_NA.tar.bz2
datasets_fullband/german_speech_015_NA_NA.tar.bz2
datasets_fullband/german_speech_016_NA_NA.tar.bz2
datasets_fullband/german_speech_017_NA_NA.tar.bz2
datasets_fullband/german_speech_018_NA_NA.tar.bz2
datasets_fullband/german_speech_019_NA_NA.tar.bz2
datasets_fullband/german_speech_020_NA_NA.tar.bz2
datasets_fullband/german_speech_021_NA_NA.tar.bz2
datasets_fullband/german_speech_022_NA_NA.tar.bz2
datasets_fullband/german_speech_023_NA_NA.tar.bz2
datasets_fullband/german_speech_024_NA_NA.tar.bz2
datasets_fullband/german_speech_025_NA_NA.tar.bz2
datasets_fullband/german_speech_026_NA_NA.tar.bz2
datasets_fullband/german_speech_027_NA_NA.tar.bz2
datasets_fullband/german_speech_028_NA_NA.tar.bz2
datasets_fullband/german_speech_029_NA_NA.tar.bz2
datasets_fullband/german_speech_030_NA_NA.tar.bz2
datasets_fullband/german_speech_031_NA_NA.tar.bz2
datasets_fullband/german_speech_032_NA_NA.tar.bz2
datasets_fullband/german_speech_033_NA_NA.tar.bz2
datasets_fullband/german_speech_034_NA_NA.tar.bz2
datasets_fullband/german_speech_035_NA_NA.tar.bz2
datasets_fullband/german_speech_036_NA_NA.tar.bz2
datasets_fullband/german_speech_037_NA_NA.tar.bz2
datasets_fullband/german_speech_038_NA_NA.tar.bz2
datasets_fullband/german_speech_039_NA_NA.tar.bz2
datasets_fullband/german_speech_040_NA_NA.tar.bz2
datasets_fullband/german_speech_041_NA_NA.tar.bz2
datasets_fullband/german_speech_042_NA_NA.tar.bz2
datasets_fullband/german_speech_043_NA_NA.tar.bz2
datasets_fullband/german_speech_044_NA_NA.tar.bz2
datasets_fullband/german_speech_045_NA_NA.tar.bz2
datasets_fullband/german_speech_046_NA_NA.tar.bz2
datasets_fullband/german_speech_047_NA_NA.tar.bz2
datasets_fullband/german_speech_048_NA_NA.tar.bz2
datasets_fullband/german_speech_049_NA_NA.tar.bz2
datasets_fullband/german_speech_050_NA_NA.tar.bz2
datasets_fullband/german_speech_051_NA_NA.tar.bz2
datasets_fullband/german_speech_052_NA_NA.tar.bz2
datasets_fullband/german_speech_053_NA_NA.tar.bz2
datasets_fullband/german_speech_054_NA_NA.tar.bz2
datasets_fullband/german_speech_055_NA_NA.tar.bz2
datasets_fullband/german_speech_056_NA_NA.tar.bz2
datasets_fullband/german_speech_057_NA_NA.tar.bz2
datasets_fullband/german_speech_058_NA_NA.tar.bz2
datasets_fullband/german_speech_059_NA_NA.tar.bz2
datasets_fullband/german_speech_060_NA_NA.tar.bz2
datasets_fullband/german_speech_061_NA_NA.tar.bz2
datasets_fullband/german_speech_062_NA_NA.tar.bz2
datasets_fullband/german_speech_063_NA_NA.tar.bz2
datasets_fullband/german_speech_064_NA_NA.tar.bz2
datasets_fullband/german_speech_065_NA_NA.tar.bz2
datasets_fullband/german_speech_066_NA_NA.tar.bz2
datasets_fullband/german_speech_067_NA_NA.tar.bz2
datasets_fullband/german_speech_068_NA_NA.tar.bz2
datasets_fullband/german_speech_069_NA_NA.tar.bz2
datasets_fullband/german_speech_070_NA_NA.tar.bz2
datasets_fullband/german_speech_071_NA_NA.tar.bz2
datasets_fullband/german_speech_072_NA_NA.tar.bz2
datasets_fullband/german_speech_073_NA_NA.tar.bz2
datasets_fullband/german_speech_074_NA_NA.tar.bz2
datasets_fullband/german_speech_075_NA_NA.tar.bz2
datasets_fullband/german_speech_076_NA_NA.tar.bz2
datasets_fullband/german_speech_077_NA_NA.tar.bz2
datasets_fullband/german_speech_078_NA_NA.tar.bz2
datasets_fullband/german_speech_079_NA_NA.tar.bz2
datasets_fullband/german_speech_080_NA_NA.tar.bz2
datasets_fullband/german_speech_081_NA_NA.tar.bz2
datasets_fullband/german_speech_082_NA_NA.tar.bz2
datasets_fullband/german_speech_083_NA_NA.tar.bz2
datasets_fullband/german_speech_084_NA_NA.tar.bz2
datasets_fullband/german_speech_085_NA_NA.tar.bz2
datasets_fullband/german_speech_086_NA_NA.tar.bz2
datasets_fullband/german_speech_087_NA_NA.tar.bz2
datasets_fullband/german_speech_088_NA_NA.tar.bz2
datasets_fullband/german_speech_089_NA_NA.tar.bz2
datasets_fullband/german_speech_090_NA_NA.tar.bz2
datasets_fullband/german_speech_091_NA_NA.tar.bz2
datasets_fullband/german_speech_092_NA_NA.tar.bz2
datasets_fullband/german_speech_093_NA_NA.tar.bz2
datasets_fullband/german_speech_094_NA_NA.tar.bz2
datasets_fullband/german_speech_095_NA_NA.tar.bz2
datasets_fullband/german_speech_096_NA_NA.tar.bz2
datasets_fullband/german_speech_097_NA_NA.tar.bz2
datasets_fullband/german_speech_098_NA_NA.tar.bz2
datasets_fullband/german_speech_099_NA_NA.tar.bz2
datasets_fullband/german_speech_100_NA_NA.tar.bz2
datasets_fullband/german_speech_101_NA_NA.tar.bz2
datasets_fullband/german_speech_102_NA_NA.tar.bz2
datasets_fullband/german_speech_103_NA_NA.tar.bz2
datasets_fullband/german_speech_104_NA_NA.tar.bz2
datasets_fullband/german_speech_105_NA_NA.tar.bz2
datasets_fullband/german_speech_106_NA_NA.tar.bz2
datasets_fullband/german_speech_107_NA_NA.tar.bz2
datasets_fullband/german_speech_108_NA_NA.tar.bz2
datasets_fullband/german_speech_109_NA_NA.tar.bz2
datasets_fullband/german_speech_110_NA_NA.tar.bz2
datasets_fullband/german_speech_111_NA_NA.tar.bz2
datasets_fullband/german_speech_112_NA_NA.tar.bz2
datasets_fullband/german_speech_113_NA_NA.tar.bz2
datasets_fullband/german_speech_114_NA_NA.tar.bz2
datasets_fullband/german_speech_115_NA_NA.tar.bz2
datasets_fullband/german_speech_116_NA_NA.tar.bz2
datasets_fullband/german_speech_117_NA_NA.tar.bz2
datasets_fullband/german_speech_118_NA_NA.tar.bz2
datasets_fullband/german_speech_119_NA_NA.tar.bz2
datasets_fullband/german_speech_120_NA_NA.tar.bz2
datasets_fullband/german_speech_121_NA_NA.tar.bz2
datasets_fullband/german_speech_122_NA_NA.tar.bz2
datasets_fullband/german_speech_123_NA_NA.tar.bz2
datasets_fullband/german_speech_124_NA_NA.tar.bz2
datasets_fullband/german_speech_125_NA_NA.tar.bz2
datasets_fullband/german_speech_126_NA_NA.tar.bz2
datasets_fullband/german_speech_127_NA_NA.tar.bz2
datasets_fullband/german_speech_128_NA_NA.tar.bz2
datasets_fullband/german_speech_129_NA_NA.tar.bz2
datasets_fullband/german_speech_130_NA_NA.tar.bz2
datasets_fullband/german_speech_131_NA_NA.tar.bz2
datasets_fullband/german_speech_132_NA_NA.tar.bz2
datasets_fullband/german_speech_133_NA_NA.tar.bz2
datasets_fullband/german_speech_134_NA_NA.tar.bz2
datasets_fullband/german_speech_135_NA_NA.tar.bz2
datasets_fullband/german_speech_136_NA_NA.tar.bz2
datasets_fullband/german_speech_137_NA_NA.tar.bz2
datasets_fullband/german_speech_138_NA_NA.tar.bz2
datasets_fullband/german_speech_139_NA_NA.tar.bz2
datasets_fullband/german_speech_140_NA_NA.tar.bz2
datasets_fullband/italian_speech_000_0.00_3.97.tar.bz2
datasets_fullband/italian_speech_001_3.97_4.19.tar.bz2
datasets_fullband/italian_speech_002_4.19_4.36.tar.bz2
datasets_fullband/italian_speech_003_4.36_4.64.tar.bz2
datasets_fullband/italian_speech_004_4.64_NA.tar.bz2
datasets_fullband/italian_speech_005_NA_NA.tar.bz2
datasets_fullband/italian_speech_006_NA_NA.tar.bz2
datasets_fullband/italian_speech_007_NA_NA.tar.bz2
datasets_fullband/italian_speech_008_NA_NA.tar.bz2
datasets_fullband/italian_speech_009_NA_NA.tar.bz2
datasets_fullband/italian_speech_010_NA_NA.tar.bz2
datasets_fullband/italian_speech_011_NA_NA.tar.bz2
datasets_fullband/italian_speech_012_NA_NA.tar.bz2
datasets_fullband/italian_speech_013_NA_NA.tar.bz2
datasets_fullband/italian_speech_014_NA_NA.tar.bz2
datasets_fullband/italian_speech_015_NA_NA.tar.bz2
datasets_fullband/italian_speech_016_NA_NA.tar.bz2
datasets_fullband/italian_speech_017_NA_NA.tar.bz2
datasets_fullband/italian_speech_018_NA_NA.tar.bz2
datasets_fullband/italian_speech_019_NA_NA.tar.bz2
datasets_fullband/italian_speech_020_NA_NA.tar.bz2
datasets_fullband/russian_speech_000_0.00_4.26.tar.bz2
datasets_fullband/russian_speech_001_4.26_NA.tar.bz2
datasets_fullband/russian_speech_002_NA_NA.tar.bz2
datasets_fullband/russian_speech_003_NA_NA.tar.bz2
datasets_fullband/russian_speech_004_NA_NA.tar.bz2
datasets_fullband/russian_speech_005_NA_NA.tar.bz2
datasets_fullband/russian_speech_006_NA_NA.tar.bz2
datasets_fullband/spanish_speech_000_0.00_4.02.tar.bz2
datasets_fullband/spanish_speech_001_4.02_4.37.tar.bz2
datasets_fullband/spanish_speech_002_4.37_NA.tar.bz2
datasets_fullband/spanish_speech_003_NA_NA.tar.bz2
datasets_fullband/spanish_speech_004_NA_NA.tar.bz2
datasets_fullband/spanish_speech_005_NA_NA.tar.bz2
datasets_fullband/spanish_speech_006_NA_NA.tar.bz2
datasets_fullband/spanish_speech_007_NA_NA.tar.bz2
datasets_fullband/spanish_speech_008_NA_NA.tar.bz2
datasets_fullband/spanish_speech_009_NA_NA.tar.bz2
datasets_fullband/spanish_speech_010_NA_NA.tar.bz2
datasets_fullband/spanish_speech_011_NA_NA.tar.bz2
datasets_fullband/spanish_speech_012_NA_NA.tar.bz2
datasets_fullband/spanish_speech_013_NA_NA.tar.bz2
datasets_fullband/spanish_speech_014_NA_NA.tar.bz2
datasets_fullband/spanish_speech_015_NA_NA.tar.bz2
datasets_fullband/spanish_speech_016_NA_NA.tar.bz2
datasets_fullband/spanish_speech_017_NA_NA.tar.bz2
datasets_fullband/spanish_speech_018_NA_NA.tar.bz2
datasets_fullband/spanish_speech_019_NA_NA.tar.bz2
datasets_fullband/spanish_speech_020_NA_NA.tar.bz2
datasets_fullband/spanish_speech_021_NA_NA.tar.bz2
datasets_fullband/spanish_speech_022_NA_NA.tar.bz2
datasets_fullband/spanish_speech_023_NA_NA.tar.bz2
datasets_fullband/spanish_speech_024_NA_NA.tar.bz2
datasets_fullband/spanish_speech_025_NA_NA.tar.bz2
datasets_fullband/spanish_speech_026_NA_NA.tar.bz2
datasets_fullband/spanish_speech_027_NA_NA.tar.bz2
datasets_fullband/spanish_speech_028_NA_NA.tar.bz2
datasets_fullband/spanish_speech_029_NA_NA.tar.bz2
datasets_fullband/spanish_speech_030_NA_NA.tar.bz2
datasets_fullband/spanish_speech_031_NA_NA.tar.bz2
datasets_fullband/spanish_speech_032_NA_NA.tar.bz2
datasets_fullband/spanish_speech_033_NA_NA.tar.bz2
datasets_fullband/spanish_speech_034_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_000_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_001_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_002_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_003_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_004_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_005_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_006_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_007_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_008_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_009_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_010_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_011_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_012_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_013_NA_NA.tar.bz2
datasets_fullband/vctk_wav48_silence_trimmed_014_NA_NA.tar.bz2
datasets_fullband/VocalSet_48kHz_mono.tar.bz2
datasets_fullband/emotional_speech.tar.bz2
datasets.impulse_responses.tar.bz2
datasets_fullband.dev_testset.enrollment_data.tar.bz2
datasets_fullband.dev_testset.noisy_testclips.tar.bz2
datasets_fullband.noise_fullband.tar.bz2
)
###############################################################
AZURE_URL="https://dns4public.blob.core.windows.net/dns4archive"
OUTPUT_PATH="."
mkdir -p "$OUTPUT_PATH"/{datasets,datasets_fullband}
for BLOB in "${BLOB_NAMES[@]}"
do
URL="$AZURE_URL/$BLOB"
echo "Download: $BLOB"
# DRY RUN: print HTTP response and Content-Length
# WITHOUT downloading the files
curl -s -I "$URL" | head -n 2
# Actually download the files: UNCOMMENT when ready to download
# curl "$URL" -o "$OUTPUT_PATH/$BLOB"
# Same as above, but using wget
# wget "$URL" -O "$OUTPUT_PATH/$BLOB"
# Same, + unpack files on the fly
# curl "$URL" | tar -xjv -C "$OUTPUT_PATH" -f -
done
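Since the script warns that the unpacked DNS4 data needs roughly 855GB, it can help to fail fast before any transfer starts. A minimal sketch, assuming GNU/POSIX `df` and `awk`; the `check_space` helper name and the warning message are assumptions, not part of the official script:

```shell
#!/usr/bin/env bash
# Sketch only: warn early if the target filesystem is likely too small
# for the unpacked data. check_space is a hypothetical helper.
check_space() {
    # usage: check_space REQUIRED_GB PATH
    # df -Pk prints sizes in 1K blocks; field 4 is the space available.
    local required_gb="$1" path="$2" avail_gb
    avail_gb=$(df -Pk "$path" | awk 'NR==2 { print int($4 / 1024 / 1024) }')
    [ "$avail_gb" -ge "$required_gb" ]
}

# 855GB matches the size comment at the top of the DNS4 script; lower it
# if you have commented out archives you do not need.
check_space 855 . || echo "WARNING: less than 855GB free in $(pwd)" >&2
```

Running this before uncommenting the `curl`/`wget` lines avoids discovering a full disk hours into the download.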