# LiST (**Li**te **S**elf-**T**raining)
This is the implementation of the paper [LiST: Lite Self-training Makes Efficient Few-shot Learners](https://arxiv.org/abs/2110.06274). LiST is short for **Li**te **S**elf-**T**raining.
## Overview
<img src="./figs/list.png" width="450"/>
## Setup Environment
### Install via pip:
1. Create a conda environment running Python 3.7:
```
conda create --name LiST python=3.7
conda activate LiST
```
2. Install the required dependencies:
```
pip install -r requirements.txt
```
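To quickly confirm that the key dependencies installed correctly, you can run a short import check. This one-liner is a minimal sanity check added here for convenience, not part of the original instructions:
```bash
# Sanity check: confirm the core dependencies import and print their versions.
python -c "import torch, transformers; print('torch', torch.__version__); print('transformers', transformers.__version__)"
```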
### Use Docker:
1. Pull the Docker image:
```
docker pull yaqing/pytorch-few-shot:v0.6
```
2. Run the Docker container:
```
docker run -it --rm --runtime nvidia yaqing/pytorch-few-shot:v0.6 bash
```
If this is your first time using Docker, please refer to the documentation: https://docs.docker.com/
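
To verify that the container can see the GPU (assuming the NVIDIA runtime is set up as in the `docker run` command above), a quick check is:
```bash
# Inside the container: verify GPU visibility and that PyTorch can use CUDA.
nvidia-smi
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
```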
**NOTE**: Different versions of packages (such as `pytorch`, `transformers`, etc.) may lead to results that differ from the paper. However, the overall trend should hold regardless of the package versions you use.
## Prepare the data
Please run the following commands to prepare data for experiments:
```bash
cd data
bash prepare_dataset.sh
cd ..
```
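
As a quick sanity check (not part of the original instructions), you can list the data directory after the script finishes; consult `prepare_dataset.sh` for the exact layout it produces:
```bash
# Spot-check that the preparation script created dataset folders.
ls data
```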
## Run the model
We provide scripts for running the tasks. Please use the bash scripts under the LiST directory.
Run LiST as:
```bash
bash run.sh
```
Note that we ran the LiST experiments on a V100 GPU (32GB). You may need to reduce the batch size for GPUs with less memory.
#### Supported datasets:
MNLI, RTE, QQP, SST-2, subj, and MPQA with 10, 20, or 30 shots.
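
If you want to sweep over several tasks and shot counts, a hypothetical wrapper is sketched below. The `TASK`/`SHOT` variables are illustrative assumptions, not a documented interface; inspect `run.sh` to see how it actually takes its configuration:
```bash
# Hypothetical sweep over the supported tasks and shot counts.
# NOTE: the TASK/SHOT environment variables below are assumptions for
# illustration; adapt them to whatever parameters run.sh actually reads.
for TASK in MNLI RTE QQP SST-2 subj MPQA; do
  for SHOT in 10 20 30; do
    TASK=$TASK SHOT=$SHOT bash run.sh
  done
done
```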
### Notes and Acknowledgments
The implementation is based on https://github.com/huggingface/transformers <br/>
We also used some code from: https://github.com/princeton-nlp/LM-BFF
### How do I cite LiST?
```
@article{wang2021list,
title={LiST: Lite Self-training Makes Efficient Few-shot Learners},
author={Wang, Yaqing and Mukherjee, Subhabrata and Liu, Xiaodong and Gao, Jing and Awadallah, Ahmed Hassan and Gao, Jianfeng},
journal={arXiv preprint arXiv:2110.06274},
year={2021}
}
```