Add files via upload

Parent: ccb1eeb2bd
Commit: 4db12ac10b

README.md (74 changes)

@@ -1,14 +1,66 @@
# Project

## DIGITS EXPERIMENTS

To download the datasets, you can e.g. unpack the following [file](https://github.com/thuml/CDAN/blob/master/data/usps2mnist/images.tar.gz) into data/digits.

For the sake of speed when running experiments, the code generates a pkl file containing the whole datasets (on first run) and then loads it at runtime.

To run the code on the original datasets, run:

`python train_digits.py {method} --task {task}`

where `method` belongs to [`CDAN`, `CDAN-E`, `DANN`, `IWDAN`, `NANN`, `IWDANORACLE`, `IWCDAN`, `IWCDANORACLE`, `IWCDAN-E`, `IWCDAN-EORACLE`] and `task` is either `mnist2usps` or `usps2mnist`.

To run the code on the subsampled datasets, run:

`python train_digits.py {method} --task {task} --ratio 1`

To reproduce Figs. 1 and 2, run the following command with various seeds:

`python train_digits.py {method} --ratio {ratio}`

where `ratio` satisfies 100 <= ratio < 150 (to subsample the target) or 200 <= ratio < 250 (to subsample the source). Each value corresponds to a given subsampling; the exact fractions can be found in the subsampling list of `data_list.txt`.
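The pkl caching described above (build the dataset once, then reload it from disk) can be sketched as follows. This is a minimal illustration, not the repo's actual code; `build_dataset`, `load_dataset`, and the cache location are hypothetical names:

```python
import os
import pickle
import tempfile

def build_dataset(name):
    # Placeholder for the expensive first-run step (parsing the raw images).
    return {"name": name, "images": list(range(10))}

def load_dataset(name, cache_dir):
    """Return the dataset, caching it as a pkl file after the first run."""
    cache_path = os.path.join(cache_dir, name + ".pkl")
    if os.path.exists(cache_path):
        # Fast path: reload the previously pickled dataset.
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    # Slow path (first run): build the dataset, then write the cache.
    data = build_dataset(name)
    os.makedirs(cache_dir, exist_ok=True)
    with open(cache_path, "wb") as f:
        pickle.dump(data, f)
    return data

cache_dir = tempfile.mkdtemp()
first = load_dataset("usps2mnist", cache_dir)   # builds and writes the pkl
second = load_dataset("usps2mnist", cache_dir)  # loads the pkl directly
```

Subsequent runs pay only the cost of one `pickle.load`, which is why the first run is noticeably slower than the rest.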
## VISDA AND OFFICE DATASETS

The VisDA dataset can be found here: https://github.com/VisionLearningGroup/taskcv-2017-public.

The Office-31 dataset can be found here: https://people.eecs.berkeley.edu/~jhoffman/domainadapt.

The Office-Home dataset can be found here: http://hemanthdv.org/OfficeHome-Dataset.

They should be downloaded and placed in the corresponding folders under data. The code generates a test file once for faster evaluation; this might take a while during the first VisDA run.

### Discriminator-based methods

To run the code on the original datasets, run:

`python train_image.py {method} --dset {dset} --s_dset_file {s_dset_file} --t_dset_file {t_dset_file}`

where:

- `method` belongs to [`CDAN`, `CDAN-E`, `DANN`, `IWDAN`, `NANN`, `IWDANORACLE`, `IWCDAN`, `IWCDANORACLE`, `IWCDAN-E`, `IWCDAN-EORACLE`]
- `dset` belongs to [`visda`, `office-31`, `office-home`]
- `s_dset_file` corresponds to the source domain; the filename can be found in the corresponding data folder, e.g. `dslr_list.txt` (not needed for VisDA)
- `t_dset_file` corresponds to the target domain; the filename can be found in the corresponding data folder, e.g. `amazon_list.txt` (not needed for VisDA)

To run the code on the subsampled datasets, run the same command with `--ratio 1` appended.
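The `*_list.txt` files referenced above are plain text, one `relative/image/path label` pair per line (as in the lists added by this commit). A minimal parser could look like this; `read_image_list` is a hypothetical helper for illustration, not the repo's actual loader:

```python
import tempfile

def read_image_list(path):
    """Parse a data list file: one 'relative/image/path label' pair per line."""
    samples = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            # The label is the last space-separated token; the rest is the path.
            img_path, label = line.rsplit(" ", 1)
            samples.append((img_path, int(label)))
    return samples

# Demo on a tiny list in the same format as dslr_list.txt.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("dslr/images/calculator/frame_0001.jpg 5\n"
            "dslr/images/ring_binder/frame_0001.jpg 24\n")
    list_path = f.name

samples = read_image_list(list_path)
```

Splitting from the right (`rsplit`) keeps paths that contain spaces intact, since only the trailing label is separated off.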
### MMD-based methods

To run the MMD algorithms (e.g. IWJAN), use the same commands as above with the `train_mmd.py` file.

## Reference

Please use the following bibtex entry if you use this code:
```
@misc{tachet2020domain,
    title={Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift},
    author={Tachet des Combes, Remi and Zhao, Han and Wang, Yu-Xiang and Gordon, Geoff},
    year={2020},
    eprint={2003.04475},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

## Contributing

@@ -26,8 +78,8 @@ contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additio

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.

Diff not shown for one file because of its large size.

@@ -0,0 +1,498 @@
dslr/images/calculator/frame_0001.jpg 5
dslr/images/calculator/frame_0002.jpg 5
dslr/images/calculator/frame_0003.jpg 5
dslr/images/calculator/frame_0004.jpg 5
dslr/images/calculator/frame_0005.jpg 5
dslr/images/calculator/frame_0006.jpg 5
dslr/images/calculator/frame_0007.jpg 5
dslr/images/calculator/frame_0008.jpg 5
dslr/images/calculator/frame_0009.jpg 5
dslr/images/calculator/frame_0010.jpg 5
dslr/images/calculator/frame_0011.jpg 5
dslr/images/calculator/frame_0012.jpg 5
dslr/images/ring_binder/frame_0001.jpg 24
dslr/images/ring_binder/frame_0002.jpg 24
dslr/images/ring_binder/frame_0003.jpg 24
dslr/images/ring_binder/frame_0004.jpg 24
dslr/images/ring_binder/frame_0005.jpg 24
dslr/images/ring_binder/frame_0006.jpg 24
dslr/images/ring_binder/frame_0007.jpg 24
dslr/images/ring_binder/frame_0008.jpg 24
dslr/images/ring_binder/frame_0009.jpg 24
dslr/images/ring_binder/frame_0010.jpg 24
dslr/images/printer/frame_0001.jpg 21
dslr/images/printer/frame_0002.jpg 21
dslr/images/printer/frame_0003.jpg 21
dslr/images/printer/frame_0004.jpg 21
dslr/images/printer/frame_0005.jpg 21
dslr/images/printer/frame_0006.jpg 21
dslr/images/printer/frame_0007.jpg 21
dslr/images/printer/frame_0008.jpg 21
dslr/images/printer/frame_0009.jpg 21
dslr/images/printer/frame_0010.jpg 21
dslr/images/printer/frame_0011.jpg 21
dslr/images/printer/frame_0012.jpg 21
dslr/images/printer/frame_0013.jpg 21
dslr/images/printer/frame_0014.jpg 21
dslr/images/printer/frame_0015.jpg 21
dslr/images/keyboard/frame_0001.jpg 11
dslr/images/keyboard/frame_0002.jpg 11
dslr/images/keyboard/frame_0003.jpg 11
dslr/images/keyboard/frame_0004.jpg 11
dslr/images/keyboard/frame_0005.jpg 11
dslr/images/keyboard/frame_0006.jpg 11
dslr/images/keyboard/frame_0007.jpg 11
dslr/images/keyboard/frame_0008.jpg 11
dslr/images/keyboard/frame_0009.jpg 11
dslr/images/keyboard/frame_0010.jpg 11
dslr/images/scissors/frame_0001.jpg 26
dslr/images/scissors/frame_0002.jpg 26
dslr/images/scissors/frame_0003.jpg 26
dslr/images/scissors/frame_0004.jpg 26
dslr/images/scissors/frame_0005.jpg 26
dslr/images/scissors/frame_0006.jpg 26
dslr/images/scissors/frame_0007.jpg 26
dslr/images/scissors/frame_0008.jpg 26
dslr/images/scissors/frame_0009.jpg 26
dslr/images/scissors/frame_0010.jpg 26
dslr/images/scissors/frame_0011.jpg 26
dslr/images/scissors/frame_0012.jpg 26
dslr/images/scissors/frame_0013.jpg 26
dslr/images/scissors/frame_0014.jpg 26
dslr/images/scissors/frame_0015.jpg 26
dslr/images/scissors/frame_0016.jpg 26
dslr/images/scissors/frame_0017.jpg 26
dslr/images/scissors/frame_0018.jpg 26
dslr/images/laptop_computer/frame_0001.jpg 12
dslr/images/laptop_computer/frame_0002.jpg 12
dslr/images/laptop_computer/frame_0003.jpg 12
dslr/images/laptop_computer/frame_0004.jpg 12
dslr/images/laptop_computer/frame_0005.jpg 12
dslr/images/laptop_computer/frame_0006.jpg 12
dslr/images/laptop_computer/frame_0007.jpg 12
dslr/images/laptop_computer/frame_0008.jpg 12
dslr/images/laptop_computer/frame_0009.jpg 12
dslr/images/laptop_computer/frame_0010.jpg 12
dslr/images/laptop_computer/frame_0011.jpg 12
dslr/images/laptop_computer/frame_0012.jpg 12
dslr/images/laptop_computer/frame_0013.jpg 12
dslr/images/laptop_computer/frame_0014.jpg 12
dslr/images/laptop_computer/frame_0015.jpg 12
dslr/images/laptop_computer/frame_0016.jpg 12
dslr/images/laptop_computer/frame_0017.jpg 12
dslr/images/laptop_computer/frame_0018.jpg 12
dslr/images/laptop_computer/frame_0019.jpg 12
dslr/images/laptop_computer/frame_0020.jpg 12
dslr/images/laptop_computer/frame_0021.jpg 12
dslr/images/laptop_computer/frame_0022.jpg 12
dslr/images/laptop_computer/frame_0023.jpg 12
dslr/images/laptop_computer/frame_0024.jpg 12
dslr/images/mouse/frame_0001.jpg 16
dslr/images/mouse/frame_0002.jpg 16
dslr/images/mouse/frame_0003.jpg 16
dslr/images/mouse/frame_0004.jpg 16
dslr/images/mouse/frame_0005.jpg 16
dslr/images/mouse/frame_0006.jpg 16
dslr/images/mouse/frame_0007.jpg 16
dslr/images/mouse/frame_0008.jpg 16
dslr/images/mouse/frame_0009.jpg 16
dslr/images/mouse/frame_0010.jpg 16
dslr/images/mouse/frame_0011.jpg 16
dslr/images/mouse/frame_0012.jpg 16
dslr/images/monitor/frame_0001.jpg 15
dslr/images/monitor/frame_0002.jpg 15
dslr/images/monitor/frame_0003.jpg 15
dslr/images/monitor/frame_0004.jpg 15
dslr/images/monitor/frame_0005.jpg 15
dslr/images/monitor/frame_0006.jpg 15
dslr/images/monitor/frame_0007.jpg 15
dslr/images/monitor/frame_0008.jpg 15
dslr/images/monitor/frame_0009.jpg 15
dslr/images/monitor/frame_0010.jpg 15
dslr/images/monitor/frame_0011.jpg 15
dslr/images/monitor/frame_0012.jpg 15
dslr/images/monitor/frame_0013.jpg 15
dslr/images/monitor/frame_0014.jpg 15
dslr/images/monitor/frame_0015.jpg 15
dslr/images/monitor/frame_0016.jpg 15
dslr/images/monitor/frame_0017.jpg 15
dslr/images/monitor/frame_0018.jpg 15
dslr/images/monitor/frame_0019.jpg 15
dslr/images/monitor/frame_0020.jpg 15
dslr/images/monitor/frame_0021.jpg 15
dslr/images/monitor/frame_0022.jpg 15
dslr/images/mug/frame_0001.jpg 17
dslr/images/mug/frame_0002.jpg 17
dslr/images/mug/frame_0003.jpg 17
dslr/images/mug/frame_0004.jpg 17
dslr/images/mug/frame_0005.jpg 17
dslr/images/mug/frame_0006.jpg 17
dslr/images/mug/frame_0007.jpg 17
dslr/images/mug/frame_0008.jpg 17
dslr/images/tape_dispenser/frame_0001.jpg 29
dslr/images/tape_dispenser/frame_0002.jpg 29
dslr/images/tape_dispenser/frame_0003.jpg 29
dslr/images/tape_dispenser/frame_0004.jpg 29
dslr/images/tape_dispenser/frame_0005.jpg 29
dslr/images/tape_dispenser/frame_0006.jpg 29
dslr/images/tape_dispenser/frame_0007.jpg 29
dslr/images/tape_dispenser/frame_0008.jpg 29
dslr/images/tape_dispenser/frame_0009.jpg 29
dslr/images/tape_dispenser/frame_0010.jpg 29
dslr/images/tape_dispenser/frame_0011.jpg 29
dslr/images/tape_dispenser/frame_0012.jpg 29
dslr/images/tape_dispenser/frame_0013.jpg 29
dslr/images/tape_dispenser/frame_0014.jpg 29
dslr/images/tape_dispenser/frame_0015.jpg 29
dslr/images/tape_dispenser/frame_0016.jpg 29
dslr/images/tape_dispenser/frame_0017.jpg 29
dslr/images/tape_dispenser/frame_0018.jpg 29
dslr/images/tape_dispenser/frame_0019.jpg 29
dslr/images/tape_dispenser/frame_0020.jpg 29
dslr/images/tape_dispenser/frame_0021.jpg 29
dslr/images/tape_dispenser/frame_0022.jpg 29
dslr/images/pen/frame_0001.jpg 19
dslr/images/pen/frame_0002.jpg 19
dslr/images/pen/frame_0003.jpg 19
dslr/images/pen/frame_0004.jpg 19
dslr/images/pen/frame_0005.jpg 19
dslr/images/pen/frame_0006.jpg 19
dslr/images/pen/frame_0007.jpg 19
dslr/images/pen/frame_0008.jpg 19
dslr/images/pen/frame_0009.jpg 19
dslr/images/pen/frame_0010.jpg 19
dslr/images/bike/frame_0001.jpg 1
dslr/images/bike/frame_0002.jpg 1
dslr/images/bike/frame_0003.jpg 1
dslr/images/bike/frame_0004.jpg 1
dslr/images/bike/frame_0005.jpg 1
dslr/images/bike/frame_0006.jpg 1
dslr/images/bike/frame_0007.jpg 1
dslr/images/bike/frame_0008.jpg 1
dslr/images/bike/frame_0009.jpg 1
dslr/images/bike/frame_0010.jpg 1
dslr/images/bike/frame_0011.jpg 1
dslr/images/bike/frame_0012.jpg 1
dslr/images/bike/frame_0013.jpg 1
dslr/images/bike/frame_0014.jpg 1
dslr/images/bike/frame_0015.jpg 1
dslr/images/bike/frame_0016.jpg 1
dslr/images/bike/frame_0017.jpg 1
dslr/images/bike/frame_0018.jpg 1
dslr/images/bike/frame_0019.jpg 1
dslr/images/bike/frame_0020.jpg 1
dslr/images/bike/frame_0021.jpg 1
dslr/images/punchers/frame_0001.jpg 23
dslr/images/punchers/frame_0002.jpg 23
dslr/images/punchers/frame_0003.jpg 23
dslr/images/punchers/frame_0004.jpg 23
dslr/images/punchers/frame_0005.jpg 23
dslr/images/punchers/frame_0006.jpg 23
dslr/images/punchers/frame_0007.jpg 23
dslr/images/punchers/frame_0008.jpg 23
dslr/images/punchers/frame_0009.jpg 23
dslr/images/punchers/frame_0010.jpg 23
dslr/images/punchers/frame_0011.jpg 23
dslr/images/punchers/frame_0012.jpg 23
dslr/images/punchers/frame_0013.jpg 23
dslr/images/punchers/frame_0014.jpg 23
dslr/images/punchers/frame_0015.jpg 23
dslr/images/punchers/frame_0016.jpg 23
dslr/images/punchers/frame_0017.jpg 23
dslr/images/punchers/frame_0018.jpg 23
dslr/images/back_pack/frame_0001.jpg 0
dslr/images/back_pack/frame_0002.jpg 0
dslr/images/back_pack/frame_0003.jpg 0
dslr/images/back_pack/frame_0004.jpg 0
dslr/images/back_pack/frame_0005.jpg 0
dslr/images/back_pack/frame_0006.jpg 0
dslr/images/back_pack/frame_0007.jpg 0
dslr/images/back_pack/frame_0008.jpg 0
dslr/images/back_pack/frame_0009.jpg 0
dslr/images/back_pack/frame_0010.jpg 0
dslr/images/back_pack/frame_0011.jpg 0
dslr/images/back_pack/frame_0012.jpg 0
dslr/images/desktop_computer/frame_0001.jpg 8
dslr/images/desktop_computer/frame_0002.jpg 8
dslr/images/desktop_computer/frame_0003.jpg 8
dslr/images/desktop_computer/frame_0004.jpg 8
dslr/images/desktop_computer/frame_0005.jpg 8
dslr/images/desktop_computer/frame_0006.jpg 8
dslr/images/desktop_computer/frame_0007.jpg 8
dslr/images/desktop_computer/frame_0008.jpg 8
dslr/images/desktop_computer/frame_0009.jpg 8
dslr/images/desktop_computer/frame_0010.jpg 8
dslr/images/desktop_computer/frame_0011.jpg 8
dslr/images/desktop_computer/frame_0012.jpg 8
dslr/images/desktop_computer/frame_0013.jpg 8
dslr/images/desktop_computer/frame_0014.jpg 8
dslr/images/desktop_computer/frame_0015.jpg 8
dslr/images/speaker/frame_0001.jpg 27
dslr/images/speaker/frame_0002.jpg 27
dslr/images/speaker/frame_0003.jpg 27
dslr/images/speaker/frame_0004.jpg 27
dslr/images/speaker/frame_0005.jpg 27
dslr/images/speaker/frame_0006.jpg 27
dslr/images/speaker/frame_0007.jpg 27
dslr/images/speaker/frame_0008.jpg 27
dslr/images/speaker/frame_0009.jpg 27
dslr/images/speaker/frame_0010.jpg 27
dslr/images/speaker/frame_0011.jpg 27
dslr/images/speaker/frame_0012.jpg 27
dslr/images/speaker/frame_0013.jpg 27
dslr/images/speaker/frame_0014.jpg 27
dslr/images/speaker/frame_0015.jpg 27
dslr/images/speaker/frame_0016.jpg 27
dslr/images/speaker/frame_0017.jpg 27
dslr/images/speaker/frame_0018.jpg 27
dslr/images/speaker/frame_0019.jpg 27
dslr/images/speaker/frame_0020.jpg 27
dslr/images/speaker/frame_0021.jpg 27
dslr/images/speaker/frame_0022.jpg 27
dslr/images/speaker/frame_0023.jpg 27
dslr/images/speaker/frame_0024.jpg 27
dslr/images/speaker/frame_0025.jpg 27
dslr/images/speaker/frame_0026.jpg 27
dslr/images/mobile_phone/frame_0001.jpg 14
dslr/images/mobile_phone/frame_0002.jpg 14
dslr/images/mobile_phone/frame_0003.jpg 14
dslr/images/mobile_phone/frame_0004.jpg 14
dslr/images/mobile_phone/frame_0005.jpg 14
dslr/images/mobile_phone/frame_0006.jpg 14
dslr/images/mobile_phone/frame_0007.jpg 14
dslr/images/mobile_phone/frame_0008.jpg 14
dslr/images/mobile_phone/frame_0009.jpg 14
dslr/images/mobile_phone/frame_0010.jpg 14
dslr/images/mobile_phone/frame_0011.jpg 14
dslr/images/mobile_phone/frame_0012.jpg 14
dslr/images/mobile_phone/frame_0013.jpg 14
dslr/images/mobile_phone/frame_0014.jpg 14
dslr/images/mobile_phone/frame_0015.jpg 14
dslr/images/mobile_phone/frame_0016.jpg 14
dslr/images/mobile_phone/frame_0017.jpg 14
dslr/images/mobile_phone/frame_0018.jpg 14
dslr/images/mobile_phone/frame_0019.jpg 14
dslr/images/mobile_phone/frame_0020.jpg 14
dslr/images/mobile_phone/frame_0021.jpg 14
dslr/images/mobile_phone/frame_0022.jpg 14
dslr/images/mobile_phone/frame_0023.jpg 14
dslr/images/mobile_phone/frame_0024.jpg 14
dslr/images/mobile_phone/frame_0025.jpg 14
dslr/images/mobile_phone/frame_0026.jpg 14
dslr/images/mobile_phone/frame_0027.jpg 14
dslr/images/mobile_phone/frame_0028.jpg 14
dslr/images/mobile_phone/frame_0029.jpg 14
dslr/images/mobile_phone/frame_0030.jpg 14
dslr/images/mobile_phone/frame_0031.jpg 14
dslr/images/paper_notebook/frame_0001.jpg 18
dslr/images/paper_notebook/frame_0002.jpg 18
dslr/images/paper_notebook/frame_0003.jpg 18
dslr/images/paper_notebook/frame_0004.jpg 18
dslr/images/paper_notebook/frame_0005.jpg 18
dslr/images/paper_notebook/frame_0006.jpg 18
dslr/images/paper_notebook/frame_0007.jpg 18
dslr/images/paper_notebook/frame_0008.jpg 18
dslr/images/paper_notebook/frame_0009.jpg 18
dslr/images/paper_notebook/frame_0010.jpg 18
dslr/images/ruler/frame_0001.jpg 25
dslr/images/ruler/frame_0002.jpg 25
dslr/images/ruler/frame_0003.jpg 25
dslr/images/ruler/frame_0004.jpg 25
dslr/images/ruler/frame_0005.jpg 25
dslr/images/ruler/frame_0006.jpg 25
dslr/images/ruler/frame_0007.jpg 25
dslr/images/letter_tray/frame_0001.jpg 13
dslr/images/letter_tray/frame_0002.jpg 13
dslr/images/letter_tray/frame_0003.jpg 13
dslr/images/letter_tray/frame_0004.jpg 13
dslr/images/letter_tray/frame_0005.jpg 13
dslr/images/letter_tray/frame_0006.jpg 13
dslr/images/letter_tray/frame_0007.jpg 13
dslr/images/letter_tray/frame_0008.jpg 13
dslr/images/letter_tray/frame_0009.jpg 13
dslr/images/letter_tray/frame_0010.jpg 13
dslr/images/letter_tray/frame_0011.jpg 13
dslr/images/letter_tray/frame_0012.jpg 13
dslr/images/letter_tray/frame_0013.jpg 13
dslr/images/letter_tray/frame_0014.jpg 13
dslr/images/letter_tray/frame_0015.jpg 13
dslr/images/letter_tray/frame_0016.jpg 13
dslr/images/file_cabinet/frame_0001.jpg 9
dslr/images/file_cabinet/frame_0002.jpg 9
dslr/images/file_cabinet/frame_0003.jpg 9
dslr/images/file_cabinet/frame_0004.jpg 9
dslr/images/file_cabinet/frame_0005.jpg 9
dslr/images/file_cabinet/frame_0006.jpg 9
dslr/images/file_cabinet/frame_0007.jpg 9
dslr/images/file_cabinet/frame_0008.jpg 9
dslr/images/file_cabinet/frame_0009.jpg 9
dslr/images/file_cabinet/frame_0010.jpg 9
dslr/images/file_cabinet/frame_0011.jpg 9
dslr/images/file_cabinet/frame_0012.jpg 9
dslr/images/file_cabinet/frame_0013.jpg 9
dslr/images/file_cabinet/frame_0014.jpg 9
dslr/images/file_cabinet/frame_0015.jpg 9
dslr/images/phone/frame_0001.jpg 20
dslr/images/phone/frame_0002.jpg 20
dslr/images/phone/frame_0003.jpg 20
dslr/images/phone/frame_0004.jpg 20
dslr/images/phone/frame_0005.jpg 20
dslr/images/phone/frame_0006.jpg 20
dslr/images/phone/frame_0007.jpg 20
dslr/images/phone/frame_0008.jpg 20
dslr/images/phone/frame_0009.jpg 20
dslr/images/phone/frame_0010.jpg 20
dslr/images/phone/frame_0011.jpg 20
dslr/images/phone/frame_0012.jpg 20
dslr/images/phone/frame_0013.jpg 20
dslr/images/bookcase/frame_0001.jpg 3
dslr/images/bookcase/frame_0002.jpg 3
dslr/images/bookcase/frame_0003.jpg 3
dslr/images/bookcase/frame_0004.jpg 3
dslr/images/bookcase/frame_0005.jpg 3
dslr/images/bookcase/frame_0006.jpg 3
dslr/images/bookcase/frame_0007.jpg 3
dslr/images/bookcase/frame_0008.jpg 3
dslr/images/bookcase/frame_0009.jpg 3
dslr/images/bookcase/frame_0010.jpg 3
dslr/images/bookcase/frame_0011.jpg 3
dslr/images/bookcase/frame_0012.jpg 3
dslr/images/projector/frame_0001.jpg 22
dslr/images/projector/frame_0002.jpg 22
dslr/images/projector/frame_0003.jpg 22
dslr/images/projector/frame_0004.jpg 22
dslr/images/projector/frame_0005.jpg 22
dslr/images/projector/frame_0006.jpg 22
dslr/images/projector/frame_0007.jpg 22
dslr/images/projector/frame_0008.jpg 22
dslr/images/projector/frame_0009.jpg 22
dslr/images/projector/frame_0010.jpg 22
dslr/images/projector/frame_0011.jpg 22
dslr/images/projector/frame_0012.jpg 22
dslr/images/projector/frame_0013.jpg 22
dslr/images/projector/frame_0014.jpg 22
dslr/images/projector/frame_0015.jpg 22
dslr/images/projector/frame_0016.jpg 22
dslr/images/projector/frame_0017.jpg 22
dslr/images/projector/frame_0018.jpg 22
dslr/images/projector/frame_0019.jpg 22
dslr/images/projector/frame_0020.jpg 22
dslr/images/projector/frame_0021.jpg 22
dslr/images/projector/frame_0022.jpg 22
dslr/images/projector/frame_0023.jpg 22
dslr/images/stapler/frame_0001.jpg 28
dslr/images/stapler/frame_0002.jpg 28
dslr/images/stapler/frame_0003.jpg 28
dslr/images/stapler/frame_0004.jpg 28
dslr/images/stapler/frame_0005.jpg 28
dslr/images/stapler/frame_0006.jpg 28
dslr/images/stapler/frame_0007.jpg 28
dslr/images/stapler/frame_0008.jpg 28
dslr/images/stapler/frame_0009.jpg 28
dslr/images/stapler/frame_0010.jpg 28
dslr/images/stapler/frame_0011.jpg 28
dslr/images/stapler/frame_0012.jpg 28
dslr/images/stapler/frame_0013.jpg 28
dslr/images/stapler/frame_0014.jpg 28
dslr/images/stapler/frame_0015.jpg 28
dslr/images/stapler/frame_0016.jpg 28
dslr/images/stapler/frame_0017.jpg 28
dslr/images/stapler/frame_0018.jpg 28
dslr/images/stapler/frame_0019.jpg 28
dslr/images/stapler/frame_0020.jpg 28
dslr/images/stapler/frame_0021.jpg 28
dslr/images/trash_can/frame_0001.jpg 30
dslr/images/trash_can/frame_0002.jpg 30
dslr/images/trash_can/frame_0003.jpg 30
dslr/images/trash_can/frame_0004.jpg 30
dslr/images/trash_can/frame_0005.jpg 30
dslr/images/trash_can/frame_0006.jpg 30
dslr/images/trash_can/frame_0007.jpg 30
dslr/images/trash_can/frame_0008.jpg 30
dslr/images/trash_can/frame_0009.jpg 30
dslr/images/trash_can/frame_0010.jpg 30
dslr/images/trash_can/frame_0011.jpg 30
dslr/images/trash_can/frame_0012.jpg 30
dslr/images/trash_can/frame_0013.jpg 30
dslr/images/trash_can/frame_0014.jpg 30
dslr/images/trash_can/frame_0015.jpg 30
dslr/images/bike_helmet/frame_0001.jpg 2
dslr/images/bike_helmet/frame_0002.jpg 2
dslr/images/bike_helmet/frame_0003.jpg 2
dslr/images/bike_helmet/frame_0004.jpg 2
dslr/images/bike_helmet/frame_0005.jpg 2
dslr/images/bike_helmet/frame_0006.jpg 2
dslr/images/bike_helmet/frame_0007.jpg 2
dslr/images/bike_helmet/frame_0008.jpg 2
dslr/images/bike_helmet/frame_0009.jpg 2
dslr/images/bike_helmet/frame_0010.jpg 2
dslr/images/bike_helmet/frame_0011.jpg 2
dslr/images/bike_helmet/frame_0012.jpg 2
dslr/images/bike_helmet/frame_0013.jpg 2
dslr/images/bike_helmet/frame_0014.jpg 2
dslr/images/bike_helmet/frame_0015.jpg 2
dslr/images/bike_helmet/frame_0016.jpg 2
dslr/images/bike_helmet/frame_0017.jpg 2
dslr/images/bike_helmet/frame_0018.jpg 2
dslr/images/bike_helmet/frame_0019.jpg 2
dslr/images/bike_helmet/frame_0020.jpg 2
dslr/images/bike_helmet/frame_0021.jpg 2
dslr/images/bike_helmet/frame_0022.jpg 2
dslr/images/bike_helmet/frame_0023.jpg 2
dslr/images/bike_helmet/frame_0024.jpg 2
dslr/images/headphones/frame_0001.jpg 10
dslr/images/headphones/frame_0002.jpg 10
dslr/images/headphones/frame_0003.jpg 10
dslr/images/headphones/frame_0004.jpg 10
dslr/images/headphones/frame_0005.jpg 10
dslr/images/headphones/frame_0006.jpg 10
dslr/images/headphones/frame_0007.jpg 10
dslr/images/headphones/frame_0008.jpg 10
dslr/images/headphones/frame_0009.jpg 10
dslr/images/headphones/frame_0010.jpg 10
dslr/images/headphones/frame_0011.jpg 10
dslr/images/headphones/frame_0012.jpg 10
dslr/images/headphones/frame_0013.jpg 10
dslr/images/desk_lamp/frame_0001.jpg 7
dslr/images/desk_lamp/frame_0002.jpg 7
dslr/images/desk_lamp/frame_0003.jpg 7
dslr/images/desk_lamp/frame_0004.jpg 7
dslr/images/desk_lamp/frame_0005.jpg 7
dslr/images/desk_lamp/frame_0006.jpg 7
dslr/images/desk_lamp/frame_0007.jpg 7
dslr/images/desk_lamp/frame_0008.jpg 7
dslr/images/desk_lamp/frame_0009.jpg 7
dslr/images/desk_lamp/frame_0010.jpg 7
dslr/images/desk_lamp/frame_0011.jpg 7
dslr/images/desk_lamp/frame_0012.jpg 7
dslr/images/desk_lamp/frame_0013.jpg 7
dslr/images/desk_lamp/frame_0014.jpg 7
dslr/images/desk_chair/frame_0001.jpg 6
dslr/images/desk_chair/frame_0002.jpg 6
dslr/images/desk_chair/frame_0003.jpg 6
dslr/images/desk_chair/frame_0004.jpg 6
dslr/images/desk_chair/frame_0005.jpg 6
dslr/images/desk_chair/frame_0006.jpg 6
dslr/images/desk_chair/frame_0007.jpg 6
dslr/images/desk_chair/frame_0008.jpg 6
dslr/images/desk_chair/frame_0009.jpg 6
dslr/images/desk_chair/frame_0010.jpg 6
dslr/images/desk_chair/frame_0011.jpg 6
dslr/images/desk_chair/frame_0012.jpg 6
dslr/images/desk_chair/frame_0013.jpg 6
dslr/images/bottle/frame_0001.jpg 4
dslr/images/bottle/frame_0002.jpg 4
dslr/images/bottle/frame_0003.jpg 4
dslr/images/bottle/frame_0004.jpg 4
dslr/images/bottle/frame_0005.jpg 4
dslr/images/bottle/frame_0006.jpg 4
dslr/images/bottle/frame_0007.jpg 4
dslr/images/bottle/frame_0008.jpg 4
dslr/images/bottle/frame_0009.jpg 4
dslr/images/bottle/frame_0010.jpg 4
dslr/images/bottle/frame_0011.jpg 4
dslr/images/bottle/frame_0012.jpg 4
dslr/images/bottle/frame_0013.jpg 4
dslr/images/bottle/frame_0014.jpg 4
dslr/images/bottle/frame_0015.jpg 4
dslr/images/bottle/frame_0016.jpg 4
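Each line in the list above pairs an image path with an integer class label, which is what the subsampling ratios in the README operate on. Per-class counts can be recovered in a few lines (a small illustrative snippet, not part of the repo's code):

```python
from collections import Counter

# A few lines in the same "path label" format as the list above.
lines = [
    "dslr/images/calculator/frame_0001.jpg 5",
    "dslr/images/calculator/frame_0002.jpg 5",
    "dslr/images/ring_binder/frame_0001.jpg 24",
]

# Count how many images belong to each integer class label.
label_counts = Counter(int(line.rsplit(" ", 1)[1]) for line in lines)
```

Such label histograms are exactly what changes between the original and subsampled lists.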

@@ -0,0 +1,795 @@
webcam/images/calculator/frame_0001.jpg 5
|
||||
webcam/images/calculator/frame_0002.jpg 5
|
||||
webcam/images/calculator/frame_0003.jpg 5
|
||||
webcam/images/calculator/frame_0004.jpg 5
|
||||
webcam/images/calculator/frame_0005.jpg 5
|
||||
webcam/images/calculator/frame_0006.jpg 5
|
||||
webcam/images/calculator/frame_0007.jpg 5
|
||||
webcam/images/calculator/frame_0008.jpg 5
|
||||
webcam/images/calculator/frame_0009.jpg 5
|
||||
webcam/images/calculator/frame_0010.jpg 5
|
||||
webcam/images/calculator/frame_0011.jpg 5
|
||||
webcam/images/calculator/frame_0012.jpg 5
|
||||
webcam/images/calculator/frame_0013.jpg 5
|
||||
webcam/images/calculator/frame_0014.jpg 5
|
||||
webcam/images/calculator/frame_0015.jpg 5
|
||||
webcam/images/calculator/frame_0016.jpg 5
|
||||
webcam/images/calculator/frame_0017.jpg 5
|
||||
webcam/images/calculator/frame_0018.jpg 5
|
||||
webcam/images/calculator/frame_0019.jpg 5
|
||||
webcam/images/calculator/frame_0020.jpg 5
|
||||
webcam/images/calculator/frame_0021.jpg 5
|
||||
webcam/images/calculator/frame_0022.jpg 5
|
||||
webcam/images/calculator/frame_0023.jpg 5
|
||||
webcam/images/calculator/frame_0024.jpg 5
|
||||
webcam/images/calculator/frame_0025.jpg 5
|
||||
webcam/images/calculator/frame_0026.jpg 5
|
||||
webcam/images/calculator/frame_0027.jpg 5
|
||||
webcam/images/calculator/frame_0028.jpg 5
|
||||
webcam/images/calculator/frame_0029.jpg 5
|
||||
webcam/images/calculator/frame_0030.jpg 5
|
||||
webcam/images/calculator/frame_0031.jpg 5
|
||||
webcam/images/ring_binder/frame_0001.jpg 24
|
||||
webcam/images/ring_binder/frame_0002.jpg 24
|
||||
webcam/images/ring_binder/frame_0003.jpg 24
|
||||
webcam/images/ring_binder/frame_0004.jpg 24
|
||||
webcam/images/ring_binder/frame_0005.jpg 24
|
||||
webcam/images/ring_binder/frame_0006.jpg 24
|
||||
webcam/images/ring_binder/frame_0007.jpg 24
|
||||
webcam/images/ring_binder/frame_0008.jpg 24
|
||||
webcam/images/ring_binder/frame_0009.jpg 24
|
||||
webcam/images/ring_binder/frame_0010.jpg 24
|
||||
webcam/images/ring_binder/frame_0011.jpg 24
|
||||
webcam/images/ring_binder/frame_0012.jpg 24
|
||||
webcam/images/ring_binder/frame_0013.jpg 24
|
||||
webcam/images/ring_binder/frame_0014.jpg 24
|
||||
webcam/images/ring_binder/frame_0015.jpg 24
|
||||
webcam/images/ring_binder/frame_0016.jpg 24
|
||||
webcam/images/ring_binder/frame_0017.jpg 24
|
||||
webcam/images/ring_binder/frame_0018.jpg 24
|
||||
webcam/images/ring_binder/frame_0019.jpg 24
|
||||
webcam/images/ring_binder/frame_0020.jpg 24
|
||||
webcam/images/ring_binder/frame_0021.jpg 24
|
||||
webcam/images/ring_binder/frame_0022.jpg 24
|
||||
webcam/images/ring_binder/frame_0023.jpg 24
|
||||
webcam/images/ring_binder/frame_0024.jpg 24
|
||||
webcam/images/ring_binder/frame_0025.jpg 24
|
||||
webcam/images/ring_binder/frame_0026.jpg 24
|
||||
webcam/images/ring_binder/frame_0027.jpg 24
|
||||
webcam/images/ring_binder/frame_0028.jpg 24
|
||||
webcam/images/ring_binder/frame_0029.jpg 24
|
||||
webcam/images/ring_binder/frame_0030.jpg 24
|
||||
webcam/images/ring_binder/frame_0031.jpg 24
|
||||
webcam/images/ring_binder/frame_0032.jpg 24
|
||||
webcam/images/ring_binder/frame_0033.jpg 24
|
||||
webcam/images/ring_binder/frame_0034.jpg 24
|
||||
webcam/images/ring_binder/frame_0035.jpg 24
|
||||
webcam/images/ring_binder/frame_0036.jpg 24
|
||||
webcam/images/ring_binder/frame_0037.jpg 24
|
||||
webcam/images/ring_binder/frame_0038.jpg 24
|
||||
webcam/images/ring_binder/frame_0039.jpg 24
|
||||
webcam/images/ring_binder/frame_0040.jpg 24
|
||||
webcam/images/printer/frame_0001.jpg 21
webcam/images/printer/frame_0002.jpg 21
webcam/images/printer/frame_0003.jpg 21
webcam/images/printer/frame_0004.jpg 21
webcam/images/printer/frame_0005.jpg 21
webcam/images/printer/frame_0006.jpg 21
webcam/images/printer/frame_0007.jpg 21
webcam/images/printer/frame_0008.jpg 21
webcam/images/printer/frame_0009.jpg 21
webcam/images/printer/frame_0010.jpg 21
webcam/images/printer/frame_0011.jpg 21
webcam/images/printer/frame_0012.jpg 21
webcam/images/printer/frame_0013.jpg 21
webcam/images/printer/frame_0014.jpg 21
webcam/images/printer/frame_0015.jpg 21
webcam/images/printer/frame_0016.jpg 21
webcam/images/printer/frame_0017.jpg 21
webcam/images/printer/frame_0018.jpg 21
webcam/images/printer/frame_0019.jpg 21
webcam/images/printer/frame_0020.jpg 21
webcam/images/keyboard/frame_0001.jpg 11
webcam/images/keyboard/frame_0002.jpg 11
webcam/images/keyboard/frame_0003.jpg 11
webcam/images/keyboard/frame_0004.jpg 11
webcam/images/keyboard/frame_0005.jpg 11
webcam/images/keyboard/frame_0006.jpg 11
webcam/images/keyboard/frame_0007.jpg 11
webcam/images/keyboard/frame_0008.jpg 11
webcam/images/keyboard/frame_0009.jpg 11
webcam/images/keyboard/frame_0010.jpg 11
webcam/images/keyboard/frame_0011.jpg 11
webcam/images/keyboard/frame_0012.jpg 11
webcam/images/keyboard/frame_0013.jpg 11
webcam/images/keyboard/frame_0014.jpg 11
webcam/images/keyboard/frame_0015.jpg 11
webcam/images/keyboard/frame_0016.jpg 11
webcam/images/keyboard/frame_0017.jpg 11
webcam/images/keyboard/frame_0018.jpg 11
webcam/images/keyboard/frame_0019.jpg 11
webcam/images/keyboard/frame_0020.jpg 11
webcam/images/keyboard/frame_0021.jpg 11
webcam/images/keyboard/frame_0022.jpg 11
webcam/images/keyboard/frame_0023.jpg 11
webcam/images/keyboard/frame_0024.jpg 11
webcam/images/keyboard/frame_0025.jpg 11
webcam/images/keyboard/frame_0026.jpg 11
webcam/images/keyboard/frame_0027.jpg 11
webcam/images/scissors/frame_0001.jpg 26
webcam/images/scissors/frame_0002.jpg 26
webcam/images/scissors/frame_0003.jpg 26
webcam/images/scissors/frame_0004.jpg 26
webcam/images/scissors/frame_0005.jpg 26
webcam/images/scissors/frame_0006.jpg 26
webcam/images/scissors/frame_0007.jpg 26
webcam/images/scissors/frame_0008.jpg 26
webcam/images/scissors/frame_0009.jpg 26
webcam/images/scissors/frame_0010.jpg 26
webcam/images/scissors/frame_0011.jpg 26
webcam/images/scissors/frame_0012.jpg 26
webcam/images/scissors/frame_0013.jpg 26
webcam/images/scissors/frame_0014.jpg 26
webcam/images/scissors/frame_0015.jpg 26
webcam/images/scissors/frame_0016.jpg 26
webcam/images/scissors/frame_0017.jpg 26
webcam/images/scissors/frame_0018.jpg 26
webcam/images/scissors/frame_0019.jpg 26
webcam/images/scissors/frame_0020.jpg 26
webcam/images/scissors/frame_0021.jpg 26
webcam/images/scissors/frame_0022.jpg 26
webcam/images/scissors/frame_0023.jpg 26
webcam/images/scissors/frame_0024.jpg 26
webcam/images/scissors/frame_0025.jpg 26
webcam/images/laptop_computer/frame_0001.jpg 12
webcam/images/laptop_computer/frame_0002.jpg 12
webcam/images/laptop_computer/frame_0003.jpg 12
webcam/images/laptop_computer/frame_0004.jpg 12
webcam/images/laptop_computer/frame_0005.jpg 12
webcam/images/laptop_computer/frame_0006.jpg 12
webcam/images/laptop_computer/frame_0007.jpg 12
webcam/images/laptop_computer/frame_0008.jpg 12
webcam/images/laptop_computer/frame_0009.jpg 12
webcam/images/laptop_computer/frame_0010.jpg 12
webcam/images/laptop_computer/frame_0011.jpg 12
webcam/images/laptop_computer/frame_0012.jpg 12
webcam/images/laptop_computer/frame_0013.jpg 12
webcam/images/laptop_computer/frame_0014.jpg 12
webcam/images/laptop_computer/frame_0015.jpg 12
webcam/images/laptop_computer/frame_0016.jpg 12
webcam/images/laptop_computer/frame_0017.jpg 12
webcam/images/laptop_computer/frame_0018.jpg 12
webcam/images/laptop_computer/frame_0019.jpg 12
webcam/images/laptop_computer/frame_0020.jpg 12
webcam/images/laptop_computer/frame_0021.jpg 12
webcam/images/laptop_computer/frame_0022.jpg 12
webcam/images/laptop_computer/frame_0023.jpg 12
webcam/images/laptop_computer/frame_0024.jpg 12
webcam/images/laptop_computer/frame_0025.jpg 12
webcam/images/laptop_computer/frame_0026.jpg 12
webcam/images/laptop_computer/frame_0027.jpg 12
webcam/images/laptop_computer/frame_0028.jpg 12
webcam/images/laptop_computer/frame_0029.jpg 12
webcam/images/laptop_computer/frame_0030.jpg 12
webcam/images/mouse/frame_0001.jpg 16
webcam/images/mouse/frame_0002.jpg 16
webcam/images/mouse/frame_0003.jpg 16
webcam/images/mouse/frame_0004.jpg 16
webcam/images/mouse/frame_0005.jpg 16
webcam/images/mouse/frame_0006.jpg 16
webcam/images/mouse/frame_0007.jpg 16
webcam/images/mouse/frame_0008.jpg 16
webcam/images/mouse/frame_0009.jpg 16
webcam/images/mouse/frame_0010.jpg 16
webcam/images/mouse/frame_0011.jpg 16
webcam/images/mouse/frame_0012.jpg 16
webcam/images/mouse/frame_0013.jpg 16
webcam/images/mouse/frame_0014.jpg 16
webcam/images/mouse/frame_0015.jpg 16
webcam/images/mouse/frame_0016.jpg 16
webcam/images/mouse/frame_0017.jpg 16
webcam/images/mouse/frame_0018.jpg 16
webcam/images/mouse/frame_0019.jpg 16
webcam/images/mouse/frame_0020.jpg 16
webcam/images/mouse/frame_0021.jpg 16
webcam/images/mouse/frame_0022.jpg 16
webcam/images/mouse/frame_0023.jpg 16
webcam/images/mouse/frame_0024.jpg 16
webcam/images/mouse/frame_0025.jpg 16
webcam/images/mouse/frame_0026.jpg 16
webcam/images/mouse/frame_0027.jpg 16
webcam/images/mouse/frame_0028.jpg 16
webcam/images/mouse/frame_0029.jpg 16
webcam/images/mouse/frame_0030.jpg 16
webcam/images/monitor/frame_0001.jpg 15
webcam/images/monitor/frame_0002.jpg 15
webcam/images/monitor/frame_0003.jpg 15
webcam/images/monitor/frame_0004.jpg 15
webcam/images/monitor/frame_0005.jpg 15
webcam/images/monitor/frame_0006.jpg 15
webcam/images/monitor/frame_0007.jpg 15
webcam/images/monitor/frame_0008.jpg 15
webcam/images/monitor/frame_0009.jpg 15
webcam/images/monitor/frame_0010.jpg 15
webcam/images/monitor/frame_0011.jpg 15
webcam/images/monitor/frame_0012.jpg 15
webcam/images/monitor/frame_0013.jpg 15
webcam/images/monitor/frame_0014.jpg 15
webcam/images/monitor/frame_0015.jpg 15
webcam/images/monitor/frame_0016.jpg 15
webcam/images/monitor/frame_0017.jpg 15
webcam/images/monitor/frame_0018.jpg 15
webcam/images/monitor/frame_0019.jpg 15
webcam/images/monitor/frame_0020.jpg 15
webcam/images/monitor/frame_0021.jpg 15
webcam/images/monitor/frame_0022.jpg 15
webcam/images/monitor/frame_0023.jpg 15
webcam/images/monitor/frame_0024.jpg 15
webcam/images/monitor/frame_0025.jpg 15
webcam/images/monitor/frame_0026.jpg 15
webcam/images/monitor/frame_0027.jpg 15
webcam/images/monitor/frame_0028.jpg 15
webcam/images/monitor/frame_0029.jpg 15
webcam/images/monitor/frame_0030.jpg 15
webcam/images/monitor/frame_0031.jpg 15
webcam/images/monitor/frame_0032.jpg 15
webcam/images/monitor/frame_0033.jpg 15
webcam/images/monitor/frame_0034.jpg 15
webcam/images/monitor/frame_0035.jpg 15
webcam/images/monitor/frame_0036.jpg 15
webcam/images/monitor/frame_0037.jpg 15
webcam/images/monitor/frame_0038.jpg 15
webcam/images/monitor/frame_0039.jpg 15
webcam/images/monitor/frame_0040.jpg 15
webcam/images/monitor/frame_0041.jpg 15
webcam/images/monitor/frame_0042.jpg 15
webcam/images/monitor/frame_0043.jpg 15
webcam/images/mug/frame_0001.jpg 17
webcam/images/mug/frame_0002.jpg 17
webcam/images/mug/frame_0003.jpg 17
webcam/images/mug/frame_0004.jpg 17
webcam/images/mug/frame_0005.jpg 17
webcam/images/mug/frame_0006.jpg 17
webcam/images/mug/frame_0007.jpg 17
webcam/images/mug/frame_0008.jpg 17
webcam/images/mug/frame_0009.jpg 17
webcam/images/mug/frame_0010.jpg 17
webcam/images/mug/frame_0011.jpg 17
webcam/images/mug/frame_0012.jpg 17
webcam/images/mug/frame_0013.jpg 17
webcam/images/mug/frame_0014.jpg 17
webcam/images/mug/frame_0015.jpg 17
webcam/images/mug/frame_0016.jpg 17
webcam/images/mug/frame_0017.jpg 17
webcam/images/mug/frame_0018.jpg 17
webcam/images/mug/frame_0019.jpg 17
webcam/images/mug/frame_0020.jpg 17
webcam/images/mug/frame_0021.jpg 17
webcam/images/mug/frame_0022.jpg 17
webcam/images/mug/frame_0023.jpg 17
webcam/images/mug/frame_0024.jpg 17
webcam/images/mug/frame_0025.jpg 17
webcam/images/mug/frame_0026.jpg 17
webcam/images/mug/frame_0027.jpg 17
webcam/images/tape_dispenser/frame_0001.jpg 29
webcam/images/tape_dispenser/frame_0002.jpg 29
webcam/images/tape_dispenser/frame_0003.jpg 29
webcam/images/tape_dispenser/frame_0004.jpg 29
webcam/images/tape_dispenser/frame_0005.jpg 29
webcam/images/tape_dispenser/frame_0006.jpg 29
webcam/images/tape_dispenser/frame_0007.jpg 29
webcam/images/tape_dispenser/frame_0008.jpg 29
webcam/images/tape_dispenser/frame_0009.jpg 29
webcam/images/tape_dispenser/frame_0010.jpg 29
webcam/images/tape_dispenser/frame_0011.jpg 29
webcam/images/tape_dispenser/frame_0012.jpg 29
webcam/images/tape_dispenser/frame_0013.jpg 29
webcam/images/tape_dispenser/frame_0014.jpg 29
webcam/images/tape_dispenser/frame_0015.jpg 29
webcam/images/tape_dispenser/frame_0016.jpg 29
webcam/images/tape_dispenser/frame_0017.jpg 29
webcam/images/tape_dispenser/frame_0018.jpg 29
webcam/images/tape_dispenser/frame_0019.jpg 29
webcam/images/tape_dispenser/frame_0020.jpg 29
webcam/images/tape_dispenser/frame_0021.jpg 29
webcam/images/tape_dispenser/frame_0022.jpg 29
webcam/images/tape_dispenser/frame_0023.jpg 29
webcam/images/pen/frame_0001.jpg 19
webcam/images/pen/frame_0002.jpg 19
webcam/images/pen/frame_0003.jpg 19
webcam/images/pen/frame_0004.jpg 19
webcam/images/pen/frame_0005.jpg 19
webcam/images/pen/frame_0006.jpg 19
webcam/images/pen/frame_0007.jpg 19
webcam/images/pen/frame_0008.jpg 19
webcam/images/pen/frame_0009.jpg 19
webcam/images/pen/frame_0010.jpg 19
webcam/images/pen/frame_0011.jpg 19
webcam/images/pen/frame_0012.jpg 19
webcam/images/pen/frame_0013.jpg 19
webcam/images/pen/frame_0014.jpg 19
webcam/images/pen/frame_0015.jpg 19
webcam/images/pen/frame_0016.jpg 19
webcam/images/pen/frame_0017.jpg 19
webcam/images/pen/frame_0018.jpg 19
webcam/images/pen/frame_0019.jpg 19
webcam/images/pen/frame_0020.jpg 19
webcam/images/pen/frame_0021.jpg 19
webcam/images/pen/frame_0022.jpg 19
webcam/images/pen/frame_0023.jpg 19
webcam/images/pen/frame_0024.jpg 19
webcam/images/pen/frame_0025.jpg 19
webcam/images/pen/frame_0026.jpg 19
webcam/images/pen/frame_0027.jpg 19
webcam/images/pen/frame_0028.jpg 19
webcam/images/pen/frame_0029.jpg 19
webcam/images/pen/frame_0030.jpg 19
webcam/images/pen/frame_0031.jpg 19
webcam/images/pen/frame_0032.jpg 19
webcam/images/bike/frame_0001.jpg 1
webcam/images/bike/frame_0002.jpg 1
webcam/images/bike/frame_0003.jpg 1
webcam/images/bike/frame_0004.jpg 1
webcam/images/bike/frame_0005.jpg 1
webcam/images/bike/frame_0006.jpg 1
webcam/images/bike/frame_0007.jpg 1
webcam/images/bike/frame_0008.jpg 1
webcam/images/bike/frame_0009.jpg 1
webcam/images/bike/frame_0010.jpg 1
webcam/images/bike/frame_0011.jpg 1
webcam/images/bike/frame_0012.jpg 1
webcam/images/bike/frame_0013.jpg 1
webcam/images/bike/frame_0014.jpg 1
webcam/images/bike/frame_0015.jpg 1
webcam/images/bike/frame_0016.jpg 1
webcam/images/bike/frame_0017.jpg 1
webcam/images/bike/frame_0018.jpg 1
webcam/images/bike/frame_0019.jpg 1
webcam/images/bike/frame_0020.jpg 1
webcam/images/bike/frame_0021.jpg 1
webcam/images/punchers/frame_0001.jpg 23
webcam/images/punchers/frame_0002.jpg 23
webcam/images/punchers/frame_0003.jpg 23
webcam/images/punchers/frame_0004.jpg 23
webcam/images/punchers/frame_0005.jpg 23
webcam/images/punchers/frame_0006.jpg 23
webcam/images/punchers/frame_0007.jpg 23
webcam/images/punchers/frame_0008.jpg 23
webcam/images/punchers/frame_0009.jpg 23
webcam/images/punchers/frame_0010.jpg 23
webcam/images/punchers/frame_0011.jpg 23
webcam/images/punchers/frame_0012.jpg 23
webcam/images/punchers/frame_0013.jpg 23
webcam/images/punchers/frame_0014.jpg 23
webcam/images/punchers/frame_0015.jpg 23
webcam/images/punchers/frame_0016.jpg 23
webcam/images/punchers/frame_0017.jpg 23
webcam/images/punchers/frame_0018.jpg 23
webcam/images/punchers/frame_0019.jpg 23
webcam/images/punchers/frame_0020.jpg 23
webcam/images/punchers/frame_0021.jpg 23
webcam/images/punchers/frame_0022.jpg 23
webcam/images/punchers/frame_0023.jpg 23
webcam/images/punchers/frame_0024.jpg 23
webcam/images/punchers/frame_0025.jpg 23
webcam/images/punchers/frame_0026.jpg 23
webcam/images/punchers/frame_0027.jpg 23
webcam/images/back_pack/frame_0001.jpg 0
webcam/images/back_pack/frame_0002.jpg 0
webcam/images/back_pack/frame_0003.jpg 0
webcam/images/back_pack/frame_0004.jpg 0
webcam/images/back_pack/frame_0005.jpg 0
webcam/images/back_pack/frame_0006.jpg 0
webcam/images/back_pack/frame_0007.jpg 0
webcam/images/back_pack/frame_0008.jpg 0
webcam/images/back_pack/frame_0009.jpg 0
webcam/images/back_pack/frame_0010.jpg 0
webcam/images/back_pack/frame_0011.jpg 0
webcam/images/back_pack/frame_0012.jpg 0
webcam/images/back_pack/frame_0013.jpg 0
webcam/images/back_pack/frame_0014.jpg 0
webcam/images/back_pack/frame_0015.jpg 0
webcam/images/back_pack/frame_0016.jpg 0
webcam/images/back_pack/frame_0017.jpg 0
webcam/images/back_pack/frame_0018.jpg 0
webcam/images/back_pack/frame_0019.jpg 0
webcam/images/back_pack/frame_0020.jpg 0
webcam/images/back_pack/frame_0021.jpg 0
webcam/images/back_pack/frame_0022.jpg 0
webcam/images/back_pack/frame_0023.jpg 0
webcam/images/back_pack/frame_0024.jpg 0
webcam/images/back_pack/frame_0025.jpg 0
webcam/images/back_pack/frame_0026.jpg 0
webcam/images/back_pack/frame_0027.jpg 0
webcam/images/back_pack/frame_0028.jpg 0
webcam/images/back_pack/frame_0029.jpg 0
webcam/images/desktop_computer/frame_0001.jpg 8
webcam/images/desktop_computer/frame_0002.jpg 8
webcam/images/desktop_computer/frame_0003.jpg 8
webcam/images/desktop_computer/frame_0004.jpg 8
webcam/images/desktop_computer/frame_0005.jpg 8
webcam/images/desktop_computer/frame_0006.jpg 8
webcam/images/desktop_computer/frame_0007.jpg 8
webcam/images/desktop_computer/frame_0008.jpg 8
webcam/images/desktop_computer/frame_0009.jpg 8
webcam/images/desktop_computer/frame_0010.jpg 8
webcam/images/desktop_computer/frame_0011.jpg 8
webcam/images/desktop_computer/frame_0012.jpg 8
webcam/images/desktop_computer/frame_0013.jpg 8
webcam/images/desktop_computer/frame_0014.jpg 8
webcam/images/desktop_computer/frame_0015.jpg 8
webcam/images/desktop_computer/frame_0016.jpg 8
webcam/images/desktop_computer/frame_0017.jpg 8
webcam/images/desktop_computer/frame_0018.jpg 8
webcam/images/desktop_computer/frame_0019.jpg 8
webcam/images/desktop_computer/frame_0020.jpg 8
webcam/images/desktop_computer/frame_0021.jpg 8
webcam/images/speaker/frame_0001.jpg 27
webcam/images/speaker/frame_0002.jpg 27
webcam/images/speaker/frame_0003.jpg 27
webcam/images/speaker/frame_0004.jpg 27
webcam/images/speaker/frame_0005.jpg 27
webcam/images/speaker/frame_0006.jpg 27
webcam/images/speaker/frame_0007.jpg 27
webcam/images/speaker/frame_0008.jpg 27
webcam/images/speaker/frame_0009.jpg 27
webcam/images/speaker/frame_0010.jpg 27
webcam/images/speaker/frame_0011.jpg 27
webcam/images/speaker/frame_0012.jpg 27
webcam/images/speaker/frame_0013.jpg 27
webcam/images/speaker/frame_0014.jpg 27
webcam/images/speaker/frame_0015.jpg 27
webcam/images/speaker/frame_0016.jpg 27
webcam/images/speaker/frame_0017.jpg 27
webcam/images/speaker/frame_0018.jpg 27
webcam/images/speaker/frame_0019.jpg 27
webcam/images/speaker/frame_0020.jpg 27
webcam/images/speaker/frame_0021.jpg 27
webcam/images/speaker/frame_0022.jpg 27
webcam/images/speaker/frame_0023.jpg 27
webcam/images/speaker/frame_0024.jpg 27
webcam/images/speaker/frame_0025.jpg 27
webcam/images/speaker/frame_0026.jpg 27
webcam/images/speaker/frame_0027.jpg 27
webcam/images/speaker/frame_0028.jpg 27
webcam/images/speaker/frame_0029.jpg 27
webcam/images/speaker/frame_0030.jpg 27
webcam/images/mobile_phone/frame_0001.jpg 14
webcam/images/mobile_phone/frame_0002.jpg 14
webcam/images/mobile_phone/frame_0003.jpg 14
webcam/images/mobile_phone/frame_0004.jpg 14
webcam/images/mobile_phone/frame_0005.jpg 14
webcam/images/mobile_phone/frame_0006.jpg 14
webcam/images/mobile_phone/frame_0007.jpg 14
webcam/images/mobile_phone/frame_0008.jpg 14
webcam/images/mobile_phone/frame_0009.jpg 14
webcam/images/mobile_phone/frame_0010.jpg 14
webcam/images/mobile_phone/frame_0011.jpg 14
webcam/images/mobile_phone/frame_0012.jpg 14
webcam/images/mobile_phone/frame_0013.jpg 14
webcam/images/mobile_phone/frame_0014.jpg 14
webcam/images/mobile_phone/frame_0015.jpg 14
webcam/images/mobile_phone/frame_0016.jpg 14
webcam/images/mobile_phone/frame_0017.jpg 14
webcam/images/mobile_phone/frame_0018.jpg 14
webcam/images/mobile_phone/frame_0019.jpg 14
webcam/images/mobile_phone/frame_0020.jpg 14
webcam/images/mobile_phone/frame_0021.jpg 14
webcam/images/mobile_phone/frame_0022.jpg 14
webcam/images/mobile_phone/frame_0023.jpg 14
webcam/images/mobile_phone/frame_0024.jpg 14
webcam/images/mobile_phone/frame_0025.jpg 14
webcam/images/mobile_phone/frame_0026.jpg 14
webcam/images/mobile_phone/frame_0027.jpg 14
webcam/images/mobile_phone/frame_0028.jpg 14
webcam/images/mobile_phone/frame_0029.jpg 14
webcam/images/mobile_phone/frame_0030.jpg 14
webcam/images/paper_notebook/frame_0001.jpg 18
webcam/images/paper_notebook/frame_0002.jpg 18
webcam/images/paper_notebook/frame_0003.jpg 18
webcam/images/paper_notebook/frame_0004.jpg 18
webcam/images/paper_notebook/frame_0005.jpg 18
webcam/images/paper_notebook/frame_0006.jpg 18
webcam/images/paper_notebook/frame_0007.jpg 18
webcam/images/paper_notebook/frame_0008.jpg 18
webcam/images/paper_notebook/frame_0009.jpg 18
webcam/images/paper_notebook/frame_0010.jpg 18
webcam/images/paper_notebook/frame_0011.jpg 18
webcam/images/paper_notebook/frame_0012.jpg 18
webcam/images/paper_notebook/frame_0013.jpg 18
webcam/images/paper_notebook/frame_0014.jpg 18
webcam/images/paper_notebook/frame_0015.jpg 18
webcam/images/paper_notebook/frame_0016.jpg 18
webcam/images/paper_notebook/frame_0017.jpg 18
webcam/images/paper_notebook/frame_0018.jpg 18
webcam/images/paper_notebook/frame_0019.jpg 18
webcam/images/paper_notebook/frame_0020.jpg 18
webcam/images/paper_notebook/frame_0021.jpg 18
webcam/images/paper_notebook/frame_0022.jpg 18
webcam/images/paper_notebook/frame_0023.jpg 18
webcam/images/paper_notebook/frame_0024.jpg 18
webcam/images/paper_notebook/frame_0025.jpg 18
webcam/images/paper_notebook/frame_0026.jpg 18
webcam/images/paper_notebook/frame_0027.jpg 18
webcam/images/paper_notebook/frame_0028.jpg 18
webcam/images/ruler/frame_0001.jpg 25
webcam/images/ruler/frame_0002.jpg 25
webcam/images/ruler/frame_0003.jpg 25
webcam/images/ruler/frame_0004.jpg 25
webcam/images/ruler/frame_0005.jpg 25
webcam/images/ruler/frame_0006.jpg 25
webcam/images/ruler/frame_0007.jpg 25
webcam/images/ruler/frame_0008.jpg 25
webcam/images/ruler/frame_0009.jpg 25
webcam/images/ruler/frame_0010.jpg 25
webcam/images/ruler/frame_0011.jpg 25
webcam/images/letter_tray/frame_0001.jpg 13
webcam/images/letter_tray/frame_0002.jpg 13
webcam/images/letter_tray/frame_0003.jpg 13
webcam/images/letter_tray/frame_0004.jpg 13
webcam/images/letter_tray/frame_0005.jpg 13
webcam/images/letter_tray/frame_0006.jpg 13
webcam/images/letter_tray/frame_0007.jpg 13
webcam/images/letter_tray/frame_0008.jpg 13
webcam/images/letter_tray/frame_0009.jpg 13
webcam/images/letter_tray/frame_0010.jpg 13
webcam/images/letter_tray/frame_0011.jpg 13
webcam/images/letter_tray/frame_0012.jpg 13
webcam/images/letter_tray/frame_0013.jpg 13
webcam/images/letter_tray/frame_0014.jpg 13
webcam/images/letter_tray/frame_0015.jpg 13
webcam/images/letter_tray/frame_0016.jpg 13
webcam/images/letter_tray/frame_0017.jpg 13
webcam/images/letter_tray/frame_0018.jpg 13
webcam/images/letter_tray/frame_0019.jpg 13
webcam/images/file_cabinet/frame_0001.jpg 9
webcam/images/file_cabinet/frame_0002.jpg 9
webcam/images/file_cabinet/frame_0003.jpg 9
webcam/images/file_cabinet/frame_0004.jpg 9
webcam/images/file_cabinet/frame_0005.jpg 9
webcam/images/file_cabinet/frame_0006.jpg 9
webcam/images/file_cabinet/frame_0007.jpg 9
webcam/images/file_cabinet/frame_0008.jpg 9
webcam/images/file_cabinet/frame_0009.jpg 9
webcam/images/file_cabinet/frame_0010.jpg 9
webcam/images/file_cabinet/frame_0011.jpg 9
webcam/images/file_cabinet/frame_0012.jpg 9
webcam/images/file_cabinet/frame_0013.jpg 9
webcam/images/file_cabinet/frame_0014.jpg 9
webcam/images/file_cabinet/frame_0015.jpg 9
webcam/images/file_cabinet/frame_0016.jpg 9
webcam/images/file_cabinet/frame_0017.jpg 9
webcam/images/file_cabinet/frame_0018.jpg 9
webcam/images/file_cabinet/frame_0019.jpg 9
webcam/images/phone/frame_0001.jpg 20
webcam/images/phone/frame_0002.jpg 20
webcam/images/phone/frame_0003.jpg 20
webcam/images/phone/frame_0004.jpg 20
webcam/images/phone/frame_0005.jpg 20
webcam/images/phone/frame_0006.jpg 20
webcam/images/phone/frame_0007.jpg 20
webcam/images/phone/frame_0008.jpg 20
webcam/images/phone/frame_0009.jpg 20
webcam/images/phone/frame_0010.jpg 20
webcam/images/phone/frame_0011.jpg 20
webcam/images/phone/frame_0012.jpg 20
webcam/images/phone/frame_0013.jpg 20
webcam/images/phone/frame_0014.jpg 20
webcam/images/phone/frame_0015.jpg 20
webcam/images/phone/frame_0016.jpg 20
webcam/images/bookcase/frame_0001.jpg 3
webcam/images/bookcase/frame_0002.jpg 3
webcam/images/bookcase/frame_0003.jpg 3
webcam/images/bookcase/frame_0004.jpg 3
webcam/images/bookcase/frame_0005.jpg 3
webcam/images/bookcase/frame_0006.jpg 3
webcam/images/bookcase/frame_0007.jpg 3
webcam/images/bookcase/frame_0008.jpg 3
webcam/images/bookcase/frame_0009.jpg 3
webcam/images/bookcase/frame_0010.jpg 3
webcam/images/bookcase/frame_0011.jpg 3
webcam/images/bookcase/frame_0012.jpg 3
webcam/images/projector/frame_0001.jpg 22
webcam/images/projector/frame_0002.jpg 22
webcam/images/projector/frame_0003.jpg 22
webcam/images/projector/frame_0004.jpg 22
webcam/images/projector/frame_0005.jpg 22
webcam/images/projector/frame_0006.jpg 22
webcam/images/projector/frame_0007.jpg 22
webcam/images/projector/frame_0008.jpg 22
webcam/images/projector/frame_0009.jpg 22
webcam/images/projector/frame_0010.jpg 22
webcam/images/projector/frame_0011.jpg 22
webcam/images/projector/frame_0012.jpg 22
webcam/images/projector/frame_0013.jpg 22
webcam/images/projector/frame_0014.jpg 22
webcam/images/projector/frame_0015.jpg 22
webcam/images/projector/frame_0016.jpg 22
webcam/images/projector/frame_0017.jpg 22
webcam/images/projector/frame_0018.jpg 22
webcam/images/projector/frame_0019.jpg 22
webcam/images/projector/frame_0020.jpg 22
webcam/images/projector/frame_0021.jpg 22
webcam/images/projector/frame_0022.jpg 22
webcam/images/projector/frame_0023.jpg 22
webcam/images/projector/frame_0024.jpg 22
webcam/images/projector/frame_0025.jpg 22
webcam/images/projector/frame_0026.jpg 22
webcam/images/projector/frame_0027.jpg 22
webcam/images/projector/frame_0028.jpg 22
webcam/images/projector/frame_0029.jpg 22
webcam/images/projector/frame_0030.jpg 22
webcam/images/stapler/frame_0001.jpg 28
webcam/images/stapler/frame_0002.jpg 28
webcam/images/stapler/frame_0003.jpg 28
webcam/images/stapler/frame_0004.jpg 28
webcam/images/stapler/frame_0005.jpg 28
webcam/images/stapler/frame_0006.jpg 28
webcam/images/stapler/frame_0007.jpg 28
webcam/images/stapler/frame_0008.jpg 28
webcam/images/stapler/frame_0009.jpg 28
webcam/images/stapler/frame_0010.jpg 28
webcam/images/stapler/frame_0011.jpg 28
webcam/images/stapler/frame_0012.jpg 28
webcam/images/stapler/frame_0013.jpg 28
webcam/images/stapler/frame_0014.jpg 28
webcam/images/stapler/frame_0015.jpg 28
webcam/images/stapler/frame_0016.jpg 28
webcam/images/stapler/frame_0017.jpg 28
webcam/images/stapler/frame_0018.jpg 28
webcam/images/stapler/frame_0019.jpg 28
webcam/images/stapler/frame_0020.jpg 28
webcam/images/stapler/frame_0021.jpg 28
webcam/images/stapler/frame_0022.jpg 28
webcam/images/stapler/frame_0023.jpg 28
webcam/images/stapler/frame_0024.jpg 28
webcam/images/trash_can/frame_0001.jpg 30
webcam/images/trash_can/frame_0002.jpg 30
webcam/images/trash_can/frame_0003.jpg 30
webcam/images/trash_can/frame_0004.jpg 30
webcam/images/trash_can/frame_0005.jpg 30
webcam/images/trash_can/frame_0006.jpg 30
webcam/images/trash_can/frame_0007.jpg 30
webcam/images/trash_can/frame_0008.jpg 30
webcam/images/trash_can/frame_0009.jpg 30
webcam/images/trash_can/frame_0010.jpg 30
webcam/images/trash_can/frame_0011.jpg 30
webcam/images/trash_can/frame_0012.jpg 30
webcam/images/trash_can/frame_0013.jpg 30
webcam/images/trash_can/frame_0014.jpg 30
webcam/images/trash_can/frame_0015.jpg 30
webcam/images/trash_can/frame_0016.jpg 30
webcam/images/trash_can/frame_0017.jpg 30
webcam/images/trash_can/frame_0018.jpg 30
webcam/images/trash_can/frame_0019.jpg 30
webcam/images/trash_can/frame_0020.jpg 30
webcam/images/trash_can/frame_0021.jpg 30
webcam/images/bike_helmet/frame_0001.jpg 2
webcam/images/bike_helmet/frame_0002.jpg 2
webcam/images/bike_helmet/frame_0003.jpg 2
webcam/images/bike_helmet/frame_0004.jpg 2
webcam/images/bike_helmet/frame_0005.jpg 2
webcam/images/bike_helmet/frame_0006.jpg 2
webcam/images/bike_helmet/frame_0007.jpg 2
webcam/images/bike_helmet/frame_0008.jpg 2
webcam/images/bike_helmet/frame_0009.jpg 2
webcam/images/bike_helmet/frame_0010.jpg 2
webcam/images/bike_helmet/frame_0011.jpg 2
webcam/images/bike_helmet/frame_0012.jpg 2
webcam/images/bike_helmet/frame_0013.jpg 2
webcam/images/bike_helmet/frame_0014.jpg 2
webcam/images/bike_helmet/frame_0015.jpg 2
webcam/images/bike_helmet/frame_0016.jpg 2
webcam/images/bike_helmet/frame_0017.jpg 2
webcam/images/bike_helmet/frame_0018.jpg 2
webcam/images/bike_helmet/frame_0019.jpg 2
webcam/images/bike_helmet/frame_0020.jpg 2
webcam/images/bike_helmet/frame_0021.jpg 2
webcam/images/bike_helmet/frame_0022.jpg 2
webcam/images/bike_helmet/frame_0023.jpg 2
webcam/images/bike_helmet/frame_0024.jpg 2
webcam/images/bike_helmet/frame_0025.jpg 2
webcam/images/bike_helmet/frame_0026.jpg 2
webcam/images/bike_helmet/frame_0027.jpg 2
webcam/images/bike_helmet/frame_0028.jpg 2
webcam/images/headphones/frame_0001.jpg 10
webcam/images/headphones/frame_0002.jpg 10
webcam/images/headphones/frame_0003.jpg 10
webcam/images/headphones/frame_0004.jpg 10
webcam/images/headphones/frame_0005.jpg 10
webcam/images/headphones/frame_0006.jpg 10
webcam/images/headphones/frame_0007.jpg 10
webcam/images/headphones/frame_0008.jpg 10
webcam/images/headphones/frame_0009.jpg 10
webcam/images/headphones/frame_0010.jpg 10
webcam/images/headphones/frame_0011.jpg 10
webcam/images/headphones/frame_0012.jpg 10
webcam/images/headphones/frame_0013.jpg 10
webcam/images/headphones/frame_0014.jpg 10
webcam/images/headphones/frame_0015.jpg 10
webcam/images/headphones/frame_0016.jpg 10
webcam/images/headphones/frame_0017.jpg 10
webcam/images/headphones/frame_0018.jpg 10
webcam/images/headphones/frame_0019.jpg 10
webcam/images/headphones/frame_0020.jpg 10
webcam/images/headphones/frame_0021.jpg 10
webcam/images/headphones/frame_0022.jpg 10
webcam/images/headphones/frame_0023.jpg 10
webcam/images/headphones/frame_0024.jpg 10
webcam/images/headphones/frame_0025.jpg 10
webcam/images/headphones/frame_0026.jpg 10
webcam/images/headphones/frame_0027.jpg 10
webcam/images/desk_lamp/frame_0001.jpg 7
webcam/images/desk_lamp/frame_0002.jpg 7
webcam/images/desk_lamp/frame_0003.jpg 7
webcam/images/desk_lamp/frame_0004.jpg 7
webcam/images/desk_lamp/frame_0005.jpg 7
webcam/images/desk_lamp/frame_0006.jpg 7
webcam/images/desk_lamp/frame_0007.jpg 7
webcam/images/desk_lamp/frame_0008.jpg 7
webcam/images/desk_lamp/frame_0009.jpg 7
webcam/images/desk_lamp/frame_0010.jpg 7
webcam/images/desk_lamp/frame_0011.jpg 7
webcam/images/desk_lamp/frame_0012.jpg 7
webcam/images/desk_lamp/frame_0013.jpg 7
webcam/images/desk_lamp/frame_0014.jpg 7
webcam/images/desk_lamp/frame_0015.jpg 7
webcam/images/desk_lamp/frame_0016.jpg 7
webcam/images/desk_lamp/frame_0017.jpg 7
webcam/images/desk_lamp/frame_0018.jpg 7
webcam/images/desk_chair/frame_0001.jpg 6
|
||||
webcam/images/desk_chair/frame_0002.jpg 6
|
||||
webcam/images/desk_chair/frame_0003.jpg 6
|
||||
webcam/images/desk_chair/frame_0004.jpg 6
|
||||
webcam/images/desk_chair/frame_0005.jpg 6
|
||||
webcam/images/desk_chair/frame_0006.jpg 6
|
||||
webcam/images/desk_chair/frame_0007.jpg 6
|
||||
webcam/images/desk_chair/frame_0008.jpg 6
|
||||
webcam/images/desk_chair/frame_0009.jpg 6
|
||||
webcam/images/desk_chair/frame_0010.jpg 6
|
||||
webcam/images/desk_chair/frame_0011.jpg 6
|
||||
webcam/images/desk_chair/frame_0012.jpg 6
|
||||
webcam/images/desk_chair/frame_0013.jpg 6
|
||||
webcam/images/desk_chair/frame_0014.jpg 6
|
||||
webcam/images/desk_chair/frame_0015.jpg 6
|
||||
webcam/images/desk_chair/frame_0016.jpg 6
|
||||
webcam/images/desk_chair/frame_0017.jpg 6
|
||||
webcam/images/desk_chair/frame_0018.jpg 6
|
||||
webcam/images/desk_chair/frame_0019.jpg 6
|
||||
webcam/images/desk_chair/frame_0020.jpg 6
|
||||
webcam/images/desk_chair/frame_0021.jpg 6
|
||||
webcam/images/desk_chair/frame_0022.jpg 6
|
||||
webcam/images/desk_chair/frame_0023.jpg 6
|
||||
webcam/images/desk_chair/frame_0024.jpg 6
|
||||
webcam/images/desk_chair/frame_0025.jpg 6
|
||||
webcam/images/desk_chair/frame_0026.jpg 6
|
||||
webcam/images/desk_chair/frame_0027.jpg 6
|
||||
webcam/images/desk_chair/frame_0028.jpg 6
|
||||
webcam/images/desk_chair/frame_0029.jpg 6
|
||||
webcam/images/desk_chair/frame_0030.jpg 6
|
||||
webcam/images/desk_chair/frame_0031.jpg 6
|
||||
webcam/images/desk_chair/frame_0032.jpg 6
|
||||
webcam/images/desk_chair/frame_0033.jpg 6
|
||||
webcam/images/desk_chair/frame_0034.jpg 6
|
||||
webcam/images/desk_chair/frame_0035.jpg 6
|
||||
webcam/images/desk_chair/frame_0036.jpg 6
|
||||
webcam/images/desk_chair/frame_0037.jpg 6
|
||||
webcam/images/desk_chair/frame_0038.jpg 6
|
||||
webcam/images/desk_chair/frame_0039.jpg 6
|
||||
webcam/images/desk_chair/frame_0040.jpg 6
|
||||
webcam/images/bottle/frame_0001.jpg 4
|
||||
webcam/images/bottle/frame_0002.jpg 4
|
||||
webcam/images/bottle/frame_0003.jpg 4
|
||||
webcam/images/bottle/frame_0004.jpg 4
|
||||
webcam/images/bottle/frame_0005.jpg 4
|
||||
webcam/images/bottle/frame_0006.jpg 4
|
||||
webcam/images/bottle/frame_0007.jpg 4
|
||||
webcam/images/bottle/frame_0008.jpg 4
|
||||
webcam/images/bottle/frame_0009.jpg 4
|
||||
webcam/images/bottle/frame_0010.jpg 4
|
||||
webcam/images/bottle/frame_0011.jpg 4
|
||||
webcam/images/bottle/frame_0012.jpg 4
|
||||
webcam/images/bottle/frame_0013.jpg 4
|
||||
webcam/images/bottle/frame_0014.jpg 4
|
||||
webcam/images/bottle/frame_0015.jpg 4
|
||||
webcam/images/bottle/frame_0016.jpg 4
|
@ -0,0 +1,234 @@
#from __future__ import print_function, division

import scipy.stats
import numpy as np
import os
import os.path
import pickle
import random
import sys
import time
import torch
from PIL import Image
from torch.utils.data import Dataset
import torch.nn as nn
from torchvision import datasets, transforms


def write_list(f, l):
    f.write(",".join(map(str, l)) + "\n")
    f.flush()


def sample_ratios(samples, labels, ratios=None):
    if ratios is not None:
        selected_idx = []
        for i, ratio in enumerate(ratios):
            images_i = [j for j in range(
                labels.shape[0]) if labels[j].item() == i]
            num_ = len(images_i)
            idx = np.random.choice(
                num_, int(ratio * num_), replace=False)
            selected_idx.extend(np.array(images_i)[idx].tolist())
        return samples[selected_idx, :, :], labels[selected_idx]

    return samples, labels
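
`sample_ratios` keeps a class-dependent fraction of the data, which is how the subsampled datasets are built. A minimal self-contained sketch with toy arrays (shapes and class counts here are illustrative assumptions, not the paper's data):

```python
import numpy as np

def sample_ratios(samples, labels, ratios=None):
    # Keep ratios[i] fraction of the class-i samples; None keeps everything.
    if ratios is not None:
        selected_idx = []
        for i, ratio in enumerate(ratios):
            images_i = [j for j in range(labels.shape[0]) if labels[j].item() == i]
            idx = np.random.choice(len(images_i), int(ratio * len(images_i)), replace=False)
            selected_idx.extend(np.array(images_i)[idx].tolist())
        return samples[selected_idx, :, :], labels[selected_idx]
    return samples, labels

np.random.seed(0)
samples = np.zeros((100, 28, 28))       # 100 fake 28x28 images
labels = np.repeat(np.arange(2), 50)    # 50 samples per class
sub_s, sub_l = sample_ratios(samples, labels, ratios=[1.0, 0.5])
# class 0 is kept in full (50 samples), class 1 is halved (25 samples)
```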


def image_classification_test_loaded(test_samples, test_labels, model, test_10crop=True, device='cpu'):
    with torch.no_grad():
        test_loss = 0
        correct = 0
        if test_10crop:
            len_test = test_labels[0].shape[0]
            for i in range(len_test):
                outputs = []
                for j in range(10):
                    data, target = test_samples[j][i].unsqueeze(
                        0), test_labels[j][i].unsqueeze(0)
                    _, output = model(data)
                    test_loss += nn.CrossEntropyLoss()(output, target).item()
                    outputs.append(nn.Softmax(dim=1)(output))
                outputs = sum(outputs)
                pred = torch.max(outputs, 1)[1]
                correct += pred.eq(target.data.cpu().view_as(pred)
                                   ).sum().item()
        else:
            len_test = test_labels.shape[0]
            bs = 72
            for i in range(int(len_test / bs)):
                data, target = torch.Tensor(
                    test_samples[bs*i:bs*(i+1)]).to(device), test_labels[bs*i:bs*(i+1)]
                output = model(data)
                test_loss += nn.CrossEntropyLoss()(output, target).item()
                pred = torch.max(output, 1)[1]
                correct += pred.eq(target.data.view_as(pred)).sum().item()
            # Last, incomplete batch of test samples
            data, target = torch.Tensor(
                test_samples[bs*(i+1):]).to(device), test_labels[bs*(i+1):]
            output = model(data)
            test_loss += nn.CrossEntropyLoss()(output, target).item()
            pred = torch.max(output, 1)[1]
            correct += pred.eq(target.data.view_as(pred)).sum().item()
        accuracy = correct / len_test
        test_loss /= len_test
        return accuracy


def make_dataset(image_list, labels, ratios=None):
    if labels:
        len_ = len(image_list)
        images = [(image_list[i].strip(), labels[i, :]) for i in range(len_)]
    else:
        if len(image_list[0].split()) > 2:
            images = [(val.split()[0], np.array([int(la) for la in val.split()[1:]])) for val in image_list]
        else:
            images = [(val.split()[0], int(val.split()[1])) for val in image_list]
        if ratios:
            selected_images = []
            for i, ratio in enumerate(ratios):
                images_i = [img for img in images if img[1] == i]
                num_ = len(images_i)
                idx = np.random.choice(num_, int(ratio * num_), replace=False)
                for j in idx:
                    selected_images.append(images_i[j])
            return selected_images

    return images


def rgb_loader(path):
    with open(path, 'rb') as f:
        with Image.open(f) as img:
            return img.convert('RGB')


def l_loader(path):
    with open(path, 'rb') as f:
        with Image.open(f) as img:
            return img.convert('L')


def build_uspsmnist(l, path, root_folder, device='cpu'):
    dset_source = ImageList(open(l).readlines(), transform=transforms.Compose([
        transforms.Resize((28, 28)),
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,))
    ]), mode='L', root_folder=root_folder)
    loaded_dset_source = LoadedImageList(dset_source)
    with open(path, 'wb') as f:
        pickle.dump([loaded_dset_source.samples.numpy(),
                     loaded_dset_source.targets.numpy()], f)
    return loaded_dset_source.samples.to(device), loaded_dset_source.targets.to(device)


class ImageList(Dataset):
    def __init__(self, image_list, labels=None, transform=None, target_transform=None, mode='RGB', root_folder='', ratios=None):
        imgs = make_dataset(image_list, labels, ratios=ratios)
        if len(imgs) == 0:
            raise(RuntimeError("Found 0 images in subfolders of: " + root_folder + "\n"))

        self.root_folder = root_folder
        self.imgs = imgs
        self.transform = transform
        self.target_transform = target_transform
        if mode == 'RGB':
            self.loader = rgb_loader
        elif mode == 'L':
            self.loader = l_loader

    def __getitem__(self, index):
        path, target = self.imgs[index]
        img = self.loader(os.path.join(self.root_folder, path))
        if self.transform is not None:
            img = self.transform(img)
        if self.target_transform is not None:
            target = self.target_transform(target)
        return img, target

    def __len__(self):
        return len(self.imgs)


class LoadedImageList(Dataset):
    def __init__(self, image_list):
        self.image_list = image_list
        self.samples, self.targets = self._load_imgs()

    def _load_imgs(self):
        loaded_images, targets = [], []
        t = time.time()
        print("{} samples to process".format(len(self.image_list.imgs)))
        for i, (path, target) in enumerate(self.image_list.imgs):
            if i % 1000 == 999:
                print("{} samples in {} seconds".format(i, time.time() - t))
                sys.stdout.flush()
            img = self.image_list.loader(os.path.join(self.image_list.root_folder, path))
            if self.image_list.transform is not None:
                img = self.image_list.transform(img)
            if self.image_list.target_transform is not None:
                target = self.image_list.target_transform(target)
            loaded_images.append(img)
            targets.append(target)

        return torch.stack(loaded_images), torch.LongTensor(targets)

    def __len__(self):
        return len(self.image_list.imgs)


# List of fractions used to produce the datasets in the "Performance vs J_SD" paragraph (Figures 1 and 2).
# 50 entries; each gives the per-class subsampling fractions and the theoretical JSD they produce.
subsampling = [[[0.4052, 0.2398, 0.7001, 0.7178, 0.2132, 0.4887, 0.9849, 0.814, 0.8186, 0.2134], 0.032760409197316966],
               [[0.7872, 0.6374, 0.8413, 0.3612, 0.4427, 0.5154, 0.8423, 0.6748, 0.1634, 0.4098], 0.021267958321062656],
               [[0.2789, 0.5613, 0.9585, 0.2165, 0.9446, 0.869, 0.838, 0.9575, 0.1829, 0.7934], 0.03455597584383796],
               [[0.5017, 0.4318, 0.6189, 0.1269, 0.8788, 0.4277, 0.3875, 0.5378, 0.5405, 0.8286], 0.022225051878141944],
               [[0.2642, 0.1912, 0.1133, 0.7818, 0.9773, 0.9555, 0.741, 0.6051, 0.2096, 0.8654], 0.05013218434133124],
               [[0.2005, 0.579, 0.5852, 0.814, 0.3936, 0.6541, 0.6803, 0.3873, 0.5198, 0.7423], 0.01487691279068412],
               [[0.6315, 0.9692, 0.5519, 0.317, 0.662, 0.9937, 0.577, 0.895, 0.2807, 0.5573], 0.01786388435441286],
               [[0.6593, 0.1823, 0.9087, 0.323, 0.117, 0.324, 0.9852, 0.8075, 0.9799, 0.1527], 0.05721295477923642],
               [[0.9851, 0.5631, 0.3203, 0.227, 0.5126, 0.3978, 0.5219, 0.9644, 0.427, 0.8089], 0.02346584399291991],
               [[0.9434, 0.4466, 0.8503, 0.7235, 0.405, 0.8225, 0.2699, 0.6799, 0.5503, 0.8828], 0.01554582865827399],
               [[0.7742, 0.9749, 0.895, 0.3821, 0.7929, 0.5722, 0.5327, 0.4774, 0.7049, 0.1087], 0.026333395357855234],
               [[0.474, 0.4977, 0.4882, 0.7236, 0.4274, 0.505, 0.1939, 0.6071, 0.7333, 0.6566], 0.012314910203020654],
               [[0.7469, 0.1428, 0.7743, 0.261, 0.509, 0.6714, 0.3558, 0.2003, 0.2123, 0.7144], 0.03758103804746866],
               [[0.6078, 0.9925, 0.6132, 0.5107, 0.7755, 0.266, 0.9181, 0.5503, 0.9753, 0.787], 0.014173062912087135],
               [[0.4534, 0.3197, 0.641, 0.4995, 0.8052, 0.5237, 0.6757, 0.6292, 0.8155, 0.8259], 0.009193941950225593],
               [[0.3339, 0.7985, 0.3739, 0.8049, 0.4443, 0.5783, 0.6133, 0.7278, 0.9127, 0.7342], 0.011946408195133998],
               [[0.513, 0.6891, 0.8503, 0.7699, 0.8338, 0.8842, 0.2925, 0.5097, 0.8512, 0.9673], 0.011943243518739478],
               [[0.2608, 0.6524, 0.8783, 0.5355, 0.8859, 0.325, 0.8223, 0.2732, 0.3384, 0.733], 0.024573748979216877],
               [[0.4179, 0.8885, 0.778, 0.5943, 0.4063, 0.7704, 0.3782, 0.3344, 0.784, 0.7756], 0.01417171679861576],
               [[0.721, 0.4303, 0.5113, 0.8499, 0.6216, 0.3463, 0.7363, 0.438, 0.2705, 0.6982], 0.013848046051355988],
               [[0.9136, 0.964, 0.9482, 0.7971, 0.4281, 0.4715, 0.8894, 0.272, 0.3951, 0.7778], 0.019194643091215248],
               [[0.4694, 0.5974, 0.6888, 0.7073, 0.5244, 0.5828, 0.4859, 0.8798, 0.6837, 0.3665], 0.006838157001020787],
               [[0.5397, 0.8727, 0.3284, 0.781, 0.3955, 0.88, 0.3357, 0.8478, 0.8832, 0.4495], 0.017877156258920744],
               [[0.4096, 0.6769, 0.3566, 0.3863, 0.6231, 0.7828, 0.5524, 0.5988, 0.8546, 0.3649], 0.011476193912275862],
               [[0.6329, 0.5326, 0.8094, 0.556, 0.2853, 0.2693, 0.4511, 0.6068, 0.4359, 0.7832], 0.014025816871972494],
               [[0.6955, 0.9107, 0.9266, 0.7358, 0.7268, 0.5914, 0.7524, 0.9486, 0.6223, 0.92], 0.003294516531428644],
               [[0.6221, 0.5749, 0.4356, 0.2805, 0.3879, 0.8183, 0.9777, 0.3194, 0.6435, 0.7227], 0.017521988550845864],
               [[0.7197, 0.4776, 0.9488, 0.9869, 0.6423, 0.3082, 0.4404, 0.7307, 0.9051, 0.5027], 0.014615851037701265],
               [[0.7597, 0.9354, 0.6758, 0.9101, 0.8474, 0.6598, 0.7983, 0.6623, 0.8925, 0.7054], 0.0020994981073005157],
               [[0.8551, 0.6381, 0.6828, 0.8635, 0.6717, 0.6722, 0.8073, 0.7905, 0.9169, 0.7565], 0.0017818497157099332],
               [[0.7233, 0.7575, 0.8205, 0.6374, 0.6596, 0.7792, 0.8736, 0.8413, 0.807, 0.7333], 0.0011553039532261685],
               [[0.8778, 0.9661, 0.6376, 0.9134, 0.8296, 0.7027, 0.977, 0.8485, 0.6357, 0.7192], 0.0029014496151538102],
               [[0.9774, 0.9047, 0.9077, 0.9301, 0.9926, 0.93, 0.9202, 0.9849, 0.9231, 0.9612], 0.0001353783157801754],
               [[0.6304, 0.1495, 0.8439, 0.3944, 0.8693, 0.1466, 0.5952, 0.5705, 0.5734, 0.9339], 0.0337553984425215],
               [[0.6239, 0.9361, 0.7968, 0.7829, 0.7501, 0.1326, 0.6832, 0.1769, 0.616, 0.319], 0.033912799130997845],
               [[0.1608, 0.1367, 0.797, 0.655, 0.3387, 0.539, 0.4643, 0.6889, 0.3485, 0.1786], 0.037503662696288936],
               [[0.7596, 0.6143, 0.1171, 0.4069, 0.5707, 0.2775, 0.6583, 0.1072, 0.9349, 0.3256], 0.044312727351805296],
               [[0.1518, 0.9251, 0.2583, 0.5876, 0.4173, 0.9602, 0.1169, 0.5076, 0.9346, 0.8579], 0.04662314442412397],
               [[0.1608, 0.9229, 0.3684, 0.9401, 0.2972, 0.8121, 0.332, 0.1203, 0.8746, 0.6694], 0.046668910811104636],
               [[0.9124, 0.7087, 0.5254, 0.3416, 0.6167, 0.243, 0.3906, 0.1258, 0.1402, 0.749], 0.04128706833470013],
               [[0.4195, 0.778, 0.2067, 0.7385, 0.2988, 0.1888, 0.1895, 0.2513, 0.7798, 0.2208], 0.041400745619836664],
               [[0.2199, 0.4556, 0.6841, 0.3873, 0.9961, 0.1154, 0.9678, 0.7335, 0.1152, 0.8384], 0.05217544485450189],
               [[0.2801, 0.2895, 0.2793, 0.83, 0.4832, 0.9456, 0.139, 0.402, 0.9296, 0.1218], 0.051751059034301244],
               [[0.1197, 0.2005, 0.5692, 0.1897, 0.9014, 0.2501, 0.6541, 0.2476, 0.9808, 0.9541], 0.056797377461948295],
               [[0.17, 0.1027, 0.5433, 0.9669, 0.1256, 0.6906, 0.6202, 0.3288, 0.7465, 0.738], 0.050916190636527255],
               [[0.893, 0.4175, 0.1314, 0.1503, 0.7415, 0.5234, 0.1702, 0.7846, 0.7492, 0.1314], 0.056894089148603375],
               [[0.8962, 0.1078, 0.951, 0.8255, 0.1396, 0.129, 0.2559, 0.7041, 0.9513, 0.9961], 0.06364930131498307],
               [[0.1053, 0.4421, 0.1935, 0.1014, 0.5312, 0.7668, 0.3904, 0.958, 0.1548, 0.7625], 0.061061160204041176],
               [[0.11, 0.2594, 0.6996, 0.7597, 0.1755, 0.1002, 0.1926, 0.4621, 0.9265, 0.8734], 0.06539390880498576],
               [[0.927, 0.9604, 0.1183, 0.3875, 0.134, 0.7651, 0.8139, 0.1396, 0.5639, 0.1171], 0.06938724191363951]]
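
A minimal sketch of how a JSD value could be recomputed from one of these fraction vectors, assuming a uniform original label distribution over the 10 classes and the natural-log Jensen-Shannon divergence (both are assumptions; the exact convention used to produce the stored values may differ):

```python
import math

def js_divergence(p, q):
    # Jensen-Shannon divergence (natural log): symmetric, bounded by ln 2.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return (kl(p, m) + kl(q, m)) / 2

# First entry of the subsampling list above (hypothetical check).
ratios = [0.4052, 0.2398, 0.7001, 0.7178, 0.2132, 0.4887, 0.9849, 0.814, 0.8186, 0.2134]
uniform = [0.1] * 10
subsampled = [r / sum(ratios) for r in ratios]   # label distribution after subsampling
jsd = js_divergence(subsampled, uniform)
```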
@ -0,0 +1,245 @@

import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable
import math
import torch.nn.functional as F
import pdb


def Entropy(input_):
    bs = input_.size(0)
    epsilon = 1e-5
    entropy = -input_ * torch.log(input_ + epsilon)
    entropy = torch.sum(entropy, dim=1)
    return entropy


def grl_hook(coeff):
    def fun1(grad):
        return -coeff*grad.clone()
    return fun1
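
`grl_hook` is the gradient reversal layer: the forward pass is untouched, but gradients flowing backward through the hooked tensor are flipped and scaled by `coeff`. A toy sanity check (not the training code):

```python
import torch

def grl_hook(coeff):
    # Returns a backward hook that reverses (and scales) the upstream gradient.
    def fun1(grad):
        return -coeff * grad.clone()
    return fun1

x = torch.ones(3, requires_grad=True)
y = 2.0 * x
y.register_hook(grl_hook(0.5))  # gradients through y get multiplied by -0.5
y.sum().backward()
# without the hook x.grad would be [2., 2., 2.]; with it, each entry is -0.5 * 2 = -1.0
```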


def CDAN(input_list, ad_net, entropy=None, coeff=None, random_layer=None, weights=None, device='cuda'):
    softmax_output = input_list[1].detach()
    batch_size = softmax_output.size(0) // 2
    feature = input_list[0]
    if random_layer is None:
        op_out = torch.bmm(softmax_output.unsqueeze(2), feature.unsqueeze(1))
        ad_out = ad_net(
            op_out.view(-1, softmax_output.size(1) * feature.size(1)))
    else:
        random_out = random_layer.forward([feature, softmax_output])
        ad_out = ad_net(random_out.view(-1, random_out.size(1)))
    dc_target = torch.from_numpy(
        np.array([[1]] * batch_size + [[0]] * batch_size)).float().to(device)
    if entropy is not None:
        entropy.register_hook(grl_hook(coeff))
        entropy = 1.0 + torch.exp(-entropy)
        source_mask = torch.ones_like(entropy)
        source_mask[feature.size(0)//2:] = 0
        source_weight = entropy * source_mask
        target_mask = torch.ones_like(entropy)
        target_mask[0:feature.size(0)//2] = 0
        target_weight = entropy * target_mask
        if weights is not None:
            weights = torch.cat((weights.squeeze(), torch.ones((batch_size)).to(device)))
            weight = (source_weight / torch.sum(source_weight).detach().item() +
                      target_weight / torch.sum(target_weight).detach().item()) * weights
        else:
            weight = source_weight / torch.sum(source_weight).detach().item() + \
                target_weight / torch.sum(target_weight).detach().item()
        return torch.sum(weight.view(-1, 1) * nn.BCELoss(reduction='none')(ad_out, dc_target)) / torch.sum(weight).detach().item()
    else:
        if weights is not None:
            weighted_nll_source = - weights * torch.log(ad_out[:batch_size])
            nll_target = - torch.log(1 - ad_out[batch_size:])
            return (torch.mean(weighted_nll_source) + torch.mean(nll_target)) / 2

        return nn.BCELoss()(ad_out, dc_target)


def DANN(features, ad_net, device):
    ad_out = ad_net(features)
    batch_size = ad_out.size(0) // 2
    dc_target = torch.from_numpy(
        np.array([[1]] * batch_size + [[0]] * batch_size)).float().to(device)
    return nn.BCELoss()(ad_out, dc_target)


def IWDAN(features, ad_net, weights):
    # The first batch_size elements of features come from the source,
    # the last batch_size elements from the target.
    # Each element of ad_out is the probability that the corresponding
    # feature belongs to the source domain. For importance sampling, its
    # log is multiplied by the weight of the corresponding class.
    ad_out = ad_net(features)
    batch_size = ad_out.size(0) // 2

    weighted_nll_source = - weights * torch.log(ad_out[:batch_size])
    nll_target = - torch.log(1 - ad_out[batch_size:])

    return (torch.mean(weighted_nll_source) + torch.mean(nll_target)) / 2
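
The arithmetic of the importance-weighted discriminator loss can be checked with plain NumPy; this is a toy sketch with hypothetical discriminator outputs, not the training code. With unit weights it reduces to the standard DANN objective:

```python
import numpy as np

def iwdan_loss(ad_out, weights):
    # ad_out: discriminator outputs for [source; target], each in (0, 1).
    # weights: per-sample importance weights for the source half.
    batch_size = ad_out.shape[0] // 2
    weighted_nll_source = -weights * np.log(ad_out[:batch_size])
    nll_target = -np.log(1 - ad_out[batch_size:])
    return (weighted_nll_source.mean() + nll_target.mean()) / 2

ad_out = np.array([0.9, 0.8, 0.2, 0.1])  # a confident discriminator
weights = np.ones(2)                     # unit weights -> plain DANN loss
loss = iwdan_loss(ad_out, weights)
```

At `ad_out = 0.5` everywhere (a maximally confused discriminator) the loss is exactly `ln 2`, the usual equilibrium value of the domain-adversarial game.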


def WDANN(features, ad_net, device, weights=None):
    ad_out = ad_net(features)
    batch_size = ad_out.size(0) // 2
    if weights is None:
        weighted_source = ad_out[:batch_size]
    else:
        weighted_source = ad_out[:batch_size] * weights
    dc_target = torch.from_numpy(
        np.array([[1]] * batch_size + [[-1]] * batch_size)).float().to(device)

    # Gradient penalty
    alpha = torch.rand([batch_size, 1]).to(device)
    interpolates = (1 - alpha) * features[batch_size:] + alpha * features[:batch_size]
    interpolates = torch.autograd.Variable(interpolates, requires_grad=True)
    disc_interpolates = ad_net(interpolates)
    gradients = torch.autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                                    grad_outputs=torch.ones(disc_interpolates.size()).to(device),
                                    create_graph=True, retain_graph=True, only_inputs=True)[0]
    gradients = gradients.view(gradients.size(0), -1)

    # Lambda is 10 in the original WGAN-GP paper
    # return - torch.mean(dc_target * ad_out), ((gradients.norm(2, dim=1) - 1) ** 2).mean() * 10
    return - torch.mean(weighted_source - ad_out[batch_size:]) / 2, ((gradients.norm(2, dim=1) - 1) ** 2).mean() * 10


def EntropyLoss(input_):
    mask = input_.ge(0.000001)
    mask_out = torch.masked_select(input_, mask)
    entropy = -(torch.sum(mask_out * torch.log(mask_out)))
    return entropy / float(input_.size(0))


def gaussian_kernel(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None):
    n_samples = int(source.size()[0]) + int(target.size()[0])
    total = torch.cat([source, target], dim=0)
    total0 = total.unsqueeze(0).expand(
        int(total.size(0)), int(total.size(0)), int(total.size(1)))
    total1 = total.unsqueeze(1).expand(
        int(total.size(0)), int(total.size(0)), int(total.size(1)))
    L2_distance = ((total0 - total1)**2).sum(2)
    if fix_sigma:
        bandwidth = fix_sigma
    else:
        bandwidth = torch.sum(L2_distance.data) / (n_samples**2 - n_samples)
    bandwidth /= kernel_mul ** (kernel_num // 2)
    bandwidth_list = [bandwidth * (kernel_mul**i) for i in range(kernel_num)]
    kernel_val = [torch.exp(-L2_distance / bandwidth_temp)
                  for bandwidth_temp in bandwidth_list]
    return sum(kernel_val)  # /len(kernel_val)
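
`gaussian_kernel` sums Gaussian kernels over a geometric family of bandwidths; this multi-kernel matrix feeds the MMD-style estimates in `DAN` and `JAN` below. A NumPy sketch of the same idea, using the simpler biased MMD^2 estimator with fixed bandwidths (toy data; not the exact estimator used by `DAN`):

```python
import numpy as np

def multi_kernel(x, y, bandwidths=(0.25, 0.5, 1.0, 2.0, 4.0)):
    # Sum of Gaussian kernels over a geometric family of bandwidths.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return sum(np.exp(-d2 / b) for b in bandwidths)

def mmd2(source, target):
    # Biased MMD^2 estimate: E[k(s,s)] + E[k(t,t)] - 2 E[k(s,t)].
    return (multi_kernel(source, source).mean()
            + multi_kernel(target, target).mean()
            - 2 * multi_kernel(source, target).mean())

rng = np.random.default_rng(0)
s = rng.normal(size=(16, 4))
same = mmd2(s, s.copy())        # identical samples: estimate is 0
far = mmd2(s, s + 5.0)          # shifted samples: estimate is positive
```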


def DAN(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None):
    batch_size = int(source.size()[0])
    kernels = gaussian_kernel(source, target,
                              kernel_mul=kernel_mul, kernel_num=kernel_num, fix_sigma=fix_sigma)

    loss1 = 0
    for s1 in range(batch_size):
        for s2 in range(s1 + 1, batch_size):
            t1, t2 = s1 + batch_size, s2 + batch_size
            loss1 += kernels[s1, s2] + kernels[t1, t2]
    loss1 = loss1 / float(batch_size * (batch_size - 1) / 2)

    loss2 = 0
    for s1 in range(batch_size):
        for s2 in range(batch_size):
            t1, t2 = s1 + batch_size, s2 + batch_size
            loss2 -= kernels[s1, t2] + kernels[s2, t1]
    loss2 = loss2 / float(batch_size * batch_size)
    return loss1 + loss2


def DAN_Linear(source, target, kernel_mul=2.0, kernel_num=5, fix_sigma=None):
    batch_size = int(source.size()[0])
    kernels = gaussian_kernel(source, target,
                              kernel_mul=kernel_mul, kernel_num=kernel_num, fix_sigma=fix_sigma)

    # Linear version
    loss = 0
    for i in range(batch_size):
        s1, s2 = i, (i + 1) % batch_size
        t1, t2 = s1 + batch_size, s2 + batch_size
        loss += kernels[s1, s2] + kernels[t1, t2]
        loss -= kernels[s1, t2] + kernels[s2, t1]
    return loss / float(batch_size)


def JAN(source_list, target_list, kernel_muls=[2.0, 2.0], kernel_nums=[5, 1], fix_sigma_list=[None, 1.68], weights=None):
    batch_size = int(source_list[0].size()[0])
    layer_num = len(source_list)
    joint_kernels = None
    for i in range(layer_num):
        source = source_list[i]
        target = target_list[i]
        kernel_mul = kernel_muls[i]
        kernel_num = kernel_nums[i]
        fix_sigma = fix_sigma_list[i]
        kernels = gaussian_kernel(source, target,
                                  kernel_mul=kernel_mul, kernel_num=kernel_num, fix_sigma=fix_sigma)
        if joint_kernels is not None:
            joint_kernels = joint_kernels * kernels
        else:
            joint_kernels = kernels

    loss1 = 0
    if weights is None:
        mult = 1
    for s1 in range(batch_size):
        for s2 in range(s1 + 1, batch_size):
            t1, t2 = s1 + batch_size, s2 + batch_size
            if weights is not None:
                mult = weights[s1] * weights[s2]
            loss1 += mult * joint_kernels[s1, s2] + joint_kernels[t1, t2]
    loss1 = loss1 / float(batch_size * (batch_size - 1) / 2)

    loss2 = 0
    if weights is None:
        mult1, mult2 = 1, 1
    for s1 in range(batch_size):
        if weights is not None:
            mult1 = weights[s1]
        for s2 in range(batch_size):
            t1, t2 = s1 + batch_size, s2 + batch_size
            if weights is not None:
                mult2 = weights[s2]
            loss2 -= mult1 * joint_kernels[s1, t2] + mult2 * joint_kernels[s2, t1]
    loss2 = loss2 / float(batch_size * batch_size)
    return loss1 + loss2


def JAN_Linear(source_list, target_list, kernel_muls=[2.0, 2.0], kernel_nums=[5, 1], fix_sigma_list=[None, 1.68]):
    batch_size = int(source_list[0].size()[0])
    layer_num = len(source_list)
    joint_kernels = None
    for i in range(layer_num):
        source = source_list[i]
        target = target_list[i]
        kernel_mul = kernel_muls[i]
        kernel_num = kernel_nums[i]
        fix_sigma = fix_sigma_list[i]
        kernels = gaussian_kernel(source, target,
                                  kernel_mul=kernel_mul, kernel_num=kernel_num, fix_sigma=fix_sigma)
        if joint_kernels is not None:
            joint_kernels = joint_kernels * kernels
        else:
            joint_kernels = kernels

    # Linear version
    loss = 0
    for i in range(batch_size):
        s1, s2 = i, (i + 1) % batch_size
        t1, t2 = s1 + batch_size, s2 + batch_size
        loss += joint_kernels[s1, s2] + joint_kernels[t1, t2]
        loss -= joint_kernels[s1, t2] + joint_kernels[s2, t1]
    return loss / float(batch_size)


loss_dict = {"DAN": DAN, "DAN_Linear": DAN_Linear, "JAN": JAN,
             "JAN_Linear": JAN_Linear, "IWJAN": JAN, "IWJANORACLE": JAN}
@ -0,0 +1,24 @@

def inv_lr_scheduler(optimizer, iter_num, gamma, power, lr=0.001, weight_decay=0.0005):
    """Inverse decay: lr = lr * (1 + gamma * iter_num) ** (-power)."""
    lr = lr * (1 + gamma * iter_num) ** (-power)
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr * param_group['lr_mult']
        param_group['weight_decay'] = weight_decay * param_group['decay_mult']

    return optimizer


def inv_lr_scheduler_mmd(param_lr, optimizer, iter_num, gamma, power, init_lr=0.001):
    """Inverse decay: lr = init_lr * (1 + gamma * iter_num) ** (-power)."""
    lr = init_lr * (1 + gamma * iter_num) ** (-power)

    i = 0
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr * param_lr[i]
        i += 1

    return optimizer


schedule_dict = {"inv": inv_lr_scheduler, "inv_mmd": inv_lr_scheduler_mmd}
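
Both schedulers apply the same inverse decay to the base learning rate; a minimal sketch of the schedule itself (the hyperparameters `gamma=0.001, power=0.75` are illustrative assumptions, not values taken from the training scripts):

```python
def inv_decay(iter_num, gamma=0.001, power=0.75, lr0=0.001):
    # Inverse decay used by the schedulers above.
    return lr0 * (1 + gamma * iter_num) ** (-power)

lrs = [inv_decay(i) for i in (0, 1000, 10000, 100000)]
# starts exactly at lr0 and decays monotonically, never reaching zero
```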
@ -0,0 +1,595 @@

import torch.nn.functional as F
import logging
import numpy as np
import torch
import torch.nn as nn
import torchvision
from torchvision import models
from torch.autograd import Variable
import math
import pdb
from cvxopt import matrix, solvers
solvers.options['show_progress'] = False


logger = logging.getLogger(__name__)


def calc_coeff(iter_num, high=1.0, low=0.0, alpha=10.0, max_iter=10000.0):
    # np.float is deprecated; a plain Python float is equivalent here.
    return float(2.0 * (high - low) / (1.0 + np.exp(-alpha * iter_num / max_iter)) - (high - low) + low)
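
`calc_coeff` is the usual sigmoidal ramp for the gradient reversal coefficient: it starts at `low` and saturates at `high` as training progresses. A self-contained check of the two endpoints:

```python
import math

def calc_coeff(iter_num, high=1.0, low=0.0, alpha=10.0, max_iter=10000.0):
    # Smoothly ramps the GRL coefficient from `low` to `high` over training.
    return 2.0 * (high - low) / (1.0 + math.exp(-alpha * iter_num / max_iter)) - (high - low) + low

start = calc_coeff(0)       # exactly 0.0
late = calc_coeff(100000)   # saturates at 1.0
```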


def init_weights(m):
    classname = m.__class__.__name__
    if classname.find('Conv2d') != -1 or classname.find('ConvTranspose2d') != -1:
        nn.init.kaiming_uniform_(m.weight)
        nn.init.zeros_(m.bias)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight, 1.0, 0.02)
        nn.init.zeros_(m.bias)
    elif classname.find('Linear') != -1:
        nn.init.xavier_normal_(m.weight)
        nn.init.zeros_(m.bias)


class RandomLayer(nn.Module):
    def __init__(self, input_dim_list=[], output_dim=1024):
        super(RandomLayer, self).__init__()
        self.input_num = len(input_dim_list)
        self.output_dim = output_dim
        self.random_matrix = [torch.randn(input_dim_list[i], output_dim) for i in range(self.input_num)]

    def forward(self, input_list):
        return_list = [torch.mm(input_list[i], self.random_matrix[i]) for i in range(self.input_num)]
        return_tensor = return_list[0] / math.pow(float(self.output_dim), 1.0/len(return_list))
        for single in return_list[1:]:
            return_tensor = torch.mul(return_tensor, single)
        return return_tensor

    def cuda(self):
        super(RandomLayer, self).cuda()
        self.random_matrix = [val.cuda() for val in self.random_matrix]
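
`RandomLayer` is the randomized multilinear map used by CDAN: each input (features and softmax outputs) is projected with a fixed random matrix, and the projections are combined by elementwise product with a `output_dim^(1/n)` normalization. A NumPy sketch of the shape logic for two inputs (the toy dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, num_classes, out_dim, batch = 256, 31, 1024, 8

# Fixed random projection matrices, one per input.
R_f = rng.standard_normal((feat_dim, out_dim))
R_g = rng.standard_normal((num_classes, out_dim))

features = rng.standard_normal((batch, feat_dim))
softmax_out = rng.standard_normal((batch, num_classes))

# Elementwise product of the two projections, scaled by sqrt(out_dim)
# (the out_dim^(1/2) factor matches output_dim^(1/len) for two inputs).
fused = (features @ R_f) * (softmax_out @ R_g) / np.sqrt(out_dim)
```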


class LRN(nn.Module):
    def __init__(self, local_size=1, alpha=1.0, beta=0.75, ACROSS_CHANNELS=True):
        super(LRN, self).__init__()
        self.ACROSS_CHANNELS = ACROSS_CHANNELS
        if ACROSS_CHANNELS:
            self.average = nn.AvgPool3d(kernel_size=(local_size, 1, 1),
                                        stride=1,
                                        padding=(int((local_size-1.0)/2), 0, 0))
        else:
            self.average = nn.AvgPool2d(kernel_size=local_size,
                                        stride=1,
                                        padding=int((local_size-1.0)/2))
        self.alpha = alpha
        self.beta = beta

    def forward(self, x):
        if self.ACROSS_CHANNELS:
            div = x.pow(2).unsqueeze(1)
            div = self.average(div).squeeze(1)
            div = div.mul(self.alpha).add(1.0).pow(self.beta)
        else:
            div = x.pow(2)
            div = self.average(div)
            div = div.mul(self.alpha).add(1.0).pow(self.beta)
        x = x.div(div)
        return x


class AlexNet(nn.Module):

    def __init__(self, num_classes=1000):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=0),
            nn.ReLU(inplace=True),
            LRN(local_size=5, alpha=0.0001, beta=0.75),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2, groups=2),
            nn.ReLU(inplace=True),
            LRN(local_size=5, alpha=0.0001, beta=0.75),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1, groups=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1, groups=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 256 * 6 * 6)
        x = self.classifier(x)
        return x


def alexnet(pretrained=False, **kwargs):
    r"""AlexNet model architecture from the
    `"One weird trick..." <https://arxiv.org/abs/1404.5997>`_ paper.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = AlexNet(**kwargs)
    if pretrained:
        model_path = './alexnet.pth.tar'
        pretrained_model = torch.load(model_path)
        model.load_state_dict(pretrained_model['state_dict'])
    return model
|
||||
|
||||
|
||||
# convnet without the last layer
class AlexNetFc(nn.Module):
    def __init__(self, use_bottleneck=True, bottleneck_dim=256, new_cls=False, class_num=1000):
        super(AlexNetFc, self).__init__()
        model_alexnet = alexnet(pretrained=True)
        self.features = model_alexnet.features
        self.classifier = nn.Sequential()
        for i in range(6):
            self.classifier.add_module("classifier" + str(i), model_alexnet.classifier[i])
        self.feature_layers = nn.Sequential(self.features, self.classifier)

        self.use_bottleneck = use_bottleneck
        self.new_cls = new_cls
        if new_cls:
            if self.use_bottleneck:
                self.bottleneck = nn.Linear(4096, bottleneck_dim)
                self.fc = nn.Linear(bottleneck_dim, class_num)
                self.bottleneck.apply(init_weights)
                self.fc.apply(init_weights)
                self.__in_features = bottleneck_dim
            else:
                self.fc = nn.Linear(4096, class_num)
                self.fc.apply(init_weights)
                self.__in_features = 4096
        else:
            self.fc = model_alexnet.classifier[6]
            self.__in_features = 4096

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        if self.use_bottleneck and self.new_cls:
            x = self.bottleneck(x)
        y = self.fc(x)
        return x, y

    def output_num(self):
        return self.__in_features

    def get_parameters(self):
        if self.new_cls:
            if self.use_bottleneck:
                parameter_list = [{"params": self.features.parameters(), "lr_mult": 1, 'decay_mult': 2},
                                  {"params": self.classifier.parameters(), "lr_mult": 1, 'decay_mult': 2},
                                  {"params": self.bottleneck.parameters(), "lr_mult": 10, 'decay_mult': 2},
                                  {"params": self.fc.parameters(), "lr_mult": 10, 'decay_mult': 2}]
            else:
                # self.features rather than self.feature_layers here: feature_layers
                # already contains self.classifier, so using it alongside a separate
                # classifier group would put the same parameters in two optimizer groups.
                parameter_list = [{"params": self.features.parameters(), "lr_mult": 1, 'decay_mult': 2},
                                  {"params": self.classifier.parameters(), "lr_mult": 1, 'decay_mult': 2},
                                  {"params": self.fc.parameters(), "lr_mult": 10, 'decay_mult': 2}]
        else:
            parameter_list = [
                {"params": self.parameters(), "lr_mult": 1, 'decay_mult': 2}]
        return parameter_list


resnet_dict = {"ResNet18": models.resnet18, "ResNet34": models.resnet34,
               "ResNet50": models.resnet50, "ResNet101": models.resnet101, "ResNet152": models.resnet152}


def grl_hook(coeff):
    def fun1(grad):
        return -coeff * grad.clone()
    return fun1


class ResNetFc(nn.Module):
    def __init__(self, resnet_name, use_bottleneck=True, bottleneck_dim=256, new_cls=False, class_num=1000, ma=0.0):
        super(ResNetFc, self).__init__()
        model_resnet = resnet_dict[resnet_name](pretrained=True)
        self.conv1 = model_resnet.conv1
        self.bn1 = model_resnet.bn1
        self.relu = model_resnet.relu
        self.maxpool = model_resnet.maxpool
        self.layer1 = model_resnet.layer1
        self.layer2 = model_resnet.layer2
        self.layer3 = model_resnet.layer3
        self.layer4 = model_resnet.layer4
        self.avgpool = model_resnet.avgpool
        self.feature_layers = nn.Sequential(self.conv1, self.bn1, self.relu, self.maxpool,
                                            self.layer1, self.layer2, self.layer3, self.layer4, self.avgpool)

        self.use_bottleneck = use_bottleneck
        self.new_cls = new_cls
        if new_cls:
            if self.use_bottleneck:
                self.bottleneck = nn.Linear(model_resnet.fc.in_features, bottleneck_dim)
                self.fc = nn.Linear(bottleneck_dim, class_num)
                self.bottleneck.apply(init_weights)
                self.fc.apply(init_weights)
                self.__in_features = bottleneck_dim
            else:
                self.fc = nn.Linear(model_resnet.fc.in_features, class_num)
                self.fc.apply(init_weights)
                self.__in_features = model_resnet.fc.in_features
        else:
            self.fc = model_resnet.fc
            self.__in_features = model_resnet.fc.in_features

        self.im_weights_update = create_im_weights_update(self, ma, class_num)

    def forward(self, x):
        x = self.feature_layers(x)
        x = x.view(x.size(0), -1)
        if self.use_bottleneck and self.new_cls:
            x = self.bottleneck(x)
        y = self.fc(x)
        return x, y

    def output_num(self):
        return self.__in_features

    def get_parameters(self):
        if self.new_cls:
            if self.use_bottleneck:
                parameter_list = [{"params": self.feature_layers.parameters(), "lr_mult": 1, 'decay_mult': 2},
                                  {"params": self.bottleneck.parameters(), "lr_mult": 10, 'decay_mult': 2},
                                  {"params": self.fc.parameters(), "lr_mult": 10, 'decay_mult': 2},
                                  {"params": self.im_weights, "lr_mult": 10, 'decay_mult': 2}]
            else:
                parameter_list = [{"params": self.feature_layers.parameters(), "lr_mult": 1, 'decay_mult': 2},
                                  {"params": self.fc.parameters(), "lr_mult": 10, 'decay_mult': 2},
                                  {"params": self.im_weights, "lr_mult": 10, 'decay_mult': 2}]
        else:
            parameter_list = [{"params": self.parameters(), "lr_mult": 1, 'decay_mult': 2},
                              {"params": self.im_weights, "lr_mult": 10, 'decay_mult': 2}]
        return parameter_list


vgg_dict = {"VGG11": models.vgg11, "VGG13": models.vgg13, "VGG16": models.vgg16, "VGG19": models.vgg19,
            "VGG11BN": models.vgg11_bn, "VGG13BN": models.vgg13_bn, "VGG16BN": models.vgg16_bn, "VGG19BN": models.vgg19_bn}


class VGGFc(nn.Module):
    def __init__(self, vgg_name, use_bottleneck=True, bottleneck_dim=256, new_cls=False, class_num=1000):
        super(VGGFc, self).__init__()
        model_vgg = vgg_dict[vgg_name](pretrained=True)
        self.features = model_vgg.features
        self.classifier = nn.Sequential()
        for i in range(6):
            self.classifier.add_module("classifier" + str(i), model_vgg.classifier[i])
        self.feature_layers = nn.Sequential(self.features, self.classifier)

        self.use_bottleneck = use_bottleneck
        self.new_cls = new_cls
        if new_cls:
            if self.use_bottleneck:
                self.bottleneck = nn.Linear(4096, bottleneck_dim)
                self.fc = nn.Linear(bottleneck_dim, class_num)
                self.bottleneck.apply(init_weights)
                self.fc.apply(init_weights)
                self.__in_features = bottleneck_dim
            else:
                self.fc = nn.Linear(4096, class_num)
                self.fc.apply(init_weights)
                self.__in_features = 4096
        else:
            self.fc = model_vgg.classifier[6]
            self.__in_features = 4096

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        if self.use_bottleneck and self.new_cls:
            x = self.bottleneck(x)
        y = self.fc(x)
        return x, y

    def output_num(self):
        return self.__in_features

    def get_parameters(self):
        if self.new_cls:
            if self.use_bottleneck:
                parameter_list = [{"params": self.features.parameters(), "lr_mult": 1, 'decay_mult': 2},
                                  {"params": self.classifier.parameters(), "lr_mult": 1, 'decay_mult': 2},
                                  {"params": self.bottleneck.parameters(), "lr_mult": 10, 'decay_mult': 2},
                                  {"params": self.fc.parameters(), "lr_mult": 10, 'decay_mult': 2}]
            else:
                # self.features rather than self.feature_layers: feature_layers already
                # contains self.classifier, and duplicating parameters across optimizer
                # groups is an error in PyTorch.
                parameter_list = [{"params": self.features.parameters(), "lr_mult": 1, 'decay_mult': 2},
                                  {"params": self.classifier.parameters(), "lr_mult": 1, 'decay_mult': 2},
                                  {"params": self.fc.parameters(), "lr_mult": 10, 'decay_mult': 2}]
        else:
            parameter_list = [
                {"params": self.parameters(), "lr_mult": 1, 'decay_mult': 2}]
        return parameter_list


# For SVHN dataset
class DTN(nn.Module):
    def __init__(self, ma=0.0):
        super(DTN, self).__init__()
        self.conv_params = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(64),
            nn.Dropout2d(0.1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(128),
            nn.Dropout2d(0.3),
            nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(256),
            nn.Dropout2d(0.5),
            nn.ReLU()
        )

        self.fc_params = nn.Sequential(
            nn.Linear(256 * 4 * 4, 512),
            nn.BatchNorm1d(512),
            nn.ReLU(),
            nn.Dropout()
        )

        class_num = 10

        self.classifier = nn.Linear(512, class_num)
        self.__in_features = 512

        self.im_weights_update = create_im_weights_update(self, ma, class_num)

    def forward(self, x):
        x = self.conv_params(x)
        x = x.view(x.size(0), -1)
        x = self.fc_params(x)
        y = self.classifier(x)
        return x, y

    def output_num(self):
        return self.__in_features


class LeNet(nn.Module):
    def __init__(self, ma=0.0):
        super(LeNet, self).__init__()
        self.conv_params = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=5),
            nn.MaxPool2d(2),
            nn.ReLU(),
            nn.Conv2d(20, 50, kernel_size=5),
            nn.Dropout2d(p=0.5),
            nn.MaxPool2d(2),
            nn.ReLU(),
        )

        class_num = 10

        self.fc_params = nn.Sequential(nn.Linear(50 * 4 * 4, 500), nn.ReLU(), nn.Dropout(p=0.5))
        self.classifier = nn.Linear(500, class_num)
        self.__in_features = 500

        self.im_weights_update = create_im_weights_update(self, ma, class_num)

    def forward(self, x):
        x = self.conv_params(x)
        x = x.view(x.size(0), -1)
        x = self.fc_params(x)
        y = self.classifier(x)
        return x, y

    def output_num(self):
        return self.__in_features


class AdversarialNetwork(nn.Module):
    def __init__(self, in_feature, hidden_size, sigmoid=True):
        super(AdversarialNetwork, self).__init__()
        self.ad_layer1 = nn.Linear(in_feature, hidden_size)
        self.ad_layer2 = nn.Linear(hidden_size, hidden_size)
        self.ad_layer3 = nn.Linear(hidden_size, 1)
        self.relu1 = nn.ReLU()
        self.relu2 = nn.ReLU()
        self.dropout1 = nn.Dropout(0.5)
        self.dropout2 = nn.Dropout(0.5)
        self.sigmoid = sigmoid
        self.apply(init_weights)
        self.iter_num = 0
        self.alpha = 10
        self.low = 0.0
        self.high = 1.0
        self.max_iter = 10000.0

    def forward(self, x):
        if self.training:
            self.iter_num += 1
        coeff = calc_coeff(self.iter_num, self.high, self.low, self.alpha, self.max_iter)
        x = x * 1.0
        x.register_hook(grl_hook(coeff))
        x = self.ad_layer1(x)
        x = self.relu1(x)
        x = self.dropout1(x)
        x = self.ad_layer2(x)
        x = self.relu2(x)
        x = self.dropout2(x)
        y = self.ad_layer3(x)
        if self.sigmoid:
            y = nn.Sigmoid()(y)
        return y

    def output_num(self):
        return 1

    def get_parameters(self):
        return [{"params": self.parameters(), "lr_mult": 10, 'decay_mult': 2}]
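`AdversarialNetwork.forward` ramps the reversal coefficient with `calc_coeff`, which is defined elsewhere in this module. For reference, a sketch of the standard DANN/CDAN sigmoid schedule that matches the `(iter_num, high, low, alpha, max_iter)` call above; the exact body is an assumption here, not the repo's definition:

```python
import math

def calc_coeff(iter_num, high=1.0, low=0.0, alpha=10.0, max_iter=10000.0):
    # Sigmoid ramp-up: starts at `low` when iter_num == 0 and
    # approaches `high` as iter_num grows toward (and past) max_iter.
    return 2.0 * (high - low) / (1.0 + math.exp(-alpha * iter_num / max_iter)) \
        - (high - low) + low
```

With the defaults, the coefficient is exactly 0 at iteration 0 and very close to 1 at `max_iter`, so the adversarial signal is phased in gradually.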


class GradientReversalLayer(torch.autograd.Function):
    """
    Gradient reversal layer for domain-adversarial training.
    The forward pass is the identity; the backward pass negates the gradient.
    """
    @staticmethod
    def forward(ctx, inputs):
        return inputs.view_as(inputs)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()


def grad_reverse(tensor):
    return GradientReversalLayer.apply(tensor)
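`grad_reverse` and `grl_hook` implement the same trick: identity in the forward pass, sign-flipped gradient in the backward pass, so a step that descends the discriminator's loss simultaneously ascends it for the feature extractor. A dependency-free toy illustrating the effect (plain Python, no autograd; all names here are illustrative):

```python
def reversed_grad(grad, coeff=1.0):
    # What grl_hook / GradientReversalLayer.backward do to the incoming gradient.
    return -coeff * grad

# Toy "discriminator" loss L(f) = (f - 3)^2 over a scalar feature f,
# with gradient dL/df = 2 * (f - 3).
def dL_df(f):
    return 2.0 * (f - 3.0)

f = 0.0
for _ in range(50):
    f -= 0.1 * reversed_grad(dL_df(f))  # gradient *descent* on the reversed gradient
# ... is gradient *ascent* on L: f is pushed away from the minimizer at 3.0.
```

After the loop `f` has moved far below 0 rather than converging to 3, which is exactly the min-max behavior the reversal layer buys in DANN/CDAN.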


class ConvNet(nn.Module):
    """
    Vanilla CNN for classification.
    """

    def __init__(self, configs):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.fc = nn.Linear(1280, 100)
        self.softmax = nn.Linear(100, configs["num_classes"])
        self.num_classes = configs["num_classes"]

    def forward(self, inputs):
        feats = F.relu(self.conv1(inputs))
        feats = F.relu(self.conv2(feats))
        feats = feats.view(-1, 1280)
        feats = F.relu(self.fc(feats))
        logprobs = F.log_softmax(self.softmax(feats), dim=1)
        return logprobs


class ResNet50Fc(nn.Module):
    def __init__(self, ma=0.0, class_num=31, **kwargs):
        super(ResNet50Fc, self).__init__()
        model_resnet50 = models.resnet50(pretrained=True)
        self.conv1 = model_resnet50.conv1
        self.bn1 = model_resnet50.bn1
        self.relu = model_resnet50.relu
        self.maxpool = model_resnet50.maxpool
        self.layer1 = model_resnet50.layer1
        self.layer2 = model_resnet50.layer2
        self.layer3 = model_resnet50.layer3
        self.layer4 = model_resnet50.layer4
        self.avgpool = model_resnet50.avgpool
        self.__in_features = model_resnet50.fc.in_features

        self.im_weights_update = create_im_weights_update(self, ma, class_num)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        return x

    def output_num(self):
        return self.__in_features


class LeNetMMD(nn.Module):
    def __init__(self, ma=0.0, **kwargs):
        super(LeNetMMD, self).__init__()
        self.conv_params = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=5),
            nn.MaxPool2d(2),
            nn.ReLU(),
            nn.Conv2d(20, 50, kernel_size=5),
            nn.Dropout2d(p=0.5),
            nn.MaxPool2d(2),
            nn.ReLU(),
        )

        class_num = 10

        self.fc_params = nn.Sequential(
            nn.Linear(50 * 4 * 4, 500), nn.ReLU(), nn.Dropout(p=0.5))
        # self.__in_features = 500
        self.__in_features = 800

        self.im_weights_update = create_im_weights_update(self, ma, class_num)

    def forward(self, x):
        x = self.conv_params(x)
        x = x.view(x.size(0), -1)
        # x = self.fc_params(x)
        return x

    def output_num(self):
        return self.__in_features


network_dict = {"ResNet50": ResNet50Fc, "LeNet": LeNetMMD}


def create_im_weights_update(class_inst, ma, class_num):

    # Label importance weights.
    class_inst.ma = ma
    class_inst.im_weights = nn.Parameter(
        torch.ones(class_num, 1), requires_grad=False)

    def im_weights_update(source_y, target_y, cov, device, inst=class_inst):
        """
        Solve a Quadratic Program to compute the optimal importance weight under the generalized label shift assumption.
        :param source_y: The marginal label distribution of the source domain.
        :param target_y: The marginal pseudo-label distribution of the target domain from the current classifier.
        :param cov: The covariance matrix of predicted-label and true label of the source domain.
        :param device: Device of the operation.
        """
        # Convert all the vectors to column vectors.
        dim = cov.shape[0]
        source_y = source_y.reshape(-1, 1).astype(np.double)
        target_y = target_y.reshape(-1, 1).astype(np.double)
        cov = cov.astype(np.double)

        P = matrix(np.dot(cov.T, cov), tc="d")
        q = -matrix(np.dot(cov, target_y), tc="d")
        G = matrix(-np.eye(dim), tc="d")
        h = matrix(np.zeros(dim), tc="d")
        A = matrix(source_y.reshape(1, -1), tc="d")
        b = matrix([1.0], tc="d")
        sol = solvers.qp(P, q, G, h, A, b)
        new_im_weights = np.array(sol["x"])

        # EMA for the weights
        inst.im_weights.data = (1 - inst.ma) * torch.tensor(
            new_im_weights, dtype=torch.float32).to(device) + inst.ma * inst.im_weights.data

    return im_weights_update
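A quick sanity check of what this QP is after: it minimizes the residual of `cov @ w` against the target marginals, subject to `w >= 0` and `source_y . w = 1`. For a perfect classifier the confusion matrix `cov` is just `diag(source_y)`, and the optimum is the true density ratio `target_y / source_y`. NumPy only, no solver (a hand-checked sketch, not the cvxopt path above; the toy marginals are made up):

```python
import numpy as np

source_y = np.array([0.5, 0.3, 0.2])   # source label marginals
target_y = np.array([0.2, 0.2, 0.6])   # target (pseudo-)label marginals
cov = np.diag(source_y)                # "confusion" of a perfect classifier

w = target_y / source_y                # candidate importance weights
residual = np.linalg.norm(cov @ w - target_y)
# w is feasible (non-negative, source_y . w == 1) and drives the residual to 0,
# so it is the QP's optimum in this idealized case.
```

In training, `cov` comes from soft predictions rather than a perfect classifier, which is why the QP (and the EMA smoothing above) is needed instead of a plain division.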

@@ -0,0 +1,256 @@
import numpy as np
from torchvision import transforms
import os
from PIL import Image, ImageOps
import numbers
import random
import torch


class ResizeImage():
    def __init__(self, size):
        if isinstance(size, int):
            self.size = (int(size), int(size))
        else:
            self.size = size

    def __call__(self, img):
        th, tw = self.size
        return img.resize((th, tw))


class RandomSizedCrop(object):
    """Crop the given tensor image (C, H, W) at a random location.
    A square patch of side `size` is cropped at a uniformly random offset.
    (Despite the name, the crop size and aspect ratio are fixed.)
    Args:
        size: side length of the square crop
        interpolation: Default: PIL.Image.BILINEAR (unused by the crop itself)
    """

    def __init__(self, size, interpolation=Image.BILINEAR):
        self.size = size
        self.interpolation = interpolation

    def __call__(self, img):
        h_off = random.randint(0, img.shape[1] - self.size)
        w_off = random.randint(0, img.shape[2] - self.size)
        img = img[:, h_off:h_off + self.size, w_off:w_off + self.size]
        return img


class Normalize(object):
    """Normalize a tensor image with a per-channel mean.
    Given mean: (R, G, B),
    will normalize each channel of the torch.*Tensor, i.e.
    channel = channel - mean
    Args:
        mean (sequence): Sequence of means for R, G, B channels respectively.
        meanfile (str): Path to a .npy mean image, used when `mean` is not given.
    """

    def __init__(self, mean=None, meanfile=None):
        if mean:
            self.mean = mean
        else:
            arr = np.load(meanfile)
            self.mean = torch.from_numpy(arr.astype('float32') / 255.0)[[2, 1, 0], :, :]

    def __call__(self, tensor):
        """
        Args:
            tensor (Tensor): Tensor image of size (C, H, W) to be normalized.
        Returns:
            Tensor: Normalized image.
        """
        # TODO: make efficient
        for t, m in zip(tensor, self.mean):
            t.sub_(m)
        return tensor


class PlaceCrop(object):
    """Crops the given PIL.Image at the particular index.
    Args:
        size (sequence or int): Desired output size of the crop. If size is an
            int instead of sequence like (w, h), a square crop (size, size) is
            made.
    """

    def __init__(self, size, start_x, start_y):
        if isinstance(size, int):
            self.size = (int(size), int(size))
        else:
            self.size = size
        self.start_x = start_x
        self.start_y = start_y

    def __call__(self, img):
        """
        Args:
            img (PIL.Image): Image to be cropped.
        Returns:
            PIL.Image: Cropped image.
        """
        th, tw = self.size
        return img.crop((self.start_x, self.start_y, self.start_x + tw, self.start_y + th))


class ForceFlip(object):
    """Always flip the given PIL.Image horizontally (used for the flipped views of 10-crop evaluation)."""

    def __call__(self, img):
        """
        Args:
            img (PIL.Image): Image to be flipped.
        Returns:
            PIL.Image: Horizontally flipped image.
        """
        return img.transpose(Image.FLIP_LEFT_RIGHT)


class CenterCrop(object):
    """Crops the given tensor image (C, H, W) at the center.
    Args:
        size (sequence or int): Desired output size of the crop. If size is an
            int instead of sequence like (h, w), a square crop (size, size) is
            made.
    """

    def __init__(self, size):
        if isinstance(size, numbers.Number):
            self.size = (int(size), int(size))
        else:
            self.size = size

    def __call__(self, img):
        """
        Args:
            img (Tensor): Image of size (C, H, W) to be cropped.
        Returns:
            Tensor: Cropped image.
        """
        h, w = (img.shape[1], img.shape[2])
        th, tw = self.size
        w_off = int((w - tw) / 2.)
        h_off = int((h - th) / 2.)
        img = img[:, h_off:h_off + th, w_off:w_off + tw]
        return img


def image_train(resize_size=256, crop_size=224, alexnet=False, LeNet=False):

    if LeNet:
        return transforms.Compose([
            transforms.Resize((28, 28)),
            transforms.ToTensor(),
            transforms.Normalize((0.5,), (0.5,))])

    if not alexnet:
        normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                         std=[0.229, 0.224, 0.225])
    else:
        normalize = Normalize(meanfile='./ilsvrc_2012_mean.npy')
    return transforms.Compose([
        ResizeImage(resize_size),
        transforms.RandomResizedCrop(crop_size),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        normalize
    ])


def image_test(resize_size=256, crop_size=224, alexnet=False, LeNet=False):

    if LeNet:
        return transforms.Compose([
            transforms.Resize((28, 28)),
            transforms.ToTensor(),
            transforms.Normalize((0.5,), (0.5,))])

    if not alexnet:
        normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                         std=[0.229, 0.224, 0.225])
    else:
        normalize = Normalize(meanfile='./ilsvrc_2012_mean.npy')
    start_first = 0
    start_center = (resize_size - crop_size - 1) / 2
    start_last = resize_size - crop_size - 1

    return transforms.Compose([
        ResizeImage(resize_size),
        PlaceCrop(crop_size, start_center, start_center),
        transforms.ToTensor(),
        normalize
    ])


def image_test_10crop(resize_size=256, crop_size=224, alexnet=False):
    if not alexnet:
        normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                         std=[0.229, 0.224, 0.225])
    else:
        normalize = Normalize(meanfile='./ilsvrc_2012_mean.npy')
    start_first = 0
    start_center = (resize_size - crop_size - 1) / 2
    start_last = resize_size - crop_size - 1
    # Ten views: four corners and the center, first flipped then unflipped.
    data_transforms = [
        transforms.Compose([
            ResizeImage(resize_size), ForceFlip(),
            PlaceCrop(crop_size, start_first, start_first),
            transforms.ToTensor(),
            normalize
        ]),
        transforms.Compose([
            ResizeImage(resize_size), ForceFlip(),
            PlaceCrop(crop_size, start_last, start_last),
            transforms.ToTensor(),
            normalize
        ]),
        transforms.Compose([
            ResizeImage(resize_size), ForceFlip(),
            PlaceCrop(crop_size, start_last, start_first),
            transforms.ToTensor(),
            normalize
        ]),
        transforms.Compose([
            ResizeImage(resize_size), ForceFlip(),
            PlaceCrop(crop_size, start_first, start_last),
            transforms.ToTensor(),
            normalize
        ]),
        transforms.Compose([
            ResizeImage(resize_size), ForceFlip(),
            PlaceCrop(crop_size, start_center, start_center),
            transforms.ToTensor(),
            normalize
        ]),
        transforms.Compose([
            ResizeImage(resize_size),
            PlaceCrop(crop_size, start_first, start_first),
            transforms.ToTensor(),
            normalize
        ]),
        transforms.Compose([
            ResizeImage(resize_size),
            PlaceCrop(crop_size, start_last, start_last),
            transforms.ToTensor(),
            normalize
        ]),
        transforms.Compose([
            ResizeImage(resize_size),
            PlaceCrop(crop_size, start_last, start_first),
            transforms.ToTensor(),
            normalize
        ]),
        transforms.Compose([
            ResizeImage(resize_size),
            PlaceCrop(crop_size, start_first, start_last),
            transforms.ToTensor(),
            normalize
        ]),
        transforms.Compose([
            ResizeImage(resize_size),
            PlaceCrop(crop_size, start_center, start_center),
            transforms.ToTensor(),
            normalize
        ])
    ]
    return data_transforms

@@ -0,0 +1,391 @@
import argparse
import numpy as np
import os
import pickle
import scipy.stats
import sys
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

import loss as loss_func
import network
from data_list import build_uspsmnist, sample_ratios, subsampling


def write_list(f, l):
    f.write(",".join(map(str, l)) + "\n")
    f.flush()
    sys.stdout.flush()


def train(args, model, ad_net,
          source_samples, source_labels, target_samples, target_labels, optimizer, optimizer_ad,
          epoch, start_epoch, method,
          source_label_distribution, out_wei_file,
          cov_mat, pseudo_target_label, class_weights, true_weights):
    model.train()

    cov_mat[:] = 0.0
    pseudo_target_label[:] = 0.0

    len_source = source_labels.shape[0]
    len_target = target_labels.shape[0]

    size = max(len_source, len_target)
    num_iter = int(size / args.batch_size)

    for batch_idx in range(num_iter):
        t = time.time()
        source_idx = np.random.choice(len_source, args.batch_size)
        target_idx = np.random.choice(len_target, args.batch_size)
        data_source, label_source = source_samples[source_idx], source_labels[source_idx]
        data_target, _ = target_samples[target_idx], target_labels[target_idx]

        optimizer.zero_grad()
        optimizer_ad.zero_grad()
        feature, output = model(torch.cat((data_source, data_target), 0))

        if 'IW' in method:
            ys_onehot = torch.zeros(args.batch_size, 10).to(args.device)
            ys_onehot.scatter_(1, label_source.view(-1, 1), 1)
            # Compute weights on source data.
            if 'ORACLE' in method:
                weights = torch.mm(ys_onehot, true_weights)
            else:
                weights = torch.mm(ys_onehot, model.im_weights)

            source_preds, target_preds = output[:args.batch_size], output[args.batch_size:]
            # Compute the aggregated distribution of pseudo-label on the target domain.
            pseudo_target_label += torch.sum(
                F.softmax(target_preds, dim=1), dim=0).view(-1, 1).detach()
            # Update the covariance matrix on the source domain as well.
            cov_mat += torch.mm(F.softmax(source_preds, dim=1).transpose(1, 0), ys_onehot).detach()

            loss = torch.mean(
                nn.CrossEntropyLoss(weight=class_weights, reduction='none')
                (output.narrow(0, 0, data_source.size(0)), label_source) * weights) / 10.0
        else:
            loss = nn.CrossEntropyLoss()(output.narrow(0, 0, data_source.size(0)), label_source)

        if epoch > start_epoch:
            if method == 'CDAN-E':
                softmax_output = nn.Softmax(dim=1)(output)
                entropy = loss_func.Entropy(softmax_output)
                loss += loss_func.CDAN([feature, softmax_output], ad_net, entropy, network.calc_coeff(
                    num_iter * (epoch - start_epoch) + batch_idx), None, device=args.device)

            elif 'IWCDAN-E' in method:
                softmax_output = nn.Softmax(dim=1)(output)
                entropy = loss_func.Entropy(softmax_output)
                loss += loss_func.CDAN([feature, softmax_output], ad_net, entropy, network.calc_coeff(
                    num_iter * (epoch - start_epoch) + batch_idx), None, weights=weights, device=args.device)

            elif method == 'CDAN':
                softmax_output = nn.Softmax(dim=1)(output)
                loss += loss_func.CDAN([feature, softmax_output],
                                       ad_net, None, None, None, device=args.device)

            elif 'IWCDAN' in method:
                softmax_output = nn.Softmax(dim=1)(output)
                loss += loss_func.CDAN([feature, softmax_output],
                                       ad_net, None, None, None, weights=weights, device=args.device)

            elif method == 'DANN':
                loss += loss_func.DANN(feature, ad_net, args.device)

            elif 'IWDAN' in method:
                dloss = loss_func.IWDAN(feature, ad_net, weights)
                loss += args.mu * dloss

            elif method == 'NANN':
                pass

            else:
                raise ValueError('Method cannot be recognized.')

        loss.backward()
        optimizer.step()

        if epoch > start_epoch and method != 'NANN':
            optimizer_ad.step()

    if 'IW' in method and epoch > start_epoch:
        pseudo_target_label /= args.batch_size * num_iter
        cov_mat /= args.batch_size * num_iter
        # Recompute the importance weight by solving a QP.
        model.im_weights_update(source_label_distribution,
                                pseudo_target_label.cpu().detach().numpy(),
                                cov_mat.cpu().detach().numpy(),
                                args.device)
        current_weights = [round(x, 4) for x in model.im_weights.data.cpu().numpy().flatten()]
        write_list(out_wei_file, [np.linalg.norm(
            current_weights - true_weights.cpu().numpy().flatten())] + current_weights)
        print(np.linalg.norm(current_weights - true_weights.cpu().numpy().flatten()), current_weights)
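The `'IW'` branch above turns the per-class importance weights into per-sample weights with a one-hot matmul before scaling the cross-entropy. The same step in NumPy terms, with made-up toy values (shapes mirror `ys_onehot` and `model.im_weights` in the code):

```python
import numpy as np

num_classes = 10
labels = np.array([0, 3, 3, 7])          # a toy source batch of labels
im_weights = np.ones((num_classes, 1))
im_weights[3] = 2.0                      # pretend class 3 is over-represented in the target

ys_onehot = np.zeros((len(labels), num_classes))
ys_onehot[np.arange(len(labels)), labels] = 1.0
weights = ys_onehot @ im_weights         # (batch, 1): each sample gets its class's weight
```

Multiplying the unreduced per-sample cross-entropy by `weights` then reweights source samples as if they were drawn from the (estimated) target label distribution.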


def test(args, epoch, model, test_samples, test_labels, start_time_test, out_log_file, name=''):
    model.eval()
    test_loss = 0
    correct = 0
    len_test = test_labels.shape[0]

    for i in range(len_test):
        data, target = test_samples[i].unsqueeze(0), test_labels[i].unsqueeze(0)
        _, output = model(data)
        test_loss += nn.CrossEntropyLoss()(output, target).item()
        pred = output.data.cpu().max(1, keepdim=True)[1]
        correct += pred.eq(target.data.cpu().view_as(pred)).sum().item()

    test_loss /= len_test
    temp_acc = 100. * correct / len_test
    log_str = " {}, iter: {:05d}, sec: {:.0f}, loss: {:.5f}, accuracy: {}/{}, precision: {:.5f}".format(
        name, epoch, time.time() - start_time_test, test_loss, correct, len_test, temp_acc)
    print(log_str)
    sys.stdout.flush()
    out_log_file.write(log_str + "\n")
    out_log_file.flush()


def main():
|
||||
# Training settings
|
||||
    parser = argparse.ArgumentParser(description='CDAN USPS MNIST')
    parser.add_argument('method', type=str, default='CDAN-E',
                        choices=['CDAN', 'CDAN-E', 'DANN', 'IWDAN', 'NANN', 'IWDANORACLE',
                                 'IWCDAN', 'IWCDANORACLE', 'IWCDAN-E', 'IWCDAN-EORACLE'])
    parser.add_argument('--task', default='mnist2usps', help='task to perform',
                        choices=['usps2mnist', 'mnist2usps'])
    parser.add_argument('--batch_size', type=int, default=64,
                        help='input batch size for training (default: 64)')
    parser.add_argument('--test_batch_size', type=int, default=1000,
                        help='input batch size for testing (default: 1000)')
    parser.add_argument('--epochs', type=int, default=70, metavar='N',
                        help='number of epochs to train (default: 70)')
    parser.add_argument('--lr', type=float, default=0.0, metavar='LR',
                        help='learning rate (0 means use the per-task default of 0.02)')
    parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
                        help='SGD momentum (default: 0.5)')
    parser.add_argument('--seed', type=int, default=42, metavar='S',
                        help='random seed (default: 42)')
    parser.add_argument('--log_interval', type=int, default=50,
                        help='how many batches to wait before logging training status')
    parser.add_argument('--root_folder', type=str, default='data/usps2mnist/',
                        help="The folder containing the datasets and the lists")
    parser.add_argument('--output_dir', type=str, default='results', help="output directory")
    parser.add_argument("-u", "--mu", type=float, default=1.0,
                        help="Hyperparameter of the coefficient of the domain adversarial loss")
    parser.add_argument('--ratio', type=float, default=0, help='ratio option')
    parser.add_argument('--ma', type=float, default=0.5,
                        help='weight for the moving average of iw')
    args = parser.parse_args()
    args.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Run the JSD experiment on fewer epochs for efficiency.
    if args.ratio >= 100:
        args.epochs = 25

    print('Running {} on {} for {} epochs on task {}'.format(
        args.method, args.device, args.epochs, args.task))

    # Set the random number seeds.
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    os.environ["CUDA_VISIBLE_DEVICES"] = '0'

    if args.task == 'usps2mnist':
        # CDAN parameters
        decay_epoch = 6
        decay_frac = 0.5
        lr = 0.02
        start_epoch = 1
        model = network.LeNet(args.ma)
        build_dataset = build_uspsmnist

        source_list = os.path.join(args.root_folder, 'usps_train.txt')
        source_path = os.path.join(args.root_folder, 'usps_train_dataset.pkl')
        target_list = os.path.join(args.root_folder, 'mnist_train.txt')
        target_path = os.path.join(args.root_folder, 'mnist_train_dataset.pkl')
        test_list = os.path.join(args.root_folder, 'mnist_test.txt')
        test_path = os.path.join(args.root_folder, 'mnist_test_dataset.pkl')

    elif args.task == 'mnist2usps':
        # CDAN parameters
        decay_epoch = 5
        decay_frac = 0.5
        lr = 0.02
        start_epoch = 1
        model = network.LeNet(args.ma)
        build_dataset = build_uspsmnist

        source_list = os.path.join(args.root_folder, 'mnist_train.txt')
        source_path = os.path.join(args.root_folder, 'mnist_train_dataset.pkl')
        target_list = os.path.join(args.root_folder, 'usps_train.txt')
        target_path = os.path.join(args.root_folder, 'usps_train_dataset.pkl')
        test_list = os.path.join(args.root_folder, 'usps_test.txt')
        test_path = os.path.join(args.root_folder, 'usps_test_dataset.pkl')

    else:
        raise ValueError('Task {} cannot be recognized.'.format(args.task))

    # Create the output directory before opening the log files inside it.
    if not os.path.exists(args.output_dir):
        os.mkdir(args.output_dir)
    out_log_file = open(os.path.join(args.output_dir, "log.txt"), "w")
    out_log_file_train = open(os.path.join(args.output_dir, "log_train.txt"), "w")

    model = model.to(args.device)
    class_num = 10

    if args.lr > 0:
        lr = args.lr

    print('Starting loading data')
    sys.stdout.flush()
    t_data = time.time()
    if os.path.exists(source_path):
        print('Found existing dataset for source')
        with open(source_path, 'rb') as f:
            [source_samples, source_labels] = pickle.load(f)
        source_samples = torch.Tensor(source_samples).to(args.device)
        source_labels = torch.LongTensor(source_labels).to(args.device)
    else:
        print('Building dataset for source and writing to {}'.format(source_path))
        source_samples, source_labels = build_dataset(
            source_list, source_path, args.root_folder, args.device)

    if os.path.exists(target_path):
        print('Found existing dataset for target')
        with open(target_path, 'rb') as f:
            [target_samples, target_labels] = pickle.load(f)
        target_samples = torch.Tensor(target_samples).to(args.device)
        target_labels = torch.LongTensor(target_labels).to(args.device)
    else:
        print('Building dataset for target and writing to {}'.format(target_path))
        target_samples, target_labels = build_dataset(
            target_list, target_path, args.root_folder, args.device)

    if os.path.exists(test_path):
        print('Found existing dataset for test')
        with open(test_path, 'rb') as f:
            [test_samples, test_labels] = pickle.load(f)
        test_samples = torch.Tensor(test_samples).to(args.device)
        test_labels = torch.LongTensor(test_labels).to(args.device)
    else:
        print('Building dataset for test and writing to {}'.format(test_path))
        test_samples, test_labels = build_dataset(
            test_list, test_path, args.root_folder, args.device)

    print('Data loaded in {}'.format(time.time() - t_data))

    if args.ratio == 1:
        # Ratio option 1: keep 30% of the samples from the first 5 source classes.
        print('Using option 1, i.e. [0.3] * 5 + [1] * 5')
        ratios_source = [0.3] * 5 + [1] * 5
        ratios_target = [1] * 10
    elif args.ratio >= 200:
        # Subsample the source domain with a pre-computed random ratio vector.
        s_ = subsampling[int(args.ratio) % 100]
        ratios_source = s_[0]
        ratios_target = [1] * 10
        print('Using random subset ratio {} of the source, with theoretical jsd {}'.format(
            args.ratio, s_[1]))
    elif 200 > args.ratio >= 100:
        # Subsample the target domain with a pre-computed random ratio vector.
        s_ = subsampling[int(args.ratio) % 100]
        ratios_source = [1] * 10
        ratios_target = s_[0]
        print('Using random subset ratio {} of the target, with theoretical jsd {}'.format(
            args.ratio, s_[1]))
    else:
        # Original, unmodified datasets.
        print('Using original datasets')
        ratios_source = [1] * 10
        ratios_target = [1] * 10
    ratios_test = ratios_target

    # Subsample the datasets if need be.
    source_samples, source_labels = sample_ratios(
        source_samples, source_labels, ratios_source)
    target_samples, target_labels = sample_ratios(
        target_samples, target_labels, ratios_target)
    test_samples, test_labels = sample_ratios(
        test_samples, test_labels, ratios_test)
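The calls above pass per-class ratio vectors to `sample_ratios`. As a rough illustration of what such a helper does (a hedged sketch, not the repo's actual `sample_ratios` from `data_list.py`; the name `subsample_per_class` and the explicit seeding are my own), each class `c` keeps a fraction `ratios[c]` of its examples:

```python
import numpy as np

def subsample_per_class(samples, labels, ratios, seed=0):
    """Keep a fraction ratios[c] of the samples of each class c.

    Illustrative sketch only; the repo's sample_ratios may differ in
    rounding and in how it draws the retained indices.
    """
    rng = np.random.RandomState(seed)
    keep = []
    for c, r in enumerate(ratios):
        idx = np.where(labels == c)[0]
        n_keep = int(round(r * len(idx)))
        keep.extend(rng.choice(idx, size=n_keep, replace=False))
    keep = np.sort(np.array(keep, dtype=int))
    return samples[keep], labels[keep]
```

With `ratios = [0.3] * 5 + [1] * 5` this reproduces the "option 1" shift: the first half of the classes is reduced to 30% of its samples while the rest is untouched.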

    # Compute the label distributions on the source, target and test domains.
    source_label_distribution = np.zeros((class_num))
    for img in source_labels:
        source_label_distribution[int(img.item())] += 1
    print("Total source samples: {}".format(
        np.sum(source_label_distribution)), flush=True)
    print("Source samples per class: {}".format(source_label_distribution))
    source_label_distribution /= np.sum(source_label_distribution)
    write_list(out_log_file, source_label_distribution)
    print("Source label distribution: {}".format(source_label_distribution))

    target_label_distribution = np.zeros((class_num))
    for img in target_labels:
        target_label_distribution[int(img.item())] += 1
    print("Total target samples: {}".format(
        np.sum(target_label_distribution)), flush=True)
    print("Target samples per class: {}".format(target_label_distribution))
    target_label_distribution /= np.sum(target_label_distribution)
    write_list(out_log_file, target_label_distribution)
    print("Target label distribution: {}".format(target_label_distribution))

    test_label_distribution = np.zeros((class_num))
    for img in test_labels:
        test_label_distribution[int(img.item())] += 1
    print("Test samples per class: {}".format(test_label_distribution))
    test_label_distribution /= np.sum(test_label_distribution)
    write_list(out_log_file, test_label_distribution)
    print("Test label distribution: {}".format(test_label_distribution))

    # Jensen-Shannon divergences between the label distributions.
    mixture = (source_label_distribution + target_label_distribution) / 2
    jsd = (scipy.stats.entropy(source_label_distribution, qk=mixture)
           + scipy.stats.entropy(target_label_distribution, qk=mixture)) / 2
    print("JSD source to target : {}".format(jsd))
    mixture_2 = (test_label_distribution + target_label_distribution) / 2
    jsd_2 = (scipy.stats.entropy(test_label_distribution, qk=mixture_2)
             + scipy.stats.entropy(target_label_distribution, qk=mixture_2)) / 2
    print("JSD test to target : {}".format(jsd_2))
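The two computations above are the Jensen-Shannon divergence between label distributions: the average of the KL divergences of each distribution to their mixture. Factored out as a standalone, numpy-only helper (the function name `jsd` is mine; natural-log base, matching the default of `scipy.stats.entropy` used above):

```python
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete label distributions,
    using natural logarithms (the convention scipy.stats.entropy follows)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = (p + q) / 2

    def kl(a, b):
        # KL(a || b), with the convention 0 * log(0 / b) = 0.
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return (kl(p, m) + kl(q, m)) / 2
```

It is 0 for identical distributions and reaches its maximum, ln 2, for distributions with disjoint support, which is why it serves here as a bounded measure of label shift between domains.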
    out_wei_file = open(os.path.join(args.output_dir, "log_weights_{}.txt".format(jsd)), "w")
    write_list(out_wei_file, [round(x, 4) for x in source_label_distribution])
    write_list(out_wei_file, [round(x, 4) for x in target_label_distribution])
    out_wei_file.write(str(jsd) + "\n")
    true_weights = torch.tensor(
        target_label_distribution / source_label_distribution,
        dtype=torch.float, requires_grad=False)[:, None].to(args.device)
    print("True weights : {}".format(true_weights[:, 0].cpu().numpy()))

    if 'CDAN' in args.method:
        ad_net = network.AdversarialNetwork(
            model.output_num() * class_num, 500, sigmoid='WDANN' not in args.method)
    else:
        ad_net = network.AdversarialNetwork(
            model.output_num(), 500, sigmoid='WDANN' not in args.method)
    ad_net = ad_net.to(args.device)

    optimizer = optim.SGD(model.parameters(), lr=lr,
                          weight_decay=0.0005, momentum=0.9)
    optimizer_ad = optim.SGD(
        ad_net.parameters(), lr=lr, weight_decay=0.0005, momentum=0.9)

    # Maintain two quantities for the QP.
    cov_mat = torch.tensor(np.zeros((class_num, class_num), dtype=np.float32),
                           requires_grad=False).to(args.device)
    pseudo_target_label = torch.tensor(np.zeros((class_num, 1), dtype=np.float32),
                                       requires_grad=False).to(args.device)
    # Maintain one weight vector for BER.
    class_weights = torch.tensor(
        1.0 / source_label_distribution, dtype=torch.float, requires_grad=False).to(args.device)

    for epoch in range(1, args.epochs + 1):
        start_time_test = time.time()
        if epoch % decay_epoch == 0:
            for param_group in optimizer.param_groups:
                param_group["lr"] = param_group["lr"] * decay_frac
        test(args, epoch, model, test_samples, test_labels,
             start_time_test, out_log_file, name='Target test')
        train(args, model, ad_net, source_samples, source_labels,
              target_samples, target_labels, optimizer, optimizer_ad,
              epoch, start_epoch, args.method, source_label_distribution,
              out_wei_file, cov_mat, pseudo_target_label, class_weights, true_weights)

    # Final evaluation after the last training epoch.
    test(args, epoch + 1, model, test_samples, test_labels,
         start_time_test, out_log_file, name='Target test')
    test(args, epoch + 1, model, source_samples, source_labels,
         start_time_test, out_log_file_train, name='Source train')


if __name__ == '__main__':
    main()
@ -0,0 +1,502 @@
import argparse
import numpy as np
import os
import os.path as osp
import pickle
import scipy.stats
import sys
import time
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
import torch.nn.functional as F

import data_list
from data_list import ImageList, LoadedImageList, sample_ratios, write_list
import loss
import lr_schedule
import math
import network
import pre_process as prep
import random


def image_classification_test_loaded(test_samples, test_labels, model, test_10crop=True, device='cpu'):
    # NB: the non-10crop branch reads the device from the module-level `config`
    # defined in the __main__ block below.
    with torch.no_grad():
        test_loss = 0
        correct = 0
        if test_10crop:
            len_test = test_labels[0].shape[0]
            for i in range(len_test):
                outputs = []
                for j in range(10):
                    data, target = test_samples[j][i, :, :, :].unsqueeze(0), test_labels[j][i].unsqueeze(0)
                    _, output = model(data)
                    test_loss += nn.CrossEntropyLoss()(output, target).item()
                    outputs.append(nn.Softmax(dim=1)(output))
                outputs = sum(outputs)
                pred = torch.max(outputs, 1)[1]
                correct += pred.eq(target.data.cpu().view_as(pred)).sum().item()
        else:
            len_test = test_labels.shape[0]
            bs = 72
            for i in range(int(len_test / bs)):
                data = torch.Tensor(test_samples[bs*i:bs*(i+1), :, :, :]).to(config["device"])
                target = test_labels[bs*i:bs*(i+1)]
                _, output = model(data)
                test_loss += nn.CrossEntropyLoss()(output, target).item()
                pred = torch.max(output, 1)[1]
                correct += pred.eq(target.data.view_as(pred)).sum().item()
            # Last, partial batch of test samples.
            data = torch.Tensor(test_samples[bs*(i+1):, :, :, :]).to(config["device"])
            target = test_labels[bs*(i+1):]
            _, output = model(data)
            test_loss += nn.CrossEntropyLoss()(output, target).item()
            pred = torch.max(output, 1)[1]
            correct += pred.eq(target.data.view_as(pred)).sum().item()
        accuracy = correct / len_test
        test_loss /= len_test * 10
        return accuracy


def train(config):

    ## Record the start time
    start_time = time.time()

    ## set pre-process
    prep_dict = {}
    prep_config = config["prep"]
    prep_dict["source"] = prep.image_train(**config["prep"]['params'])
    prep_dict["target"] = prep.image_train(**config["prep"]['params'])
    if prep_config["test_10crop"]:
        prep_dict["test"] = prep.image_test_10crop(**config["prep"]['params'])
    else:
        prep_dict["test"] = prep.image_test(**config["prep"]['params'])

    ## prepare data
    print("Preparing data", flush=True)
    dsets = {}
    dset_loaders = {}
    data_config = config["data"]
    train_bs = data_config["source"]["batch_size"]
    test_bs = data_config["test"]["batch_size"]
    root_folder = data_config["root_folder"]
    dsets["source"] = ImageList(open(osp.join(root_folder, data_config["source"]["list_path"])).readlines(),
                                transform=prep_dict["source"], root_folder=root_folder, ratios=config["ratios_source"])
    dset_loaders["source"] = DataLoader(dsets["source"], batch_size=train_bs,
                                        shuffle=True, num_workers=4, drop_last=True)
    dsets["target"] = ImageList(open(osp.join(root_folder, data_config["target"]["list_path"])).readlines(),
                                transform=prep_dict["target"], root_folder=root_folder, ratios=config["ratios_target"])
    dset_loaders["target"] = DataLoader(dsets["target"], batch_size=train_bs,
                                        shuffle=True, num_workers=4, drop_last=True)

    if prep_config["test_10crop"]:
        dsets["test"] = [ImageList(open(osp.join(root_folder, data_config["test"]["list_path"])).readlines(),
                                   transform=prep_dict["test"][i], root_folder=root_folder,
                                   ratios=config["ratios_test"]) for i in range(10)]
        dset_loaders["test"] = [DataLoader(dset, batch_size=test_bs,
                                           shuffle=False, num_workers=4) for dset in dsets['test']]
    else:
        dsets["test"] = ImageList(open(osp.join(root_folder, data_config["test"]["list_path"])).readlines(),
                                  transform=prep_dict["test"], root_folder=root_folder, ratios=config["ratios_test"])
        dset_loaders["test"] = DataLoader(dsets["test"], batch_size=test_bs,
                                          shuffle=False, num_workers=4)

    test_path = os.path.join(root_folder, data_config["test"]["dataset_path"])
    if os.path.exists(test_path):
        print('Found existing dataset for test', flush=True)
        with open(test_path, 'rb') as f:
            [test_samples, test_labels] = pickle.load(f)
        test_labels = torch.LongTensor(test_labels).to(config["device"])
    else:
        print('Missing test dataset', flush=True)
        print('Building dataset for test and writing to {}'.format(test_path), flush=True)
        if prep_config["test_10crop"]:
            dsets_test = [ImageList(open(osp.join(root_folder, data_config["test"]["list_path"])).readlines(),
                                    transform=prep_dict["test"][i], root_folder=root_folder) for i in range(10)]
            loaded_dsets_test = [LoadedImageList(dset_test) for dset_test in dsets_test]
            test_samples = [loaded_dset_test.samples.numpy() for loaded_dset_test in loaded_dsets_test]
            test_labels = [loaded_dset_test.targets.numpy() for loaded_dset_test in loaded_dsets_test]
            with open(test_path, 'wb') as f:
                pickle.dump([test_samples, test_labels], f)
        else:
            dset_test = ImageList(open(osp.join(root_folder, data_config["test"]["list_path"])).readlines(),
                                  transform=prep_dict["test"], root_folder=root_folder, ratios=config['ratios_test'])
            loaded_dset_test = LoadedImageList(dset_test)
            test_samples, test_labels = loaded_dset_test.samples.numpy(), loaded_dset_test.targets.numpy()
            with open(test_path, 'wb') as f:
                pickle.dump([test_samples, test_labels], f)

    class_num = config["network"]["params"]["class_num"]
    test_samples, test_labels = sample_ratios(
        test_samples, test_labels, config['ratios_test'])

    # Compute the label distributions on the source and target domains.
    source_label_distribution = np.zeros((class_num))
    for img in dsets["source"].imgs:
        source_label_distribution[img[1]] += 1
    print("Total source samples: {}".format(np.sum(source_label_distribution)), flush=True)
    print("Source samples per class: {}".format(source_label_distribution), flush=True)
    source_label_distribution /= np.sum(source_label_distribution)
    print("Source label distribution: {}".format(source_label_distribution), flush=True)

    target_label_distribution = np.zeros((class_num))
    for img in dsets["target"].imgs:
        target_label_distribution[img[1]] += 1
    print("Total target samples: {}".format(
        np.sum(target_label_distribution)), flush=True)
    print("Target samples per class: {}".format(target_label_distribution), flush=True)
    target_label_distribution /= np.sum(target_label_distribution)
    print("Target label distribution: {}".format(target_label_distribution), flush=True)

    mixture = (source_label_distribution + target_label_distribution) / 2
    jsd = (scipy.stats.entropy(source_label_distribution, qk=mixture)
           + scipy.stats.entropy(target_label_distribution, qk=mixture)) / 2
    print("JSD : {}".format(jsd), flush=True)

    test_label_distribution = np.zeros((class_num))
    for img in test_labels:
        test_label_distribution[int(img.item())] += 1
    print("Test samples per class: {}".format(test_label_distribution), flush=True)
    test_label_distribution /= np.sum(test_label_distribution)
    print("Test label distribution: {}".format(test_label_distribution), flush=True)
    write_list(config["out_wei_file"], [round(x, 4) for x in test_label_distribution])
    write_list(config["out_wei_file"], [round(x, 4) for x in source_label_distribution])
    write_list(config["out_wei_file"], [round(x, 4) for x in target_label_distribution])
    true_weights = torch.tensor(
        target_label_distribution / source_label_distribution,
        dtype=torch.float, requires_grad=False)[:, None].to(config["device"])
    print("True weights : {}".format(true_weights[:, 0].cpu().numpy()))
    config["out_wei_file"].write(str(jsd) + "\n")

    ## set base network
    net_config = config["network"]
    base_network = net_config["name"](**net_config["params"])
    base_network = base_network.to(config["device"])

    ## add an additional network for some methods
    if config["loss"]["random"]:
        random_layer = network.RandomLayer([base_network.output_num(), class_num], config["loss"]["random_dim"])
        ad_net = network.AdversarialNetwork(config["loss"]["random_dim"], 1024)
    else:
        random_layer = None
        if 'CDAN' in config['method']:
            ad_net = network.AdversarialNetwork(base_network.output_num() * class_num, 1024)
        else:
            ad_net = network.AdversarialNetwork(base_network.output_num(), 1024)
    if config["loss"]["random"]:
        random_layer.to(config["device"])
    ad_net = ad_net.to(config["device"])
    parameter_list = ad_net.get_parameters() + base_network.get_parameters()
    parameter_list[-1]["lr_mult"] = config["lr_mult_im"]

    ## set optimizer
    optimizer_config = config["optimizer"]
    optimizer = optimizer_config["type"](parameter_list,
                                         **(optimizer_config["optim_params"]))
    param_lr = []
    for param_group in optimizer.param_groups:
        param_lr.append(param_group["lr"])
    schedule_param = optimizer_config["lr_param"]
    lr_scheduler = lr_schedule.schedule_dict[optimizer_config["lr_type"]]

    # Maintain two quantities for the QP.
    cov_mat = torch.tensor(np.zeros((class_num, class_num), dtype=np.float32),
                           requires_grad=False).to(config["device"])
    pseudo_target_label = torch.tensor(np.zeros((class_num, 1), dtype=np.float32),
                                       requires_grad=False).to(config["device"])
    # Maintain one weight vector for BER.
    class_weights = torch.tensor(
        1.0 / source_label_distribution, dtype=torch.float, requires_grad=False).to(config["device"])

    gpus = config['gpu'].split(',')
    if len(gpus) > 1:
        ad_net = nn.DataParallel(ad_net, device_ids=[int(i) for i in gpus])
        base_network = nn.DataParallel(base_network, device_ids=[int(i) for i in gpus])

    ## train
    len_train_source = len(dset_loaders["source"])
    len_train_target = len(dset_loaders["target"])
    transfer_loss_value = classifier_loss_value = total_loss_value = 0.0
    best_acc = 0.0

    print("Preparations done in {:.0f} seconds".format(time.time() - start_time), flush=True)
    print("Starting training for {} iterations using method {}".format(
        config["num_iterations"], config['method']), flush=True)
    start_time_test = start_time = time.time()
    for i in range(config["num_iterations"]):
        if i % config["test_interval"] == config["test_interval"] - 1:
            base_network.train(False)
            temp_acc = image_classification_test_loaded(
                test_samples, test_labels, base_network, test_10crop=prep_config["test_10crop"])
            temp_model = nn.Sequential(base_network)
            if temp_acc > best_acc:
                best_acc = temp_acc
            log_str = " iter: {:05d}, sec: {:.0f}, class: {:.5f}, da: {:.5f}, precision: {:.5f}".format(
                i, time.time() - start_time_test, classifier_loss_value, transfer_loss_value, temp_acc)
            config["out_log_file"].write(log_str + "\n")
            config["out_log_file"].flush()
            print(log_str, flush=True)
            if 'IW' in config['method']:
                current_weights = [round(x, 4) for x in base_network.im_weights.data.cpu().numpy().flatten()]
                # write_list(config["out_wei_file"], current_weights)
                print(current_weights, flush=True)
            start_time_test = time.time()
        if i % 500 == -1:
            # Disabled timing log (this condition is never true).
            print("{} iterations in {} seconds".format(i, time.time() - start_time), flush=True)

        loss_params = config["loss"]
        ## train one iteration
        base_network.train(True)
        ad_net.train(True)
        optimizer = lr_scheduler(optimizer, i, **schedule_param)
        optimizer.zero_grad()

        t = time.time()
        if i % len_train_source == 0:
            iter_source = iter(dset_loaders["source"])
        if i % len_train_target == 0:
            iter_target = iter(dset_loaders["target"])
        inputs_source, label_source = next(iter_source)
        inputs_target, _ = next(iter_target)
        inputs_source, inputs_target, label_source = inputs_source.to(config["device"]), \
            inputs_target.to(config["device"]), label_source.to(config["device"])
        features_source, outputs_source = base_network(inputs_source)
        features_target, outputs_target = base_network(inputs_target)
        features = torch.cat((features_source, features_target), dim=0)
        outputs = torch.cat((outputs_source, outputs_target), dim=0)
        softmax_out = nn.Softmax(dim=1)(outputs)

        if 'IW' in config['method']:
            ys_onehot = torch.zeros(train_bs, class_num).to(config["device"])
            ys_onehot.scatter_(1, label_source.view(-1, 1), 1)

            # Compute weights on source data.
            if 'ORACLE' in config['method']:
                weights = torch.mm(ys_onehot, true_weights)
            else:
                weights = torch.mm(ys_onehot, base_network.im_weights)

            source_preds, target_preds = outputs[:train_bs], outputs[train_bs:]
            # Compute the aggregated distribution of pseudo-labels on the target domain.
            pseudo_target_label += torch.sum(
                F.softmax(target_preds, dim=1), dim=0).view(-1, 1).detach()
            # Update the covariance matrix on the source domain as well.
            cov_mat += torch.mm(F.softmax(source_preds,
                                          dim=1).transpose(1, 0), ys_onehot).detach()

        if config['method'] == 'CDAN-E':
            classifier_loss = nn.CrossEntropyLoss()(outputs_source, label_source)
            entropy = loss.Entropy(softmax_out)
            transfer_loss = loss.CDAN([features, softmax_out], ad_net, entropy, network.calc_coeff(i), random_layer)
            total_loss = loss_params["trade_off"] * transfer_loss + classifier_loss

        elif 'IWCDAN-E' in config['method']:
            classifier_loss = torch.mean(
                nn.CrossEntropyLoss(weight=class_weights, reduction='none')
                (outputs_source, label_source) * weights) / class_num
            entropy = loss.Entropy(softmax_out)
            transfer_loss = loss.CDAN(
                [features, softmax_out], ad_net, entropy, network.calc_coeff(i), random_layer,
                weights=weights, device=config["device"])
            total_loss = loss_params["trade_off"] * transfer_loss + classifier_loss

        elif config['method'] == 'CDAN':
            classifier_loss = nn.CrossEntropyLoss()(outputs_source, label_source)
            transfer_loss = loss.CDAN([features, softmax_out], ad_net, None, None, random_layer)
            total_loss = loss_params["trade_off"] * transfer_loss + classifier_loss

        elif 'IWCDAN' in config['method']:
            classifier_loss = torch.mean(
                nn.CrossEntropyLoss(weight=class_weights, reduction='none')
                (outputs_source, label_source) * weights) / class_num
            transfer_loss = loss.CDAN([features, softmax_out], ad_net, None, None, random_layer, weights=weights)
            total_loss = loss_params["trade_off"] * transfer_loss + classifier_loss

        elif config['method'] == 'DANN':
            classifier_loss = nn.CrossEntropyLoss()(outputs_source, label_source)
            transfer_loss = loss.DANN(features, ad_net, config["device"])
            total_loss = loss_params["trade_off"] * transfer_loss + classifier_loss

        elif 'IWDAN' in config['method']:
            classifier_loss = torch.mean(
                nn.CrossEntropyLoss(weight=class_weights, reduction='none')
                (outputs_source, label_source) * weights) / class_num
            transfer_loss = loss.IWDAN(features, ad_net, weights)
            total_loss = loss_params["trade_off"] * transfer_loss + classifier_loss

        elif config['method'] == 'NANN':
            classifier_loss = nn.CrossEntropyLoss()(outputs_source, label_source)
            total_loss = classifier_loss
        else:
            raise ValueError('Method cannot be recognized.')

        total_loss.backward()
        optimizer.step()

        transfer_loss_value = 0 if config['method'] == 'NANN' else transfer_loss.item()
        classifier_loss_value = classifier_loss.item()
        total_loss_value = transfer_loss_value + classifier_loss_value

        if ('IW' in config['method']) and \
                i % (config["dataset_mult_iw"] * len_train_source) == config["dataset_mult_iw"] * len_train_source - 1:

            pseudo_target_label /= train_bs * len_train_source * config["dataset_mult_iw"]
            cov_mat /= train_bs * len_train_source * config["dataset_mult_iw"]
            print(i, np.sum(cov_mat.cpu().detach().numpy()), train_bs * len_train_source)

            # Recompute the importance weights by solving a QP.
            base_network.im_weights_update(source_label_distribution,
                                           pseudo_target_label.cpu().detach().numpy(),
                                           cov_mat.cpu().detach().numpy(),
                                           config["device"])
            current_weights = [
                round(x, 4) for x in base_network.im_weights.data.cpu().numpy().flatten()]
            write_list(config["out_wei_file"], [np.linalg.norm(
                current_weights - true_weights.cpu().numpy().flatten())] + current_weights)
            print(np.linalg.norm(current_weights -
                                 true_weights.cpu().numpy().flatten()), current_weights)

            cov_mat[:] = 0.0
            pseudo_target_label[:] = 0.0
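The `im_weights_update` call estimates the importance weights w(y) = p_T(y) / p_S(y) from the accumulated source confusion matrix `cov_mat` (C) and the averaged target pseudo-label distribution `pseudo_target_label` (mu_T), which satisfy C w ≈ mu_T when the classifier is accurate. A minimal unconstrained sketch of that estimation step (ordinary least squares, without the simplex-style constraints the repo's QP enforces; the helper name `estimate_importance_weights` is hypothetical):

```python
import numpy as np

def estimate_importance_weights(cov_mat, target_dist):
    """Solve C w ~= mu_T for the class importance weights w = p_T(y) / p_S(y).

    Unconstrained least-squares sketch; the repo instead solves a QP that
    additionally keeps the implied target distribution valid.
    """
    w, *_ = np.linalg.lstsq(cov_mat, target_dist, rcond=None)
    return w
```

For a perfect classifier, C is diagonal with the source class frequencies on the diagonal, so the recovered weights are exactly the ratio of target to source frequencies.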

    return best_acc


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Conditional Domain Adversarial Network')
    parser.add_argument('method', type=str, choices=[
        'NANN', 'DANN', 'IWDAN', 'IWDANORACLE', 'CDAN', 'IWCDAN', 'IWCDANORACLE',
        'CDAN-E', 'IWCDAN-E', 'IWCDAN-EORACLE'])
    parser.add_argument('--gpu_id', type=str, nargs='?', default='0', help="device id to run")
    parser.add_argument('--net', type=str, default='ResNet50',
                        choices=["ResNet18", "ResNet34", "ResNet50", "ResNet101", "ResNet152",
                                 "VGG11", "VGG13", "VGG16", "VGG19", "VGG11BN", "VGG13BN",
                                 "VGG16BN", "VGG19BN", "AlexNet"],
                        help="Network type. Only tested with ResNet50")
    parser.add_argument('--dset', type=str, default='office-31', choices=['office-31', 'visda', 'office-home'],
                        help="The dataset or source dataset used")
    parser.add_argument('--s_dset_file', type=str, default='amazon_list.txt', help="The source dataset path list")
    parser.add_argument('--t_dset_file', type=str, default='webcam_list.txt', help="The target dataset path list")
    parser.add_argument('--test_interval', type=int, default=500, help="interval between two test phases")
    parser.add_argument('--snapshot_interval', type=int, default=10000, help="interval between two saved models")
    parser.add_argument('--output_dir', type=str, default='results', help="output directory")
    parser.add_argument('--root_folder', type=str, default=None, help="The folder containing the datasets")
    parser.add_argument('--lr', type=float, default=0.001, help="learning rate")
    parser.add_argument('--trade_off', type=float, default=1.0, help="factor for dann")
    parser.add_argument('--random', type=bool, default=False, help="whether to use a random projection")
    parser.add_argument('--seed', type=int, default=42, help="Random seed")
    parser.add_argument('--lr_mult_im', type=int, default=1, help="Multiplicative factor for IM")
    parser.add_argument('--dataset_mult_iw', type=int, default=0,
                        help="Frequency of weight updates in multiples of the dataset. "
                             "Default: 1 for digits and visda, 15 for office datasets")
    parser.add_argument('--num_iterations', type=int, default=100000, help="Number of batch updates")
    parser.add_argument('--ratio', type=int, default=0,
                        help='ratio option. If 0, the original dataset; if 1, only 30%% of the samples '
                             'in the first half of the classes are kept')
    parser.add_argument('--ma', type=float, default=0.5,
                        help='weight for the moving average of iw')
    args = parser.parse_args()

    if args.root_folder is None:
        args.root_folder = 'data/{}/'.format(args.dset)

    if args.s_dset_file != args.t_dset_file:
        # Set GPU ID
        os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu_id

        # Set the random number seeds.
        np.random.seed(args.seed)
        torch.manual_seed(args.seed)

        # train config
        config = {}
        config['method'] = args.method
        config["gpu"] = args.gpu_id
        config["device"] = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        config["num_iterations"] = args.num_iterations
        config["test_interval"] = args.test_interval
        config["snapshot_interval"] = args.snapshot_interval
        config["output_for_test"] = True
        config["output_path"] = args.output_dir
        # Create the output directory before opening the log files inside it.
        if not osp.exists(config["output_path"]):
            os.makedirs(config["output_path"])
        config["out_log_file"] = open(osp.join(config["output_path"], "log.txt"), "w")
        config["out_wei_file"] = open(osp.join(config["output_path"], "log_weights.txt"), "w")

        config["prep"] = {"test_10crop": False, 'params': {"resize_size": 256, "crop_size": 224, 'alexnet': False}}
        config["loss"] = {"trade_off": args.trade_off}
        if "AlexNet" in args.net:
            config["prep"]['params']['alexnet'] = True
            config["prep"]['params']['crop_size'] = 227
            config["network"] = {"name": network.AlexNetFc,
                                 "params": {"use_bottleneck": True, "bottleneck_dim": 256,
                                            "new_cls": True, "ma": args.ma}}
        elif "ResNet" in args.net:
            config["network"] = {"name": network.ResNetFc,
                                 "params": {"resnet_name": args.net, "use_bottleneck": True,
                                            "bottleneck_dim": 256, "new_cls": True, "ma": args.ma}}
        elif "VGG" in args.net:
            config["network"] = {"name": network.VGGFc,
                                 "params": {"vgg_name": args.net, "use_bottleneck": True,
                                            "bottleneck_dim": 256, "new_cls": True, "ma": args.ma}}
        config["loss"]["random"] = args.random
        config["loss"]["random_dim"] = 1024

        config["optimizer"] = {"type": optim.SGD,
                               "optim_params": {'lr': args.lr, "momentum": 0.9,
                                                "weight_decay": 0.0005, "nesterov": True},
                               "lr_type": "inv",
                               "lr_param": {"lr": args.lr, "gamma": 0.001, "power": 0.75}}

        config["dataset"] = args.dset
        config["data"] = {"source": {"list_path": args.s_dset_file, "batch_size": 36},
                          "target": {"list_path": args.t_dset_file, "batch_size": 36},
                          "test": {"list_path": args.t_dset_file,
                                   "dataset_path": "{}_test.pkl".format(args.t_dset_file),
                                   "batch_size": 4},
                          "root_folder": args.root_folder}

        config["lr_mult_im"] = args.lr_mult_im
        if config["dataset"] == "office-31":
            if ("amazon" in args.s_dset_file and "webcam" in args.t_dset_file) or \
               ("webcam" in args.s_dset_file and "dslr" in args.t_dset_file) or \
               ("webcam" in args.s_dset_file and "amazon" in args.t_dset_file) or \
               ("dslr" in args.s_dset_file and "amazon" in args.t_dset_file):
                config["optimizer"]["lr_param"]["lr"] = 0.001  # optimal parameters
            elif ("amazon" in args.s_dset_file and "dslr" in args.t_dset_file) or \
                 ("dslr" in args.s_dset_file and "webcam" in args.t_dset_file):
                config["optimizer"]["lr_param"]["lr"] = 0.0003  # optimal parameters
            config["network"]["params"]["class_num"] = 31
            config["ratios_source"] = [1] * 31
            if args.ratio == 1:
config["ratios_source"] = [0.3] * 15 + [1] * 16
|
||||
config["ratios_target"] = [1] * 31
|
||||
if args.dataset_mult_iw == 0:
|
||||
args.dataset_mult_iw = 15
|
||||
elif config["dataset"] == "visda":
|
||||
config["optimizer"]["lr_param"]["lr"] = 0.001 # optimal parameters
|
||||
config["network"]["params"]["class_num"] = 12
|
||||
config["ratios_source"] = [1] * 12
|
||||
if args.ratio == 1:
|
||||
config["ratios_source"] = [0.3] * 6 + [1] * 6
|
||||
config["ratios_target"] = [1] * 12
|
||||
if args.dataset_mult_iw == 0:
|
||||
args.dataset_mult_iw = 1
|
||||
elif config["dataset"] == "office-home":
|
||||
config["optimizer"]["lr_param"]["lr"] = 0.001 # optimal parameters
|
||||
config["network"]["params"]["class_num"] = 65
|
||||
config["ratios_source"] = [1] * 65
|
||||
if args.ratio == 1:
|
||||
config["ratios_source"] = [0.3] * 32 + [1] * 33
|
||||
config["ratios_target"] = [1] * 65
|
||||
if args.dataset_mult_iw == 0:
|
||||
args.dataset_mult_iw = 15
|
||||
else:
|
||||
raise ValueError('Dataset cannot be recognized. Please define your own dataset here.')
|
||||
|
||||
config["dataset_mult_iw"] = args.dataset_mult_iw
|
||||
config["ratios_test"] = config["ratios_target"]
|
||||
config["out_log_file"].write(str(config) + "\n")
|
||||
config["out_log_file"].flush()
|
||||
|
||||
print("-" * 50, flush=True)
|
||||
print("\nRunning {} on the {} dataset with source {} and target {} and trade off {}\n".format(args.method, args.dset,args.s_dset_file, args.t_dset_file, args.trade_off), flush=True )
|
||||
print("-" * 50, flush=True)
|
||||
train(config)
|
|
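The optimizer config above selects `lr_type` `"inv"` with `lr_param` `{"lr": args.lr, "gamma": 0.001, "power": 0.75}`. As a hedged sketch of what an inverse-decay schedule of this kind typically computes (the actual implementation is looked up in `lr_schedule.schedule_dict`; `inv_lr` is an illustrative name, not the repo's function):

```python
# Illustrative sketch of an inverse-decay ("inv") learning-rate schedule,
# assuming the common (1 + gamma * i)^(-power) form; the real schedule
# lives in lr_schedule.py and may differ in detail.
def inv_lr(lr0, iter_num, gamma=0.001, power=0.75):
    # Starts at lr0 and decays polynomially with the iteration count.
    return lr0 * (1 + gamma * iter_num) ** (-power)

print(inv_lr(0.003, 0))      # lr0 itself at iteration 0
print(inv_lr(0.003, 10000))  # strictly smaller after 10k iterations
```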
@@ -0,0 +1,415 @@
import argparse
import os, sys
import os.path as osp
import pickle
import numpy as np
import scipy.stats
import torch
import torch.nn as nn
import torch.optim as optim
import network
import loss
import pre_process as prep
from torch.utils.data import DataLoader
import torch.nn.functional as F
import lr_schedule
from data_list import ImageList, LoadedImageList, sample_ratios, image_classification_test_loaded, write_list
from torch.autograd import Variable


optim_dict = {"SGD": optim.SGD}


class AverageMeter(object):
    """Computes and stores the average and current value."""
    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count
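`AverageMeter` keeps a batch-size-weighted running mean (note the true division: integer `//` would silently truncate the float averages it is used for). A minimal trace of its arithmetic:

```python
# Trace of the running-mean arithmetic in AverageMeter.update(val, n):
# sum accumulates val * n and count accumulates n, so avg = sum / count
# is the mean weighted by batch size.
sum_, count = 0.0, 0
for val, n in [(2.0, 1), (4.0, 3)]:   # update(2.0, n=1); update(4.0, n=3)
    sum_ += val * n
    count += n
avg = sum_ / count
print(avg)  # (2.0 + 12.0) / 4 = 3.5
```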


def transfer_classification(config):

    use_gpu = torch.cuda.is_available()
    device = 'cuda' if use_gpu else 'cpu'

    ## set pre-process
    prep_dict = {}
    prep_config = config["prep"]
    prep_dict["source"] = prep.image_train(**prep_config['params'])
    prep_dict["target"] = prep.image_train(**prep_config['params'])
    if prep_config["test_10crop"]:
        prep_dict["test"] = prep.image_test_10crop(**prep_config['params'])
    else:
        prep_dict["test"] = prep.image_test(**prep_config['params'])

    ## set loss
    class_criterion = nn.CrossEntropyLoss()
    loss_config = config["loss"]
    transfer_criterion = loss.loss_dict[loss_config["name"]]
    if "params" not in loss_config:
        loss_config["params"] = {}

    ## prepare data
    print("Preparing data", flush=True)
    dsets = {}
    dset_loaders = {}
    data_config = config["data"]
    train_bs = data_config["source"]["batch_size"]
    test_bs = data_config["test"]["batch_size"]
    root_folder = data_config["root_folder"]
    dsets["source"] = ImageList(open(osp.join(root_folder, data_config["source"]["list_path"])).readlines(),
                                transform=prep_dict["source"], root_folder=root_folder, ratios=config["ratios_source"], mode=prep_config['mode'])
    dset_loaders["source"] = DataLoader(dsets["source"], batch_size=train_bs,
                                        shuffle=True, num_workers=4, drop_last=True)
    dsets["target"] = ImageList(open(osp.join(root_folder, data_config["target"]["list_path"])).readlines(),
                                transform=prep_dict["target"], root_folder=root_folder, ratios=config["ratios_target"], mode=prep_config['mode'])
    dset_loaders["target"] = DataLoader(dsets["target"], batch_size=train_bs,
                                        shuffle=True, num_workers=4, drop_last=True)

    if prep_config["test_10crop"]:
        dsets["test"] = [ImageList(open(osp.join(root_folder, data_config["test"]["list_path"])).readlines(),
                                   transform=prep_dict["test"][i], root_folder=root_folder, ratios=config["ratios_test"], mode=prep_config['mode']) for i in range(10)]
        dset_loaders["test"] = [DataLoader(dset, batch_size=test_bs,
                                           shuffle=False, num_workers=4) for dset in dsets['test']]
    else:
        dsets["test"] = ImageList(open(osp.join(root_folder, data_config["test"]["list_path"])).readlines(),
                                  transform=prep_dict["test"], root_folder=root_folder, ratios=config["ratios_test"], mode=prep_config['mode'])
        dset_loaders["test"] = DataLoader(dsets["test"], batch_size=test_bs,
                                          shuffle=False, num_workers=4)

    test_path = os.path.join(root_folder, data_config["test"]["dataset_path"])
    if os.path.exists(test_path):
        print('Found existing dataset for test', flush=True)
        with open(test_path, 'rb') as f:
            [test_samples, test_labels] = pickle.load(f)
            test_labels = torch.LongTensor(test_labels).to(device)
    else:
        print('Missing test dataset', flush=True)
        print('Building dataset for test and writing to {}'.format(test_path), flush=True)
        if prep_config["test_10crop"]:
            dsets_test = [ImageList(open(osp.join(root_folder, data_config["test"]["list_path"])).readlines(),
                                    transform=prep_dict["test"][i], root_folder=root_folder) for i in range(10)]
            loaded_dsets_test = [LoadedImageList(dset_test) for dset_test in dsets_test]
            test_samples, test_labels = [loaded_dset_test.samples.numpy() for loaded_dset_test in loaded_dsets_test], \
                                        [loaded_dset_test.targets.numpy() for loaded_dset_test in loaded_dsets_test]
            with open(test_path, 'wb') as f:
                pickle.dump([test_samples, test_labels], f)
        else:
            dset_test = ImageList(open(osp.join(root_folder, data_config["test"]["list_path"])).readlines(),
                                  transform=prep_dict["test"], root_folder=root_folder, ratios=config['ratios_test'])
            loaded_dset_test = LoadedImageList(dset_test)
            test_samples, test_labels = loaded_dset_test.samples.numpy(), loaded_dset_test.targets.numpy()
            with open(test_path, 'wb') as f:
                pickle.dump([test_samples, test_labels], f)

    class_num = config["network"]["class_num"]
    test_samples, test_labels = sample_ratios(
        test_samples, test_labels, config['ratios_test'])

    ## set base network
    net_config = config["network"]
    base_network = network.network_dict[net_config["name"]](**net_config)
    base_network = base_network.to(device)
    if net_config["use_bottleneck"]:
        bottleneck_layer = nn.Linear(base_network.output_num(), net_config["bottleneck_dim"]).to(device)
        classifier_layer = nn.Linear(bottleneck_layer.out_features, class_num)
    else:
        classifier_layer = nn.Linear(base_network.output_num(), class_num)
    for param in base_network.parameters():
        param.requires_grad = False

    classifier_layer = classifier_layer.to(device)

    ## initialization
    if net_config["use_bottleneck"]:
        bottleneck_layer.weight.data.normal_(0, 0.005)
        bottleneck_layer.bias.data.fill_(0.1)
        bottleneck_layer = nn.Sequential(bottleneck_layer, nn.ReLU(), nn.Dropout(0.5))
    classifier_layer.weight.data.normal_(0, 0.01)
    classifier_layer.bias.data.fill_(0.0)

    ## collect parameters
    if net_config["use_bottleneck"]:
        parameter_list = [{"params": bottleneck_layer.parameters(), "lr": 10},
                          {"params": classifier_layer.parameters(), "lr": 10}]
    else:
        parameter_list = [{"params": classifier_layer.parameters(), "lr": 10}]

    # Compute the label distributions on the source and target domains.
    source_label_distribution = np.zeros(class_num)
    for img in dsets["source"].imgs:
        source_label_distribution[img[1]] += 1
    print("Total source samples: {}".format(np.sum(source_label_distribution)), flush=True)
    print("Source samples per class: {}".format(source_label_distribution), flush=True)
    source_label_distribution /= np.sum(source_label_distribution)
    print("Source label distribution: {}".format(source_label_distribution), flush=True)
    target_label_distribution = np.zeros(class_num)
    for img in dsets["target"].imgs:
        target_label_distribution[img[1]] += 1
    print("Total target samples: {}".format(np.sum(target_label_distribution)), flush=True)
    print("Target samples per class: {}".format(target_label_distribution), flush=True)
    target_label_distribution /= np.sum(target_label_distribution)
    print("Target label distribution: {}".format(target_label_distribution), flush=True)
    mixture = (source_label_distribution + target_label_distribution) / 2
    jsd = (scipy.stats.entropy(source_label_distribution, qk=mixture)
           + scipy.stats.entropy(target_label_distribution, qk=mixture)) / 2
    print("JSD : {}".format(jsd), flush=True)
    true_weights = torch.tensor(
        target_label_distribution / source_label_distribution, dtype=torch.float, requires_grad=False)[:, None].to(device)
    write_list(config["out_wei_file"], [round(x, 4) for x in source_label_distribution])
    write_list(config["out_wei_file"], [round(x, 4) for x in target_label_distribution])
    print("True weights : {}".format(true_weights[:, 0].cpu().numpy()))
    config["out_wei_file"].write(str(jsd) + "\n")
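The block above ends by computing the Jensen-Shannon divergence between the source and target label distributions: the average KL divergence of each distribution to their 50/50 mixture. As a self-contained sketch of the same computation:

```python
import numpy as np
import scipy.stats

# Standalone version of the JSD computation above: average KL divergence
# (scipy.stats.entropy with qk=...) of each label distribution to their
# 50/50 mixture.
def jsd(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mixture = (p + q) / 2
    return (scipy.stats.entropy(p, qk=mixture)
            + scipy.stats.entropy(q, qk=mixture)) / 2
```

Identical distributions give 0; fully disjoint ones give log 2, so the value quantifies how severe the label shift between the two domains is.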

    ## set optimizer
    optimizer_config = config["optimizer"]
    optimizer = optim_dict[optimizer_config["type"]](parameter_list, **(optimizer_config["optim_params"]))
    param_lr = []
    for param_group in optimizer.param_groups:
        param_lr.append(param_group["lr"])
    schedule_param = optimizer_config["lr_param"]
    lr_scheduler = lr_schedule.schedule_dict[optimizer_config["lr_type"]]

    # Maintain two quantities for the QP.
    cov_mat = torch.tensor(np.zeros((class_num, class_num), dtype=np.float32),
                           requires_grad=False).to(device)
    pseudo_target_label = torch.tensor(np.zeros((class_num, 1), dtype=np.float32),
                                       requires_grad=False).to(device)
    # Maintain one weight vector for BER.
    class_weights = torch.tensor(
        1.0 / source_label_distribution, dtype=torch.float, requires_grad=False).to(device)

    ## train
    len_train_source = len(dset_loaders["source"])
    len_train_target = len(dset_loaders["target"])
    mmd_meter = AverageMeter()
    for i in range(config["num_iterations"]):
        ## test during training
        if i % config["test_interval"] == 1:
            base_network.train(False)
            classifier_layer.train(False)
            if net_config["use_bottleneck"]:
                bottleneck_layer.train(False)
                test_acc = image_classification_test_loaded(
                    test_samples, test_labels, nn.Sequential(
                        base_network, bottleneck_layer, classifier_layer), test_10crop=prep_config["test_10crop"], device=device)
            else:
                test_acc = image_classification_test_loaded(
                    test_samples, test_labels, nn.Sequential(
                        base_network, classifier_layer), test_10crop=prep_config["test_10crop"], device=device)

            log_str = 'Iter: %d, mmd = %.4f, test_acc = %.3f' % (
                i, mmd_meter.avg, test_acc)
            print(log_str)
            config["out_log_file"].write(log_str + "\n")
            config["out_log_file"].flush()
            mmd_meter.reset()

        ## train one iteration
        if net_config["use_bottleneck"]:
            bottleneck_layer.train(True)
        classifier_layer.train(True)
        optimizer = lr_scheduler(param_lr, optimizer, i, **schedule_param)
        optimizer.zero_grad()
        if i % len_train_source == 0:
            iter_source = iter(dset_loaders["source"])
        if i % len_train_target == 0:
            iter_target = iter(dset_loaders["target"])
        inputs_source, labels_source = next(iter_source)
        inputs_target, labels_target = next(iter_target)

        inputs_source, inputs_target, labels_source = Variable(inputs_source).to(device), Variable(inputs_target).to(device), Variable(labels_source).to(device)

        inputs = torch.cat((inputs_source, inputs_target), dim=0)

        features = base_network(inputs)
        if net_config["use_bottleneck"]:
            features = bottleneck_layer(features)

        outputs = classifier_layer(features)

        if 'IW' in loss_config["name"]:
            ys_onehot = torch.zeros(train_bs, class_num).to(device)
            ys_onehot.scatter_(1, labels_source.view(-1, 1), 1)

            # Compute weights on source data.
            if 'ORACLE' in loss_config["name"]:
                weights = torch.mm(ys_onehot, true_weights)
            else:
                weights = torch.mm(ys_onehot, base_network.im_weights)

            source_preds, target_preds = outputs[:train_bs], outputs[train_bs:]
            # Accumulate the aggregated pseudo-label distribution on the target domain.
            pseudo_target_label += torch.sum(F.softmax(target_preds, dim=1), dim=0).view(-1, 1).detach()
            # Update the covariance matrix on the source domain as well.
            cov_mat += torch.mm(F.softmax(source_preds, dim=1).transpose(1, 0), ys_onehot).detach()

            classifier_loss = torch.mean(
                nn.CrossEntropyLoss(weight=class_weights, reduction='none')
                (outputs.narrow(0, 0, inputs.size(0) // 2), labels_source) * weights) / class_num
        else:
            classifier_loss = class_criterion(
                outputs.narrow(0, 0, inputs.size(0) // 2), labels_source)

        ## switch between the different transfer losses
        if loss_config["name"] == "DAN" or loss_config["name"] == "DAN_Linear":
            transfer_loss = transfer_criterion(features.narrow(0, 0, features.size(0) // 2),
                                               features.narrow(0, features.size(0) // 2, features.size(0) // 2),
                                               **loss_config["params"])
        elif loss_config["name"] == "JAN" or loss_config["name"] == "JAN_Linear":
            softmax_out = nn.Softmax(dim=1)(outputs)
            transfer_loss = transfer_criterion([features.narrow(0, 0, features.size(0) // 2), softmax_out.narrow(0, 0, softmax_out.size(0) // 2)],
                                               [features.narrow(0, features.size(0) // 2, features.size(0) // 2), softmax_out.narrow(0, softmax_out.size(0) // 2, softmax_out.size(0) // 2)],
                                               **loss_config["params"])
        elif "IWJAN" in loss_config["name"]:
            softmax_out = nn.Softmax(dim=1)(outputs)
            transfer_loss = transfer_criterion([features.narrow(0, 0, features.size(0) // 2), softmax_out.narrow(0, 0, softmax_out.size(0) // 2)],
                                               [features.narrow(0, features.size(0) // 2, features.size(0) // 2), softmax_out.narrow(0, softmax_out.size(0) // 2, softmax_out.size(0) // 2)],
                                               weights=weights, **loss_config["params"])

        mmd_meter.update(transfer_loss.item(), inputs_source.size(0))
        total_loss = loss_config["trade_off"] * transfer_loss + classifier_loss
        total_loss.backward()
        optimizer.step()

        if ('IW' in loss_config["name"]) and i % (config["dataset_mult_iw"] * len_train_source) == config["dataset_mult_iw"] * len_train_source - 1:

            if i > config["dataset_mult_iw"] * len_train_source - 1:
                pseudo_target_label /= train_bs * \
                    len_train_source * config["dataset_mult_iw"]
                cov_mat /= train_bs * len_train_source * config["dataset_mult_iw"]
                print(i, np.sum(cov_mat.cpu().detach().numpy()),
                      train_bs * len_train_source)

                # Recompute the importance weights by solving a QP.
                base_network.im_weights_update(source_label_distribution,
                                               pseudo_target_label.cpu().detach().numpy(),
                                               cov_mat.cpu().detach().numpy(),
                                               device)

                current_weights = [
                    round(x, 4) for x in base_network.im_weights.data.cpu().numpy().flatten()]
                write_list(config["out_wei_file"], [np.linalg.norm(
                    current_weights - true_weights.cpu().numpy().flatten())] + current_weights)
                print(np.linalg.norm(current_weights -
                                     true_weights.cpu().numpy().flatten()), current_weights)

            cov_mat[:] = 0.0
            pseudo_target_label[:] = 0.0
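The comment in the loop above says the importance weights are recomputed "by solving a QP" inside `base_network.im_weights_update`. As a simplified, hypothetical stand-in (it ignores the non-negativity and normalization constraints a real QP would enforce, and `estimate_importance_weights` is an illustrative name, not the repo's API), the estimate inverts the relation between the accumulated soft confusion matrix and the aggregated target pseudo-labels:

```python
import numpy as np

# Hypothetical simplification of the importance-weight update: the soft
# confusion matrix C accumulated on the source domain and the aggregated
# target pseudo-label distribution mu_t satisfy C @ w ~= mu_t, so an
# unconstrained estimate of the class importance weights w is the
# least-squares solution. The repo's im_weights_update solves a
# constrained QP instead.
def estimate_importance_weights(cov_mat, pseudo_target_label):
    w, *_ = np.linalg.lstsq(cov_mat, pseudo_target_label, rcond=None)
    return w

# With a diagonal C (a perfect classifier), w reduces to the ratio of
# target to source class frequencies: 0.5/0.25 and 0.5/0.75 here.
C = np.diag([0.25, 0.75])
mu_t = np.array([0.5, 0.5])
w = estimate_importance_weights(C, mu_t)
print(w)
```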


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Transfer Learning')
    parser.add_argument('method', type=str, help="loss name",
                        choices=['JAN', 'IWJAN', 'IWJANORACLE', 'JAN_Linear', 'DAN', 'DAN_Linear', 'IWJAN_Linear', 'IWJANORACLE_Linear'])
    parser.add_argument('--gpu_id', type=str, nargs='?', default='0', help="device id to run")
    parser.add_argument('--dset', type=str, default='office-31', choices=[
                        'office-31', 'visda', 'office-home'], help="The dataset or source dataset used")
    parser.add_argument('--s_dset_file', type=str, nargs='?',
                        default='train_list.txt', help="source data")
    parser.add_argument('--t_dset_file', type=str, nargs='?',
                        default='validation_list.txt', help="target data")
    parser.add_argument('--trade_off', type=float, nargs='?', default=1.0,
                        help="Trade-off between the transfer and classification losses")
    parser.add_argument('--output_dir', type=str, default='results', help="output directory")
    parser.add_argument('--root_folder', type=str, default=None, help="The folder containing the dataset information")
    parser.add_argument('--seed', type=int, default=42, help="Random seed")
    parser.add_argument('--dataset_mult_iw', type=int, default=0,
                        help="Frequency of weight updates in multiples of the dataset")
    parser.add_argument('--ratio', type=int, default=0,
                        help='Ratio option. If 0, the original dataset is used; if 1, only 30%% of the samples in the first half of the classes are kept')
    parser.add_argument('--ma', type=float, default=0.5, help='Weight for the moving average of the importance weights')
    args = parser.parse_args()
    os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu_id

    if args.root_folder is None:
        args.root_folder = 'data/{}/'.format(args.dset)

    if args.s_dset_file == args.t_dset_file:
        sys.exit()

    # Set the random number seeds.
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)

    config = {}
    config["num_iterations"] = 20000
    config["test_interval"] = 500
    config["output_path"] = args.output_dir
    if not osp.exists(config["output_path"]):
        os.makedirs(config["output_path"])
    config["out_log_file"] = open(
        osp.join(config["output_path"], "log.txt"), "w")
    config["out_wei_file"] = open(
        osp.join(config["output_path"], "log_weights.txt"), "w")
    config["prep"] = {"test_10crop": False, 'params': {"resize_size": 256, "crop_size": 224, 'alexnet': False}, 'mode': 'RGB'}
    config["loss"] = {"name": args.method, "trade_off": args.trade_off}
    config["data"] = {"source": {"list_path": args.s_dset_file, "batch_size": 36},
                      "target": {"list_path": args.t_dset_file, "batch_size": 36},
                      "test": {"list_path": args.t_dset_file, "dataset_path": "{}_test.pkl".format(args.t_dset_file), "batch_size": 4},
                      "root_folder": args.root_folder}
    config["network"] = {"name": "ResNet50", "use_bottleneck": True, "bottleneck_dim": 256, "ma": args.ma}
    config["optimizer"] = {"type": "SGD",
                           "optim_params": {"lr": 1.0, "momentum": 0.9, "weight_decay": 0.0005, "nesterov": True},
                           "lr_type": "inv_mmd", "lr_param": {"gamma": 0.0003, "power": 0.75}}

    config["dataset"] = args.dset

    if config["dataset"] == "office-31":
        config["optimizer"]["lr_param"]["init_lr"] = 0.0003
        config["network"]["class_num"] = 31
        config["ratios_source"] = [1] * 31
        if args.ratio == 1:
            config["ratios_source"] = [0.3] * 15 + [1] * 16
        config["ratios_target"] = [1] * 31
        config["ratios_test"] = [1] * 31
        if args.dataset_mult_iw == 0:
            args.dataset_mult_iw = 15
    elif config["dataset"] == "visda":
        config["optimizer"]["lr_param"]["init_lr"] = 0.001
        config["network"]["class_num"] = 12
        config["ratios_source"] = [1] * 12
        if args.ratio == 1:
            config["ratios_source"] = [0.3] * 6 + [1] * 6
        config["ratios_target"] = [1] * 12
        config["ratios_test"] = [1] * 12
        if args.dataset_mult_iw == 0:
            args.dataset_mult_iw = 1
    elif config["dataset"] == "office-home":
        config["optimizer"]["lr_param"]["init_lr"] = 0.001
        config["network"]["class_num"] = 65
        config["ratios_source"] = [1] * 65
        if args.ratio == 1:
            config["ratios_source"] = [0.3] * 32 + [1] * 33
        config["ratios_target"] = [1] * 65
        config["ratios_test"] = [1] * 65
        if args.dataset_mult_iw == 0:
            args.dataset_mult_iw = 15
    else:
        raise ValueError(
            'Dataset cannot be recognized. Please define your own dataset here.')

    config["dataset_mult_iw"] = args.dataset_mult_iw
    config["out_log_file"].write(str(config) + "\n")
    config["out_log_file"].flush()

    print("-" * 50, flush=True)
    print("\nRunning {} on the {} dataset with source {} and target {}\n".format(
        args.method, args.dset, args.s_dset_file, args.t_dset_file), flush=True)
    print("-" * 50, flush=True)

    transfer_classification(config)
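The `--ma` flag above is described as the weight for the moving average of the importance weights. A hypothetical sketch of such an exponential moving average (`ema_update` is an illustrative name; the actual rule lives inside the network's weight update and may differ in detail):

```python
# Hypothetical exponential-moving-average update for the importance
# weights, as suggested by the --ma help text: each new, noisy estimate
# is blended with the previous weights instead of replacing them.
def ema_update(old_weights, new_estimate, ma=0.5):
    # ma close to 1 trusts the history; ma close to 0 trusts the new estimate.
    return [ma * o + (1 - ma) * n for o, n in zip(old_weights, new_estimate)]

print(ema_update([1.0, 1.0], [3.0, 0.0]))  # [2.0, 0.5]
```

Smoothing of this kind keeps a single bad per-cycle estimate from destabilizing the weighted losses.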