Commit graph

196 Commits

Author SHA1 Message Date
Debadeepta Dey f7ed8c886f Added analysis script for comparing synflow with synthetic dataset. 2022-12-16 18:21:59 -03:00
Debadeepta Dey 59e8414e4e Zero cost ranking measures nominally integrated into archai. Requires thorough testing. 2022-12-16 18:21:59 -03:00
Debadeepta Dey e1f335cfb3 Synthetic cifar10 dataset now integrated and tested with natsbench tss architectures. 2022-12-16 18:21:59 -03:00
Debadeepta Dey 8009911fb4 Refactored hog + small neural network classification pipeline. 2022-12-16 18:21:40 -03:00
Debadeepta Dey e5ec907259 Added plotting code to display distribution of architectures. 2022-12-16 18:21:40 -03:00
Debadeepta Dey a652c3bcfc More analysis of experiments on nasbench101. 2022-12-16 18:21:39 -03:00
Debadeepta Dey 16a8b37ef5 Make freezeaddon able to take in different layer activations of nasbench101 as features. 2022-12-16 18:21:39 -03:00
Debadeepta Dey 87c6e61abd Nominal implementation of freezetrain network idea. 2022-12-16 18:21:39 -03:00
Debadeepta Dey ca64dc4156 Modified analysis scripts to deal with nb101 as well. 2022-12-16 18:21:36 -03:00
Debadeepta Dey b09c30de01 Verified that analysis script for regular natsbench also works on nb101 regular eval. 2022-12-16 18:21:24 -03:00
Debadeepta Dey cea8607763 Added main wrapper for nasbench101 2022-12-16 18:21:24 -03:00
Debadeepta Dey 5186db3e28 Created regular nasbench101 evaluation pipeline. 2022-12-16 18:21:24 -03:00
Debadeepta Dey 1091be9b11 Updated with ImageNet16-120 preliminary results. 2022-12-16 18:21:23 -03:00
Debadeepta Dey b5aac7a6b8 Testing imagenet16 dataloader integration. 2022-12-16 18:21:23 -03:00
Debadeepta Dey e56cce2613 Phased freeze training code is tested now. 2022-12-16 18:21:20 -03:00
Debadeepta Dey 6e47ce94c9 cifar100 now works! 2022-12-16 18:21:10 -03:00
Debadeepta Dey 7603941b16 Getting error with running cifar100. 2022-12-16 18:21:07 -03:00
Debadeepta Dey 6a53ce5b67 Got aggregate plots to be decent. 2022-12-16 18:20:42 -03:00
Debadeepta Dey 5835e0d608 Creating scripts for aggregate analysis of experiments. 2022-12-16 18:20:42 -03:00
Debadeepta Dey ddfbfc2a08 Added dumping timing information to regular natsbench evaluation analysis scripts. 2022-12-16 18:20:42 -03:00
Debadeepta Dey 1a67c72ff0 Made proxynas natsbench space yaml freeze trainer have the same params as conditional training at batch size 256. 2022-12-16 18:20:39 -03:00
Debadeepta Dey ee3343b5aa Added script for making cross experiment plots for proxynas. 2022-12-16 18:20:25 -03:00
Debadeepta Dey 0a64c62925 Added wrapper for main to clobber more arch ids together. 2022-12-16 18:20:08 -03:00
Debadeepta Dey ca98fe94a8 Implemented phased freeze training method from SVCCA paper. 2022-12-16 18:20:05 -03:00
Debadeepta Dey cd93c0e436 Made proxynas analysis script more robust. 2022-12-16 18:19:44 -03:00
Debadeepta Dey 61bd3222da Investigating best_val_top1() bug. 2022-12-16 18:19:44 -03:00
Debadeepta Dey ef0d9e1e03 Getting ready to run large batch cell13 natsbench without augmentation. 2022-12-16 18:19:42 -03:00
Debadeepta Dey dd7ed42c8a Nasbench101 experiments with various ranking mechanisms are now ready. 2022-12-16 18:19:04 -03:00
Debadeepta Dey b8f80e06c9 Updated freezetrain natsbench space training to have the same hyperparams as fast natsbench regular training. 2022-12-16 16:53:47 -03:00
Debadeepta Dey 8fbdf48438 Added natsbench regular training with hyperparams closely matching those in the paper. 2022-12-16 16:53:47 -03:00
Debadeepta Dey 7a5bc123f8 Prepared analysis script for conditional naswot training. 2022-12-16 16:53:47 -03:00
Debadeepta Dey 999460c787 Fixed bug where the batch size in the configuration file for the naswot score method was not getting utilized, so the scoring mechanism was using batch size 96, which is used for regular training. 2022-12-16 16:53:46 -03:00
Debadeepta Dey ef84386e87 Getting ready to run naswot conditional training. 2022-12-16 16:53:46 -03:00
Debadeepta Dey 8791e1bc38 Found good parameters for cell13 onwards receiving gradients on natsbench. 2022-12-16 16:53:46 -03:00
Debadeepta Dey 6085cdf0fa Modified analysis_aggregate.py to load logs in distributed mode. 2022-12-16 16:53:46 -03:00
Debadeepta Dey 1fd43a76c6 Parameter hunt with cell13 onwards getting gradients. 2022-12-16 16:53:46 -03:00
Debadeepta Dey 993ba13a72 Parallelized analysis script for natsbench experiments. 2022-12-16 16:53:46 -03:00
Debadeepta Dey 6eac185832 Found good parameters for freeze training. 2022-12-16 16:53:41 -03:00
Debadeepta Dey a2f30a6268 Prepared analysis of freeze training experiments. 2022-12-16 16:53:29 -03:00
Debadeepta Dey be44bd1e41 Refactored freeze training code and made things cleaner. 2022-12-16 16:53:29 -03:00
Debadeepta Dey 1a7616dca0 Refactored experiment report scripts to account for new ranking metrics. 2022-12-16 16:53:24 -03:00
Debadeepta Dey 409b778e89 Added natsbench for rapid evaluation of architecture ranking proxies. 2022-12-16 16:53:11 -03:00
Debadeepta Dey 2f2aaf7208 Starting to play with natsbench api. 2022-12-16 16:53:11 -03:00
Debadeepta Dey 2453524ca6 Refactoring experiment analysis script. 2022-12-16 16:52:40 -03:00
Debadeepta Dey 9a54ffb594 Added new algo manual_freeze which can train a handcrafted network with freezetrainer. 2022-12-16 16:52:40 -03:00
Debadeepta Dey e5bf0312e5 Still having the issue with freeze training where training error is going up after freeze! 2022-12-16 16:52:40 -03:00
Debadeepta Dey 3cfc771f46 Fixed exprep script to report results correctly. 2022-12-16 16:52:40 -03:00
Debadeepta Dey 7e99526376 More progress on freeze training. Testing and debugging underway. 2022-12-16 16:52:40 -03:00
Debadeepta Dey e46f0a48b9 LM1B training is currently broken. 2022-12-16 16:45:54 -03:00
Debadeepta Dey 7f74d16e9a Minor. 2022-12-16 16:45:54 -03:00
Debadeepta Dey f0528dfdf8 Made gpt2 flex config have much finer grained d_model range. Added script for visualizing how memory/latency change as we vary a particular dimension, keeping others constant. 2022-12-16 16:45:50 -03:00
Debadeepta Dey 794cffedf3 Added script for analyzing a corpus of fully trained architectures and evaluating proxy measures like decoder params and total params. 2022-12-16 16:45:48 -03:00
Debadeepta Dey 846f6cb2e0 At the end of training, save summaries to work dir. 2022-12-16 16:45:26 -03:00
Debadeepta Dey f14267f997 Minor. 2022-12-16 16:44:31 -03:00
Debadeepta Dey 69095c8959 Made vocab size of gpt2 models 10k. Added analytical check in check_constraints to speed up candidate finding. 2022-12-16 16:43:31 -03:00
Debadeepta Dey 05db6b8e2c Minor fixes. 2022-12-16 16:42:14 -03:00
Debadeepta Dey 003096b431 Added GPT2 launcher. 2022-12-16 16:42:07 -03:00
Debadeepta Dey ad1a2d39f6 Added total params vs memory, latency plots to search. 2022-12-16 16:42:07 -03:00
Debadeepta Dey b8fdea477f Fixed semi-brute force. 2022-12-16 16:42:02 -03:00
Debadeepta Dey 0877ccae4e Further simplified search. 2022-12-16 16:41:48 -03:00
Debadeepta Dey 911f55e84d Minor. 2022-12-16 16:41:47 -03:00
Debadeepta Dey 34f128aadb Further cleanup. 2022-12-16 16:41:46 -03:00
Debadeepta Dey b924c73015 Cleaned up search code. 2022-12-16 16:41:46 -03:00
Debadeepta Dey f199a28433 Minor. 2022-12-16 16:41:18 -03:00
Debadeepta Dey 415215d9df Typo fixes. 2022-12-16 16:40:38 -03:00
Debadeepta Dey 38ba29c3ea Minor edits and comments. 2022-12-16 16:40:29 -03:00
Debadeepta Dey dda6f5e0e2 Added launch.json entry for distributed launch debug target with environment variable for optionally disabling NCCL P2P. 2022-12-16 16:28:39 -03:00
Debadeepta Dey 917daeed56 Added launch target for transformerxl training. 2022-12-16 16:28:34 -03:00
Shital Shah 23005fb5d1 gpt2 training refactor 2022-12-16 16:27:30 -03:00
Shital Shah 5ff4063fc9 gpt2 script runnable 2022-12-16 16:27:26 -03:00
Shital Shah 4c09c3dc98 imagenet tensor shape mismatch fix 2022-12-16 16:24:48 -03:00
Debadeepta Dey 3de997ef12 Fixed settings.json to not have hardcoded pythonpath. 2022-12-16 16:24:46 -03:00
Debadeepta Dey fee9ac86ea Fixed division by zero edge case in petridish sampler. 2022-12-16 16:24:33 -03:00
Shital Shah 31a000eddb remove python path 2022-12-16 16:24:09 -03:00
Debadeepta Dey 8f83a8f545 Fixed typo. 2022-12-16 16:23:58 -03:00
Shital Shah ce6e015e04 delete exp folder, manual run ignores copying files from search 2022-12-16 16:23:11 -03:00
Shital Shah 840a08fe31 post-merge sync with refactoring 2022-12-16 16:21:39 -03:00
Ubuntu 232efb4bf5 Now multiple seed models will be trained in petridish distributed. 2022-12-16 16:19:34 -03:00
Ubuntu e4e1591acd Fixed settings by removing pythonpath. 2022-12-16 16:19:31 -03:00
Shital Shah 484cfea190 remove explicit python path 2022-12-16 16:17:49 -03:00
Debadeepta Dey 7579f9bcfd Moved petridish_ray_mock.py file to misc folder. 2022-12-16 16:17:49 -03:00
Debadeepta Dey 7c3e3d4b2c Removed redundant petridish debug launcher. 2022-12-16 16:17:02 -03:00
Shital Shah 30ed18c0f4 Updated .md files 2022-12-16 16:13:54 -03:00
Debadeepta Dey 0bd82db1c5 Fixed gs finalizer. Running some jobs on cluster to baseline. 2022-12-16 16:13:46 -03:00
Shital Shah 78aaddb6e2 max_batches bug fix, added Divnas eval full run config 2022-12-16 16:12:25 -03:00
Shital Shah 57a10a2dac general arch params implementation, support for multiple optimizers 2020-05-18 03:12:44 -07:00
Shital Shah fe8d3efc45 diversenad branch merge 2020-05-18 03:12:44 -07:00
Shital Shah 38f921cded Fix gradual warmup. Enable dataset-specific toy mode batch sizes, imagenet toy mode working, disable decay_bn for now 2020-05-18 03:12:43 -07:00
Shital Shah 5a4ef14edb fix windows run 2020-05-18 03:12:42 -07:00
Shital Shah cec9c16780 imagenet support for handcrafted models 2020-05-18 03:12:42 -07:00
Shital Shah 0a5d3cf8ac ImageNet toy mode cell outs verified, better readability in macro builder, move aux tower stride to yaml, prev reduction based on model stems to support imagenet, remove adaptive avg pool from aux tower, remove bn from pool op, converted original darts genotype to archai format 2020-05-18 03:12:42 -07:00
Shital Shah 6720187057 Added ImageNet toy run, multi_step LRs, separate out imagenet folder class, add hue in imagenet aug, module name guess for pre-crafted models, add reduction property in stems, remove imagenet from stem/pool names, combine s4 stem in s2 stem, tune toy.yaml for more realistic toy mode 2020-05-18 03:12:42 -07:00
Shital Shah e40777ef35 resnet eval working in toy 2020-05-18 03:12:41 -07:00
Shital Shah cd6a01a5f4 added eval only test 2020-05-18 03:12:41 -07:00
Shital Shah 0f9f83d59a fix log message, use trainer title in log 2020-05-18 03:12:41 -07:00
Shital Shah af1d639c6e initial 2020-05-18 03:11:07 -07:00