# Neural Architecture Search Without Training

**IMPORTANT**: our codebase relies on the NAS-Bench-201 dataset, and as such makes use of code cloned from the [NAS-Bench-201 repository](https://github.com/D-X-Y/NAS-Bench-201). We have left the copyright notices in the cloned code, including the name of the author of the open-source library that our code relies on.

The datasets can be downloaded by following the instructions in the NAS-Bench-201 README: https://github.com/D-X-Y/NAS-Bench-201.

To reproduce our results:

```bash
conda env create -f environment.yml
conda activate nas-wot
./reproduce.sh 3    # average accuracy over 3 runs
./reproduce.sh 500  # average accuracy over 500 runs (this will take longer)
```

Each command finishes by calling `process_results.py`, which prints a table. `./reproduce.sh 3` should print the following table:

| Method | Search time (s) | CIFAR-10 (val) | CIFAR-10 (test) | CIFAR-100 (val) | CIFAR-100 (test) | ImageNet16-120 (val) | ImageNet16-120 (test) |
|---|---|---|---|---|---|---|---|
| Ours (N=10) | 1.73435 | 88.99 ± 0.24 | 92.42 ± 0.33 | 67.86 ± 0.49 | 67.54 ± 0.75 | 41.16 ± 2.31 | 40.98 ± 2.72 |
| Ours (N=100) | 17.4139 | 89.18 ± 0.29 | 91.76 ± 1.28 | 67.17 ± 2.79 | 67.27 ± 2.68 | 40.84 ± 5.36 | 41.33 ± 5.74 |
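
Each cell reports the mean ± standard deviation of the final accuracy over the requested number of runs. As a minimal illustrative sketch (not the repository's actual code, and with made-up accuracies), one such cell could be computed as:

```python
import numpy as np

# Hypothetical per-run CIFAR-10 validation accuracies from three runs.
accs = np.array([88.75, 89.10, 89.12])

# One table cell: mean and standard deviation across runs.
print(f"{accs.mean():.2f} ± {accs.std():.2f}")
```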

`./reproduce.sh 500` will produce the following table (the one we report in the paper):

| Method | Search time (s) | CIFAR-10 (val) | CIFAR-10 (test) | CIFAR-100 (val) | CIFAR-100 (test) | ImageNet16-120 (val) | ImageNet16-120 (test) |
|---|---|---|---|---|---|---|---|
| Ours (N=10) | 1.73435 | 89.25 ± 0.08 | 92.21 ± 0.11 | 68.53 ± 0.17 | 68.40 ± 0.14 | 40.42 ± 1.15 | 40.66 ± 0.97 |
| Ours (N=100) | 17.4139 | 88.45 ± 1.46 | 91.61 ± 1.71 | 66.42 ± 3.27 | 66.56 ± 3.28 | 36.56 ± 6.70 | 36.37 ± 6.97 |

To try different sample sizes, simply change the `--n_samples` argument in the call to `search.py`, and update the list of sample sizes on line 51 of `process_results.py`; see the sketch below.
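
As a hypothetical example (the `--n_samples` flag is named above; any other flags `search.py` accepts are left at their defaults here), scoring N=50 networks per run might look like:

```bash
# Score 50 randomly sampled architectures per run instead of 10 or 100.
python search.py --n_samples 50
```

After the runs finish, add 50 to the sample-size list on line 51 of `process_results.py` so the new results appear in the printed table.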

The code is licensed under the MIT licence.