# Neural Architecture Search Without Training

This repository contains code for replicating our paper, [Neural Architecture Search without Training](https://arxiv.org/abs/2006.04647).

## Setup

1. Download the datasets.
2. Download NAS-Bench-201.
3. Install the requirements in a conda environment with `conda env create -f environment.yml`.

We also refer the reader to the instructions in the official NAS-Bench-201 README.
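
If you want to sanity-check the benchmark download before running anything, NAS-Bench-201 ships with a small Python API (the `nas_201_api` package). A minimal check might look like the following; the `.pth` filename here is an assumption, so substitute whichever benchmark version you downloaded:

```python
# Optional sanity check that the NAS-Bench-201 file loads correctly.
# The filename below is an assumption: use the version you downloaded.
from nas_201_api import NASBench201API as API

api = API('NAS-Bench-201-v1_1-096897.pth')
print(len(api))     # number of architectures in the benchmark (15625)
print(api.arch(0))  # architecture string for index 0
```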

## Reproducing our results

To reproduce our results:

```
conda activate nas-wot
./reproduce.sh 3   # average accuracy over 3 runs
./reproduce.sh 500 # average accuracy over 500 runs (this will take longer)
```

Each command will finish by calling `process_results.py`, which will print a table. `./reproduce.sh 3` should print the following table:

| Method | Search time (s) | CIFAR-10 (val) | CIFAR-10 (test) | CIFAR-100 (val) | CIFAR-100 (test) | ImageNet16-120 (val) | ImageNet16-120 (test) |
|--------|-----------------|----------------|-----------------|-----------------|------------------|----------------------|-----------------------|
| Ours (N=10)  | 1.73435 | 89.25 +- 0.08 | 92.21 +- 0.11 | 68.53 +- 0.17 | 68.40 +- 0.14 | 40.42 +- 1.15 | 40.66 +- 0.97 |
| Ours (N=100) | 17.4139 | 89.18 +- 0.29 | 91.76 +- 1.28 | 67.17 +- 2.79 | 67.27 +- 2.68 | 40.84 +- 5.36 | 41.33 +- 5.74 |

`./reproduce.sh 500` will produce the following table (which matches the results reported in the paper):

| Method | Search time (s) | CIFAR-10 (val) | CIFAR-10 (test) | CIFAR-100 (val) | CIFAR-100 (test) | ImageNet16-120 (val) | ImageNet16-120 (test) |
|--------|-----------------|----------------|-----------------|-----------------|------------------|----------------------|-----------------------|
| Ours (N=10)  | 1.73435 | 88.47 +- 1.33 | 91.53 +- 1.62 | 66.49 +- 3.08 | 66.63 +- 3.14 | 38.33 +- 4.98 | 38.33 +- 5.22 |
| Ours (N=100) | 17.4139 | 88.45 +- 1.46 | 91.61 +- 1.71 | 66.42 +- 3.27 | 66.56 +- 3.28 | 36.56 +- 6.70 | 36.37 +- 6.97 |

To try different sample sizes, simply change the `--n_samples` argument in the call to `search.py`, and update the list of sample sizes in `process_results.py` accordingly.
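
For example, to add N=50, something like the following sketch applies; the variable name below is illustrative, so check `process_results.py` for the actual one:

```python
# Run the search with the new sample size first, e.g.:
#   python search.py --n_samples 50
# Then extend the list of sample sizes that process_results.py aggregates
# over, so the table gains an "Ours (N=50)" row:
sample_sizes = [10, 50, 100]
```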

Note that search times may vary from those reported above, depending on your hardware.

## Plotting histograms

To produce the histograms in Figure 1 of the paper, run:

```
python plot_histograms.py
```

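For intuition, here is a minimal sketch of how such score histograms could be drawn with matplotlib. The filenames and data layout below are assumptions for illustration; the actual inputs used by `plot_histograms.py` may differ:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs: one score and one final test accuracy per sampled
# architecture. plot_histograms.py's real data layout may differ.
scores = np.load('results/scores_cifar10.npy')
accs = np.load('results/accs_cifar10.npy')

# Compare score distributions for below- vs above-median-accuracy networks.
median_acc = np.median(accs)
fig, ax = plt.subplots(figsize=(4, 3))
ax.hist(scores[accs < median_acc], bins=50, alpha=0.6, label='below-median accuracy')
ax.hist(scores[accs >= median_acc], bins=50, alpha=0.6, label='above-median accuracy')
ax.set_xlabel('score')
ax.set_ylabel('count')
ax.legend()
fig.tight_layout()
fig.savefig('histogram_cifar10.png')
```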

The code is licensed under the MIT licence.

## Acknowledgements

This repository makes liberal use of code from the AutoDL library. We also rely on NAS-Bench-201.

## Citing us

If you use or build on our work, please consider citing us:

```
@misc{mellor2020neural,
    title={Neural Architecture Search without Training},
    author={Joseph Mellor and Jack Turner and Amos Storkey and Elliot J. Crowley},
    year={2020},
    eprint={2006.04647},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```