diff --git a/README.md b/README.md
index 7c2f4af..a23067c 100644
--- a/README.md
+++ b/README.md
@@ -17,8 +17,12 @@ Some methods use knowledge distillation (KD), which requires pre-trained models.
 ## [Network Pruning via Transformable Architecture Search](https://arxiv.org/abs/1905.09717)
 In this paper, we proposed a differentiable search strategy for transformable architectures, i.e., searching for the depth and width of a deep neural network.
+You can see the highlights of our Transformable Architecture Search (TAS) on our [project page](https://xuanyidong.com/assets/projects/NeurIPS-2019-TAS.html).
-
+
+
+
+
 
 ### Usage
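The TAS hunk above adds only links, so here is a minimal PyTorch sketch of what "searching for the depth and width" can look like in practice. It is an illustration under assumed names (`WidthSearchConv`, `candidate_widths`), not the repository's implementation: one over-sized convolution serves every candidate width, and a Gumbel-softmax over learnable architecture parameters mixes channel-masked outputs so the width choice stays differentiable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WidthSearchConv(nn.Module):
    """Hypothetical layer sketching TAS-style differentiable width search."""

    def __init__(self, c_in, c_max, candidate_widths, tau=1.0):
        super().__init__()
        # One over-sized convolution; each candidate width is a channel mask.
        self.conv = nn.Conv2d(c_in, c_max, 3, padding=1)
        self.widths = candidate_widths          # e.g. [8, 16, 32], all <= c_max
        self.alpha = nn.Parameter(torch.zeros(len(candidate_widths)))
        self.tau = tau                          # Gumbel-softmax temperature

    def forward(self, x):
        out = self.conv(x)
        # Differentiable sample over candidate widths; gradients reach alpha.
        probs = F.gumbel_softmax(self.alpha, tau=self.tau)
        mixed = 0.0
        for p, w in zip(probs, self.widths):
            mask = out.new_zeros(out.size(1))
            mask[:w] = 1.0                      # keep the first w channels
            mixed = mixed + p * (out * mask.view(1, -1, 1, 1))
        return mixed
```

After the search converges, one would keep only the width with the largest `alpha` and, per the paper's pipeline, train the pruned network with knowledge transfer from the unpruned one; this sketch covers only the search step.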
@@ -46,9 +50,10 @@ args: `cifar10` indicates the dataset name, `ResNet56` indicates the basemodel name
 ## One-Shot Neural Architecture Search via Self-Evaluated Template Network
+
+
 Highlight: we equip one-shot NAS with an architecture sampler and train network weights using uniform sampling.
-
 ### Usage
@@ -64,9 +69,10 @@ Searching code will come soon!
 ## [Searching for A Robust Neural Architecture in Four GPU Hours](http://openaccess.thecvf.com/content_CVPR_2019/papers/Dong_Searching_for_a_Robust_Neural_Architecture_in_Four_GPU_Hours_CVPR_2019_paper.pdf)
-We proposed a gradient-based searching algorithm using differentiable architecture sampling (improving DARTS with Gumbel-softmax sampling).
-
+
+
+We proposed a gradient-based search algorithm using differentiable architecture sampling (improving DARTS with Gumbel-softmax sampling). The old version is located at [`others/GDAS`](https://github.com/D-X-Y/NAS-Projects/tree/master/others/GDAS) and a PaddlePaddle implementation is located at [`others/paddlepaddle`](https://github.com/D-X-Y/NAS-Projects/tree/master/others/paddlepaddle).
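Returning to the SETN hunk above (One-Shot Neural Architecture Search via Self-Evaluated Template Network), its highlight, training network weights with uniform sampling, can be illustrated with a short sketch. The names (`MixedLayer`, `train_step`) are assumptions for illustration, not SETN's code: each training step uniformly samples one candidate operation per layer, so every path of the shared supernet receives gradient updates.

```python
import random
import torch
import torch.nn as nn

class MixedLayer(nn.Module):
    """Hypothetical supernet layer holding several candidate operations."""

    def __init__(self, c):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(c, c, 3, padding=1),
            nn.Conv2d(c, c, 1),
            nn.Identity(),
        ])

    def forward(self, x, choice):
        return self.ops[choice](x)

def train_step(layers, head, x, y, optimizer, criterion):
    # Uniformly sample one op per layer: a single path of the supernet.
    choices = [random.randrange(len(layer.ops)) for layer in layers]
    out = x
    for layer, c in zip(layers, choices):
        out = layer(out, c)
    loss = criterion(head(out.mean(dim=(2, 3))), y)  # global average pool
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

layers = nn.ModuleList([MixedLayer(16) for _ in range(4)])
head = nn.Linear(16, 10)
optimizer = torch.optim.SGD(list(layers.parameters()) + list(head.parameters()), lr=0.1)
x, y = torch.randn(8, 16, 32, 32), torch.randint(0, 10, (8,))
train_step(layers, head, x, y, optimizer, nn.CrossEntropyLoss())
```

SETN's defining extra piece, the self-evaluated architecture sampler that ranks candidate templates, sits on top of this weight-sharing training and is not sketched here.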
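And for the GDAS hunk, a sketch of the Gumbel-softmax sampling it names. `GumbelMixedOp` is again a hypothetical name, not the repository's API: a hard Gumbel-softmax draws a one-hot sample over candidate operations, only the sampled op runs in the forward pass, and the straight-through estimator keeps the architecture parameters trainable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelMixedOp(nn.Module):
    """Hypothetical edge sketching GDAS-style differentiable sampling."""

    def __init__(self, c, tau=10.0):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(c, c, 3, padding=1),
            nn.Conv2d(c, c, 1),
            nn.AvgPool2d(3, stride=1, padding=1),
        ])
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))
        self.tau = tau  # the paper anneals the temperature during search

    def forward(self, x):
        # hard=True: one-hot sample forward, soft gradient backward.
        h = F.gumbel_softmax(self.alpha, tau=self.tau, hard=True)
        idx = int(h.argmax())
        # Only the sampled op is computed; h[idx] (== 1) carries the
        # gradient back to alpha via the straight-through estimator.
        return h[idx] * self.ops[idx](x)
```

Evaluating a single sampled operation per edge, instead of DARTS's weighted sum over all of them, is what cuts the search cost down to the few GPU hours in the title.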