diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index c8077fe..0cc0ef2 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -19,7 +19,7 @@ This section guides you through submitting a bug report for AutoDL-Projects.
Following these guidelines helps maintainers and the community understand your report :pencil:, reproduce the behavior :computer:, and find related reports :mag_right:.
When you are creating a bug report, please include as many details as possible.
-Fill out [the required template](https://github.com/D-X-Y/AutoDL-Projects/blob/master/.github/ISSUE_TEMPLATE/bug-report.md). The information it asks for helps us resolve issues faster.
+Fill out [the required template](https://github.com/D-X-Y/AutoDL-Projects/blob/main/.github/ISSUE_TEMPLATE/bug-report.md). The information it asks for helps us resolve issues faster.
> **Note:** If you find a **Closed** issue that seems like it is the same thing that you're experiencing, open a new issue and include a link to the original issue in the body of your new one.
diff --git a/.gitmodules b/.gitmodules
index e693015..e8aba82 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -1,6 +1,3 @@
-[submodule "qlib-git"]
- path = .latent-data/qlib
- url = git@github.com:microsoft/qlib.git
[submodule ".latent-data/qlib"]
path = .latent-data/qlib
url = git@github.com:microsoft/qlib.git
diff --git a/README.md b/README.md
index 934b184..ad35d6d 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@
Automated Deep Learning Projects (AutoDL-Projects) is an open source, lightweight, but useful project for everyone.
This project implemented several neural architecture search (NAS) and hyper-parameter optimization (HPO) algorithms.
-中文介绍见[README_CN.md](README_CN.md)
+中文介绍见[README_CN.md](https://github.com/D-X-Y/AutoDL-Projects/tree/main/docs/README_CN.md)
**Who should consider using AutoDL-Projects**
@@ -36,38 +36,38 @@ At this moment, this project provides the following algorithms and scripts to ru
NAS |
TAS |
Network Pruning via Transformable Architecture Search |
- NeurIPS-2019-TAS.md |
+ NeurIPS-2019-TAS.md |
DARTS |
DARTS: Differentiable Architecture Search |
- ICLR-2019-DARTS.md |
+ ICLR-2019-DARTS.md |
GDAS |
Searching for A Robust Neural Architecture in Four GPU Hours |
- CVPR-2019-GDAS.md |
+ CVPR-2019-GDAS.md |
SETN |
One-Shot Neural Architecture Search via Self-Evaluated Template Network |
- ICCV-2019-SETN.md |
+ ICCV-2019-SETN.md |
NAS-Bench-201 |
NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search |
- NAS-Bench-201.md |
+ NAS-Bench-201.md |
NATS-Bench |
NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size |
- NATS-Bench.md |
+ NATS-Bench.md |
... |
ENAS / REA / REINFORCE / BOHB |
Please check the original papers |
- NAS-Bench-201.md NATS-Bench.md |
+ NAS-Bench-201.md NATS-Bench.md |
HPO |
@@ -79,7 +79,7 @@ At this moment, this project provides the following algorithms and scripts to ru
Basic |
ResNet |
Deep Learning-based Image Classification |
- BASELINE.md |
+ BASELINE.md |
diff --git a/docs/CVPR-2019-GDAS.md b/docs/CVPR-2019-GDAS.md
index f6f2be5..5d43889 100644
--- a/docs/CVPR-2019-GDAS.md
+++ b/docs/CVPR-2019-GDAS.md
@@ -22,7 +22,7 @@ from utils import get_model_infos
flop, param = get_model_infos(net, (1,3,32,32))
```
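For intuition about what `get_model_infos` reports, the per-layer counting for a convolution can be sketched as below. This is a simplified illustration under standard conventions (one multiply-accumulate counted per output element per weight), not the actual implementation in `utils`; the helper names are hypothetical.

```python
# Hypothetical helpers sketching FLOP/parameter counting for one conv layer;
# not the actual get_model_infos implementation.

def conv2d_params(in_c: int, out_c: int, k: int, bias: bool = True) -> int:
    # one k x k kernel per (input channel, output channel) pair, plus biases
    return out_c * in_c * k * k + (out_c if bias else 0)

def conv2d_flops(in_c: int, out_c: int, k: int, out_h: int, out_w: int) -> int:
    # each output element needs in_c * k * k multiply-accumulates
    return out_h * out_w * conv2d_params(in_c, out_c, k, bias=False)

# e.g. a first 3x3 conv of a CIFAR model: 3 -> 16 channels on a 32x32 map
print(conv2d_params(3, 16, 3))         # 448 weights and biases
print(conv2d_flops(3, 16, 3, 32, 32))  # 442368 multiply-accumulates
```

Summing such per-layer counts over a network, for the input shape `(1,3,32,32)` used above, gives the kind of totals that `get_model_infos` returns.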
-2. Different NAS-searched architectures are defined [here](https://github.com/D-X-Y/AutoDL-Projects/blob/master/lib/nas_infer_model/DXYs/genotypes.py).
+2. Different NAS-searched architectures are defined [here](https://github.com/D-X-Y/AutoDL-Projects/blob/main/lib/nas_infer_model/DXYs/genotypes.py).
## Usage
@@ -34,7 +34,7 @@ CUDA_VISIBLE_DEVICES=0 bash ./scripts/nas-infer-train.sh cifar10 GDAS_V1 96 -1
CUDA_VISIBLE_DEVICES=0 bash ./scripts/nas-infer-train.sh cifar100 GDAS_V1 96 -1
CUDA_VISIBLE_DEVICES=0,1,2,3 bash ./scripts/nas-infer-train.sh imagenet-1k GDAS_V1 256 -1
```
-If you are interested in the configs of each NAS-searched architecture, they are defined at [genotypes.py](https://github.com/D-X-Y/AutoDL-Projects/blob/master/lib/nas_infer_model/DXYs/genotypes.py).
+If you are interested in the configs of each NAS-searched architecture, they are defined at [genotypes.py](https://github.com/D-X-Y/AutoDL-Projects/blob/main/lib/nas_infer_model/DXYs/genotypes.py).
### Searching on the NASNet search space
diff --git a/docs/ICCV-2019-SETN.md b/docs/ICCV-2019-SETN.md
index 2050984..c818fd5 100644
--- a/docs/ICCV-2019-SETN.md
+++ b/docs/ICCV-2019-SETN.md
@@ -18,7 +18,7 @@ from utils import get_model_infos
flop, param = get_model_infos(net, (1,3,32,32))
```
-2. Different NAS-searched architectures are defined [here](https://github.com/D-X-Y/AutoDL-Projects/blob/master/lib/nas_infer_model/DXYs/genotypes.py).
+2. Different NAS-searched architectures are defined [here](https://github.com/D-X-Y/AutoDL-Projects/blob/main/lib/nas_infer_model/DXYs/genotypes.py).
## Usage
diff --git a/docs/ICLR-2019-DARTS.md b/docs/ICLR-2019-DARTS.md
index a9a6bfe..eaf3b8b 100644
--- a/docs/ICLR-2019-DARTS.md
+++ b/docs/ICLR-2019-DARTS.md
@@ -16,7 +16,7 @@ This command will start to use the first-order DARTS to search architectures on
CUDA_VISIBLE_DEVICES=0 bash ./scripts-search/DARTS1V-search-NASNet-space.sh cifar10 -1
```
-After searching, if you want to train the searched architecture found by the above scripts, you need to add the config of that architecture (will be printed in log) in [genotypes.py](https://github.com/D-X-Y/AutoDL-Projects/blob/master/lib/nas_infer_model/DXYs/genotypes.py).
+After searching, if you want to train the searched architecture found by the above scripts, you need to add the config of that architecture (it will be printed in the log) to [genotypes.py](https://github.com/D-X-Y/AutoDL-Projects/blob/main/lib/nas_infer_model/DXYs/genotypes.py).
In the future, I will add a more elegant way to train the searched architecture from the DARTS search space.
diff --git a/docs/NAS-Bench-201-PURE.md b/docs/NAS-Bench-201-PURE.md
index 8a1ac54..f46a12d 100644
--- a/docs/NAS-Bench-201-PURE.md
+++ b/docs/NAS-Bench-201-PURE.md
@@ -1,6 +1,6 @@
# [NAS-BENCH-201: Extending the Scope of Reproducible Neural Architecture Search](https://openreview.net/forum?id=HJxyZkBKDr)
-**Since our NAS-BENCH-201 has been extended to NATS-Bench, this `README` is deprecated and not maintained. Please use [NATS-Bench](https://github.com/D-X-Y/AutoDL-Projects/blob/master/docs/NATS-Bench.md), which has 5x more architecture information and faster API than NAS-BENCH-201.**
+**Since our NAS-BENCH-201 has been extended to NATS-Bench, this `README` is deprecated and not maintained. Please use [NATS-Bench](https://github.com/D-X-Y/AutoDL-Projects/blob/main/docs/NATS-Bench.md), which has 5x more architecture information and a faster API than NAS-BENCH-201.**
We propose an algorithm-agnostic NAS benchmark (NAS-Bench-201) with a fixed search space, which provides a unified benchmark for almost any up-to-date NAS algorithm.
The design of our search space is inspired by that used in the most popular cell-based searching algorithms, where a cell is represented as a directed acyclic graph.
@@ -44,7 +44,7 @@ It is recommended to put these data into `$TORCH_HOME` (`~/.torch/` by default).
## How to Use NAS-Bench-201
-**More usage can be found in [our test codes](https://github.com/D-X-Y/AutoDL-Projects/blob/master/exps/NAS-Bench-201/test-nas-api.py)**.
+**More usage can be found in [our test codes](https://github.com/D-X-Y/AutoDL-Projects/blob/main/exps/NAS-Bench-201/test-nas-api.py)**.
1. Creating an API instance from a file:
```
@@ -161,7 +161,7 @@ api.reload('{:}/{:}'.format(os.environ['TORCH_HOME'], 'NAS-BENCH-201-4-v1.0-arch
weights = api.get_net_param(3, 'cifar10', None) # Obtain the weights of all trials for the 3rd architecture on cifar10. It returns a dict, where the key is the seed and the value is the trained weights.
```
-To obtain the training and evaluation information (please see the comments [here](https://github.com/D-X-Y/AutoDL-Projects/blob/master/lib/nas_201_api/api_201.py#L142)):
+To obtain the training and evaluation information (please see the comments [here](https://github.com/D-X-Y/AutoDL-Projects/blob/main/lib/nas_201_api/api_201.py#L142)):
```
api.get_more_info(112, 'cifar10', None, hp='200', is_random=True)
# Query info of last training epoch for 112-th architecture
diff --git a/docs/NAS-Bench-201.md b/docs/NAS-Bench-201.md
index dc233a9..10b0224 100644
--- a/docs/NAS-Bench-201.md
+++ b/docs/NAS-Bench-201.md
@@ -1,6 +1,6 @@
# [NAS-BENCH-201: Extending the Scope of Reproducible Neural Architecture Search](https://openreview.net/forum?id=HJxyZkBKDr)
-**Since our NAS-BENCH-201 has been extended to NATS-Bench, this README is deprecated and not maintained. Please use [NATS-Bench](https://github.com/D-X-Y/AutoDL-Projects/blob/master/docs/NATS-Bench.md), which has 5x more architecture information and faster API than NAS-BENCH-201.**
+**Since our NAS-BENCH-201 has been extended to NATS-Bench, this README is deprecated and not maintained. Please use [NATS-Bench](https://github.com/D-X-Y/AutoDL-Projects/blob/main/docs/NATS-Bench.md), which has 5x more architecture information and a faster API than NAS-BENCH-201.**
We propose an algorithm-agnostic NAS benchmark (NAS-Bench-201) with a fixed search space, which provides a unified benchmark for almost any up-to-date NAS algorithm.
The design of our search space is inspired by that used in the most popular cell-based searching algorithms, where a cell is represented as a directed acyclic graph.
@@ -42,7 +42,7 @@ It is recommended to put these data into `$TORCH_HOME` (`~/.torch/` by default).
## How to Use NAS-Bench-201
-**More usage can be found in [our test codes](https://github.com/D-X-Y/AutoDL-Projects/blob/master/exps/NAS-Bench-201/test-nas-api.py)**.
+**More usage can be found in [our test codes](https://github.com/D-X-Y/AutoDL-Projects/blob/main/exps/NAS-Bench-201/test-nas-api.py)**.
1. Creating an API instance from a file:
```
@@ -159,7 +159,7 @@ api.reload('{:}/{:}'.format(os.environ['TORCH_HOME'], 'NAS-BENCH-201-4-v1.0-arch
weights = api.get_net_param(3, 'cifar10', None) # Obtain the weights of all trials for the 3rd architecture on cifar10. It returns a dict, where the key is the seed and the value is the trained weights.
```
-To obtain the training and evaluation information (please see the comments [here](https://github.com/D-X-Y/AutoDL-Projects/blob/master/lib/nas_201_api/api_201.py#L142)):
+To obtain the training and evaluation information (please see the comments [here](https://github.com/D-X-Y/AutoDL-Projects/blob/main/lib/nas_201_api/api_201.py#L142)):
```
api.get_more_info(112, 'cifar10', None, hp='200', is_random=True)
# Query info of last training epoch for 112-th architecture
diff --git a/docs/NeurIPS-2019-TAS.md b/docs/NeurIPS-2019-TAS.md
index 674503b..12c6bf4 100644
--- a/docs/NeurIPS-2019-TAS.md
+++ b/docs/NeurIPS-2019-TAS.md
@@ -31,7 +31,7 @@ args: `cifar10` indicates the dataset name, `ResNet56` indicates the basemodel n
**Model Configuration**
-The searched shapes for ResNet-20/32/56/110/164 and ResNet-18/50 in Table 3/4 in the original paper are listed in [`configs/NeurIPS-2019`](https://github.com/D-X-Y/AutoDL-Projects/tree/master/configs/NeurIPS-2019).
+The searched shapes for ResNet-20/32/56/110/164 and ResNet-18/50 in Table 3/4 in the original paper are listed in [`configs/NeurIPS-2019`](https://github.com/D-X-Y/AutoDL-Projects/tree/main/configs/NeurIPS-2019).
**Search for the depth configuration of ResNet**
```
diff --git a/README_CN.md b/docs/README_CN.md
similarity index 89%
rename from README_CN.md
rename to docs/README_CN.md
index 7101a80..a77a81a 100644
--- a/README_CN.md
+++ b/docs/README_CN.md
@@ -37,38 +37,38 @@
NAS |
TAS |
Network Pruning via Transformable Architecture Search |
- NeurIPS-2019-TAS.md |
+ NeurIPS-2019-TAS.md |
DARTS |
DARTS: Differentiable Architecture Search |
- ICLR-2019-DARTS.md |
+ ICLR-2019-DARTS.md |
GDAS |
Searching for A Robust Neural Architecture in Four GPU Hours |
- CVPR-2019-GDAS.md |
+ CVPR-2019-GDAS.md |
SETN |
One-Shot Neural Architecture Search via Self-Evaluated Template Network |
- ICCV-2019-SETN.md |
+ ICCV-2019-SETN.md |
NAS-Bench-201 |
NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search |
- NAS-Bench-201.md |
+ NAS-Bench-201.md |
NATS-Bench |
NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size |
- NATS-Bench.md |
+ NATS-Bench.md |
... |
ENAS / REA / REINFORCE / BOHB |
Please check the original papers. |
- NAS-Bench-201.md NATS-Bench.md |
+ NAS-Bench-201.md NATS-Bench.md |
HPO |
@@ -80,7 +80,7 @@
Basic |
ResNet |
Deep Learning-based Image Classification |
- BASELINE.md |
+ BASELINE.md |
diff --git a/exps/NATS-algos/search-size.py b/exps/NATS-algos/search-size.py
index e215523..0cfba71 100644
--- a/exps/NATS-algos/search-size.py
+++ b/exps/NATS-algos/search-size.py
@@ -8,7 +8,7 @@
# - masking + sampling (mask_rl) from "Can Weight Sharing Outperform Random Architecture Search? An Investigation With TuNAS, CVPR 2020"
#
# For simplicity, we use tas, mask_gumbel, and mask_rl to refer to these three strategies. Their official implementations are at the following links:
-# - TAS: https://github.com/D-X-Y/AutoDL-Projects/blob/master/docs/NeurIPS-2019-TAS.md
+# - TAS: https://github.com/D-X-Y/AutoDL-Projects/blob/main/docs/NeurIPS-2019-TAS.md
# - FBNetV2: https://github.com/facebookresearch/mobile-vision
# - TuNAS: https://github.com/google-research/google-research/tree/master/tunas
####
diff --git a/lib/nas_201_api/api_201.py b/lib/nas_201_api/api_201.py
index 8c995c9..ef7e943 100644
--- a/lib/nas_201_api/api_201.py
+++ b/lib/nas_201_api/api_201.py
@@ -244,7 +244,7 @@ class NASBench201API(NASBenchMetaAPI):
arch_str: the input is a string that indicates the architecture topology, such as
|nor_conv_1x1~0|+|none~0|none~1|+|none~0|none~1|skip_connect~2|
search_space: a list of operation strings; the default list is the search space for NAS-Bench-201
- the default value should be be consistent with this line https://github.com/D-X-Y/AutoDL-Projects/blob/master/lib/models/cell_operations.py#L24
+ the default value should be consistent with this line https://github.com/D-X-Y/AutoDL-Projects/blob/main/lib/models/cell_operations.py#L24
:return
the numpy matrix (2-D np.ndarray) representing the DAG of this architecture topology
:usage
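The docstring above maps an architecture string to a 2-D matrix whose entries are operation indices. A rough, self-contained sketch of that parsing is below; the function name is hypothetical and this is not the library's own `str2matrix`, and the operation list is assumed to match the default search space referenced above.

```python
import numpy as np

# Assumed default operation list for the NAS-Bench-201 search space.
NAS_BENCH_201_OPS = ['none', 'skip_connect', 'nor_conv_1x1', 'nor_conv_3x3', 'avg_pool_3x3']

def arch_str_to_matrix(arch_str: str, search_space=NAS_BENCH_201_OPS) -> np.ndarray:
    # Nodes are separated by '+', edges within a node by '|',
    # and each edge token looks like 'op_name~input_node_index'.
    node_strs = arch_str.split('+')
    num_nodes = len(node_strs) + 1  # plus the input node
    matrix = np.zeros((num_nodes, num_nodes))
    for i, node_str in enumerate(node_strs):
        for token in filter(None, node_str.split('|')):
            op_name, in_idx = token.split('~')
            # store the operation's index on the edge (in_idx -> node i+1)
            matrix[i + 1, int(in_idx)] = search_space.index(op_name)
    return matrix

arch = '|nor_conv_1x1~0|+|none~0|none~1|+|none~0|none~1|skip_connect~2|'
print(arch_str_to_matrix(arch).shape)  # (4, 4)
```

For the example string from the docstring, the edge from node 0 to node 1 carries `nor_conv_1x1` (index 2) and the edge from node 2 to node 3 carries `skip_connect` (index 1).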
diff --git a/lib/nats_bench/api_topology.py b/lib/nats_bench/api_topology.py
index 50cb74d..05a633d 100644
--- a/lib/nats_bench/api_topology.py
+++ b/lib/nats_bench/api_topology.py
@@ -306,7 +306,7 @@ class NATStopology(NASBenchMetaAPI):
arch_str: the input is a string that indicates the architecture topology, such as
|nor_conv_1x1~0|+|none~0|none~1|+|none~0|none~1|skip_connect~2|
search_space: a list of operation strings; the default list is the topology search space for NATS-BENCH.
- the default value should be be consistent with this line https://github.com/D-X-Y/AutoDL-Projects/blob/master/lib/models/cell_operations.py#L24
+ the default value should be consistent with this line https://github.com/D-X-Y/AutoDL-Projects/blob/main/lib/models/cell_operations.py#L24
Returns:
the numpy matrix (2-D np.ndarray) representing the DAG of this architecture topology