Compare commits

...

16 Commits

Author SHA1 Message Date
889bd1974c merged 2024-10-14 23:24:24 +02:00
af0e7786b6 just play around 2024-10-14 23:20:28 +02:00
c6d53f08ae can train aircraft now 2024-10-14 23:19:49 +02:00
ef2608bb42 can train aircraft now 2024-10-14 23:19:28 +02:00
mhz
50ff507a15 wait to test different seed 2024-07-19 15:31:43 +02:00
mhz
03d7d04d41 add some test results and a test script 2024-07-17 16:34:24 +02:00
bb33ca9a68 run the specific model 2024-07-11 11:48:51 +02:00
D-X-Y
f46486e21b Update README.md 2022-04-24 15:18:16 -07:00
D-X-Y
5908a1edef Merge pull request #123 from Yulv-git/main
Update some links in README_CN.md and fix some typos.
2022-04-24 15:16:21 -07:00
Yulv-git
ed34024a88 Update some links in README_CN.md and fix some typos. 2022-04-23 10:59:49 +08:00
D-X-Y
5bf036a763 Update DKS exploration 2022-03-28 21:28:50 -07:00
D-X-Y
b557a22928 Merge pull request #121 from ain-soph/patch-1
remove numpy version requirements
2022-03-25 00:05:53 -07:00
Ren Pang
f549ed2e61 fix setup bug 2022-03-24 21:06:28 -04:00
Ren Pang
5a5cb82537 remove numpy version requirements
Is it possible to remove numpy version requirements?

I want to use the benchmark, but my code relies on some bug fixes introduced after `numpy>1.20`.
2022-03-24 16:50:19 -04:00
D-X-Y
676e8e411d Upgrade black to 22.1.0 and fix the corresponding issues 2022-03-20 23:18:23 -07:00
D-X-Y
8d0799dfb1 To answer issue #119 2022-03-20 23:12:12 -07:00
23 changed files with 105186 additions and 65 deletions

View File

@@ -41,7 +41,7 @@ jobs:
- name: Install XAutoDL from source
run: |
python setup.py install
pip install .
- name: Test Search Space
run: |

View File

@@ -26,7 +26,7 @@ jobs:
- name: Install XAutoDL from source
run: |
python setup.py install
pip install .
- name: Test Xmisc
run: |

View File

@@ -26,7 +26,7 @@ jobs:
- name: Install XAutoDL from source
run: |
python setup.py install
pip install .
- name: Test Super Model
run: |

View File

@@ -61,13 +61,13 @@ At this moment, this project provides the following algorithms and scripts to ru
<tr> <!-- (6-th row) -->
<td align="center" valign="middle"> NATS-Bench </td>
<td align="center" valign="middle"> <a href="https://xuanyidong.com/assets/projects/NATS-Bench"> NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size</a> </td>
<td align="center" valign="middle"> <a href="https://github.com/D-X-Y/NATS-Bench">NATS-Bench.md</a> </td>
<td align="center" valign="middle"> <a href="https://github.com/D-X-Y/NATS-Bench/blob/main/README.md">NATS-Bench.md</a> </td>
</tr>
<tr> <!-- (7-th row) -->
<td align="center" valign="middle"> ... </td>
<td align="center" valign="middle"> ENAS / REA / REINFORCE / BOHB </td>
<td align="center" valign="middle"> Please check the original papers </td>
<td align="center" valign="middle"> <a href="https://github.com/D-X-Y/AutoDL-Projects/tree/main/docs/NAS-Bench-201.md">NAS-Bench-201.md</a> <a href="https://github.com/D-X-Y/NATS-Bench">NATS-Bench.md</a> </td>
<td align="center" valign="middle"> <a href="https://github.com/D-X-Y/AutoDL-Projects/tree/main/docs/NAS-Bench-201.md">NAS-Bench-201.md</a> <a href="https://github.com/D-X-Y/NATS-Bench/blob/main/README.md">NATS-Bench.md</a> </td>
</tr>
<tr> <!-- (start second block) -->
<td rowspan="1" align="center" valign="middle" halign="middle"> HPO </td>
@@ -89,7 +89,7 @@ At this moment, this project provides the following algorithms and scripts to ru
## Requirements and Preparation
**First of all**, please use `python setup.py install` to install `xautodl` library.
**First of all**, please use `pip install .` to install `xautodl` library.
Please install `Python>=3.6` and `PyTorch>=1.5.0`. (Lower versions of Python and PyTorch may work, but there may be bugs.)
Some visualization codes may require `opencv`.
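After switching to `pip install .`, one quick way to confirm the installation is to import the entry points used throughout this compare; a minimal sketch (the two imports below appear elsewhere in this diff, while the metadata lookup assumes the distribution is registered under the name `xautodl` from setup.py):

```python
# Sanity-check the xautodl installation after `pip install .`.
import importlib.metadata

import xautodl  # noqa: F401  -- fails here if the install is broken
from xautodl.models import get_cell_based_tiny_net  # used by the evaluation code

# Assumption: pip registered the distribution as "xautodl" (per setup.py).
print(importlib.metadata.version("xautodl"))
```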

View File

@@ -29,7 +29,7 @@ You can simply type `pip install nas-bench-201` to install our api. Please see s
You can move it to anywhere you want and send its path to our API for initialization.
- [2020.02.25] APIv1.0/FILEv1.0: [`NAS-Bench-201-v1_0-e61699.pth`](https://drive.google.com/open?id=1SKW0Cu0u8-gb18zDpaAGi0f74UdXeGKs) (2.2G), where `e61699` is the last six digits for this file. It contains all information except for the trained weights of each trial.
- [2020.02.25] APIv1.0/FILEv1.0: The full data of each architecture can be downloaded from [
NAS-BENCH-201-4-v1.0-archive.tar](https://drive.google.com/open?id=1X2i-JXaElsnVLuGgM4tP-yNwtsspXgdQ) (about 226GB). This compressed folder has 15625 files containing the the trained weights.
NAS-BENCH-201-4-v1.0-archive.tar](https://drive.google.com/open?id=1X2i-JXaElsnVLuGgM4tP-yNwtsspXgdQ) (about 226GB). This compressed folder has 15625 files containing the trained weights.
- [2020.02.25] APIv1.0/FILEv1.0: Checkpoints for 3 runs of each baseline NAS algorithm are provided in [Google Drive](https://drive.google.com/open?id=1eAgLZQAViP3r6dA0_ZOOGG9zPLXhGwXi).
- [2020.03.09] APIv1.2/FILEv1.0: More robust API with more functions and descriptions
- [2020.03.16] APIv1.3/FILEv1.1: [`NAS-Bench-201-v1_1-096897.pth`](https://drive.google.com/open?id=16Y0UwGisiouVRxW-W5hEtbxmcHw_0hF_) (4.7G), where `096897` is the last six digits for this file. It contains information on more trials than `NAS-Bench-201-v1_0-e61699.pth`; in particular, all models trained for 12 epochs on all datasets are available.

View File

@@ -27,7 +27,7 @@ You can simply type `pip install nas-bench-201` to install our api. Please see s
You can move it to anywhere you want and send its path to our API for initialization.
- [2020.02.25] APIv1.0/FILEv1.0: [`NAS-Bench-201-v1_0-e61699.pth`](https://drive.google.com/open?id=1SKW0Cu0u8-gb18zDpaAGi0f74UdXeGKs) (2.2G), where `e61699` is the last six digits for this file. It contains all information except for the trained weights of each trial.
- [2020.02.25] APIv1.0/FILEv1.0: The full data of each architecture can be downloaded from [
NAS-BENCH-201-4-v1.0-archive.tar](https://drive.google.com/open?id=1X2i-JXaElsnVLuGgM4tP-yNwtsspXgdQ) (about 226GB). This compressed folder has 15625 files containing the the trained weights.
NAS-BENCH-201-4-v1.0-archive.tar](https://drive.google.com/open?id=1X2i-JXaElsnVLuGgM4tP-yNwtsspXgdQ) (about 226GB). This compressed folder has 15625 files containing the trained weights.
- [2020.02.25] APIv1.0/FILEv1.0: Checkpoints for 3 runs of each baseline NAS algorithm are provided in [Google Drive](https://drive.google.com/open?id=1eAgLZQAViP3r6dA0_ZOOGG9zPLXhGwXi).
- [2020.03.09] APIv1.2/FILEv1.0: More robust API with more functions and descriptions
- [2020.03.16] APIv1.3/FILEv1.1: [`NAS-Bench-201-v1_1-096897.pth`](https://drive.google.com/open?id=16Y0UwGisiouVRxW-W5hEtbxmcHw_0hF_) (4.7G), where `096897` is the last six digits for this file. It contains information on more trials than `NAS-Bench-201-v1_0-e61699.pth`; in particular, all models trained for 12 epochs on all datasets are available.

View File

@@ -3,7 +3,7 @@
</p>
---------
[![MIT licensed](https://img.shields.io/badge/license-MIT-brightgreen.svg)](LICENSE.md)
[![MIT licensed](https://img.shields.io/badge/license-MIT-brightgreen.svg)](../LICENSE.md)
The Automated Deep Learning library (AutoDL-Projects) is an open-source, lightweight, and powerful project.
It implements a variety of neural architecture search (NAS) and hyperparameter optimization (HPO) algorithms.
@@ -142,8 +142,8 @@
# Others
If you would like to contribute to this codebase, please see [CONTRIBUTING.md](.github/CONTRIBUTING.md).
In addition, please refer to [CODE-OF-CONDUCT.md](.github/CODE-OF-CONDUCT.md) for the code of conduct.
If you would like to contribute to this codebase, please see [CONTRIBUTING.md](../.github/CONTRIBUTING.md).
In addition, please refer to [CODE-OF-CONDUCT.md](../.github/CODE-OF-CONDUCT.md) for the code of conduct.
# License
The entire codebase is under [MIT license](LICENSE.md)
The entire codebase is under [MIT license](../LICENSE.md)

View File

@@ -2,11 +2,11 @@
# Copyright (c) Xuanyi Dong [GitHub D-X-Y], 2019.08 #
#####################################################
import time, torch
from procedures import prepare_seed, get_optim_scheduler
from utils import get_model_infos, obtain_accuracy
from config_utils import dict2config
from log_utils import AverageMeter, time_string, convert_secs2time
from models import get_cell_based_tiny_net
from xautodl.procedures import prepare_seed, get_optim_scheduler
from xautodl.utils import get_model_infos, obtain_accuracy
from xautodl.config_utils import dict2config
from xautodl.log_utils import AverageMeter, time_string, convert_secs2time
from xautodl.models import get_cell_based_tiny_net
__all__ = ["evaluate_for_seed", "pure_evaluate"]

View File

@@ -16,8 +16,9 @@ from xautodl.procedures import get_machine_info
from xautodl.datasets import get_datasets
from xautodl.log_utils import Logger, AverageMeter, time_string, convert_secs2time
from xautodl.models import CellStructure, CellArchitectures, get_search_spaces
from xautodl.functions import evaluate_for_seed
from functions import evaluate_for_seed
from torchvision import datasets, transforms
def evaluate_all_datasets(
arch, datasets, xpaths, splits, use_less, seed, arch_config, workers, logger
@@ -46,47 +47,85 @@ def evaluate_all_datasets(
split_info = load_config(
"configs/nas-benchmark/{:}-split.txt".format(dataset), None, None
)
elif dataset.startswith("aircraft"):
if use_less:
config_path = "configs/nas-benchmark/LESS.config"
else:
config_path = "configs/nas-benchmark/aircraft.config"
split_info = load_config(
"configs/nas-benchmark/{:}-split.txt".format(dataset), None, None
)
else:
raise ValueError("invalid dataset : {:}".format(dataset))
config = load_config(
config_path, {"class_num": class_num, "xshape": xshape}, logger
)
# check whether to use the split validation set
# if dataset == 'aircraft':
# split = True
if bool(split):
assert dataset == "cifar10"
ValLoaders = {
"ori-test": torch.utils.data.DataLoader(
valid_data,
if dataset == "cifar10" or dataset == "cifar100":
assert dataset == "cifar10"
ValLoaders = {
"ori-test": torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
shuffle=False,
num_workers=workers,
pin_memory=True,
)
}
assert len(train_data) == len(split_info.train) + len(
split_info.valid
), "invalid length : {:} vs {:} + {:}".format(
len(train_data), len(split_info.train), len(split_info.valid)
)
train_data_v2 = deepcopy(train_data)
train_data_v2.transform = valid_data.transform
valid_data = train_data_v2
# data loader
train_loader = torch.utils.data.DataLoader(
train_data,
batch_size=config.batch_size,
shuffle=False,
sampler=torch.utils.data.sampler.SubsetRandomSampler(split_info.train),
num_workers=workers,
pin_memory=True,
)
}
assert len(train_data) == len(split_info.train) + len(
split_info.valid
), "invalid length : {:} vs {:} + {:}".format(
len(train_data), len(split_info.train), len(split_info.valid)
)
train_data_v2 = deepcopy(train_data)
train_data_v2.transform = valid_data.transform
valid_data = train_data_v2
# data loader
train_loader = torch.utils.data.DataLoader(
train_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(split_info.train),
num_workers=workers,
pin_memory=True,
)
valid_loader = torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(split_info.valid),
num_workers=workers,
pin_memory=True,
)
ValLoaders["x-valid"] = valid_loader
valid_loader = torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(split_info.valid),
num_workers=workers,
pin_memory=True,
)
ValLoaders["x-valid"] = valid_loader
elif dataset == "aircraft":
ValLoaders = {
"ori-test": torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
shuffle=False,
num_workers=workers,
pin_memory=True,
)
}
train_data_v2 = deepcopy(train_data)
train_data_v2.transform = valid_data.transform
valid_data = train_data_v2
# data loader
train_loader = torch.utils.data.DataLoader(
train_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(split_info.train),
num_workers=workers,
pin_memory=True)
valid_loader = torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(split_info.valid),
num_workers=workers,
pin_memory=True)
else:
# data loader
train_loader = torch.utils.data.DataLoader(
@@ -103,7 +142,7 @@ def evaluate_all_datasets(
num_workers=workers,
pin_memory=True,
)
if dataset == "cifar10":
if dataset == "cifar10" or dataset == "aircraft":
ValLoaders = {"ori-test": valid_loader}
elif dataset == "cifar100":
cifar100_splits = load_config(

View File

@@ -24,6 +24,9 @@
# python ./exps/NATS-algos/search-cell.py --dataset cifar10 --data_path $TORCH_HOME/cifar.python --algo enas --arch_weight_decay 0 --arch_learning_rate 0.001 --arch_eps 0.001 --rand_seed 777
# python ./exps/NATS-algos/search-cell.py --dataset cifar100 --data_path $TORCH_HOME/cifar.python --algo enas --arch_weight_decay 0 --arch_learning_rate 0.001 --arch_eps 0.001 --rand_seed 777
# python ./exps/NATS-algos/search-cell.py --dataset ImageNet16-120 --data_path $TORCH_HOME/cifar.python/ImageNet16 --algo enas --arch_weight_decay 0 --arch_learning_rate 0.001 --arch_eps 0.001 --rand_seed 777
####
# The following scripts were added on 20 Mar 2022
# python ./exps/NATS-algos/search-cell.py --dataset cifar10 --data_path $TORCH_HOME/cifar.python --algo gdas_v1 --rand_seed 777
######################################################################################
import os, sys, time, random, argparse
import numpy as np
@@ -166,6 +169,8 @@ def search_func(
network.set_cal_mode("dynamic", sampled_arch)
elif algo == "gdas":
network.set_cal_mode("gdas", None)
elif algo == "gdas_v1":
network.set_cal_mode("gdas_v1", None)
elif algo.startswith("darts"):
network.set_cal_mode("joint", None)
elif algo == "random":
@@ -196,6 +201,8 @@ def search_func(
network.set_cal_mode("joint")
elif algo == "gdas":
network.set_cal_mode("gdas", None)
elif algo == "gdas_v1":
network.set_cal_mode("gdas_v1", None)
elif algo.startswith("darts"):
network.set_cal_mode("joint", None)
elif algo == "random":
@@ -373,7 +380,7 @@ def get_best_arch(xloader, network, n_samples, algo):
archs, valid_accs = network.return_topK(n_samples, True), []
elif algo == "setn":
archs, valid_accs = network.return_topK(n_samples, False), []
elif algo.startswith("darts") or algo == "gdas":
elif algo.startswith("darts") or algo == "gdas" or algo == "gdas_v1":
arch = network.genotype
archs, valid_accs = [arch], []
elif algo == "enas":
@@ -568,7 +575,7 @@ def main(xargs):
)
network.set_drop_path(float(epoch + 1) / total_epoch, xargs.drop_path_rate)
if xargs.algo == "gdas":
if xargs.algo == "gdas" or xargs.algo == "gdas_v1":
network.set_tau(
xargs.tau_max
- (xargs.tau_max - xargs.tau_min) * epoch / (total_epoch - 1)
@@ -632,6 +639,8 @@ def main(xargs):
network.set_cal_mode("dynamic", genotype)
elif xargs.algo == "gdas":
network.set_cal_mode("gdas", None)
elif xargs.algo == "gdas_v1":
network.set_cal_mode("gdas_v1", None)
elif xargs.algo.startswith("darts"):
network.set_cal_mode("joint", None)
elif xargs.algo == "random":
@@ -699,6 +708,8 @@ def main(xargs):
network.set_cal_mode("dynamic", genotype)
elif xargs.algo == "gdas":
network.set_cal_mode("gdas", None)
elif xargs.algo == "gdas_v1":
network.set_cal_mode("gdas_v1", None)
elif xargs.algo.startswith("darts"):
network.set_cal_mode("joint", None)
elif xargs.algo == "random":
@@ -747,7 +758,7 @@ if __name__ == "__main__":
parser.add_argument(
"--algo",
type=str,
choices=["darts-v1", "darts-v2", "gdas", "setn", "random", "enas"],
choices=["darts-v1", "darts-v2", "gdas", "gdas_v1", "setn", "random", "enas"],
help="The search space name.",
)
parser.add_argument(
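For reference, the `set_tau` call shown earlier in this diff anneals the Gumbel-softmax temperature linearly from `tau_max` to `tau_min` over training, and `gdas_v1` reuses the same schedule as `gdas`. A minimal sketch of that schedule (the default values here are illustrative, not the repo's):

```python
def tau_at(epoch: int, total_epoch: int, tau_max: float = 10.0, tau_min: float = 0.1) -> float:
    # Linear anneal: tau_max at epoch 0 down to tau_min at the final epoch,
    # mirroring the network.set_tau(...) call in main().
    return tau_max - (tau_max - tau_min) * epoch / (total_epoch - 1)

# e.g. tau_at(0, 100) == 10.0 and tau_at(99, 100) == 0.1
```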

View File

@@ -0,0 +1,57 @@
from dks.base.activation_getter import (
get_activation_function as _get_numpy_activation_function,
)
from dks.base.activation_transform import _get_activations_params
def subnet_max_func(x, r_fn):
    # Maximal function of a depth-7 residual subnetwork: each block applies
    # r_fn twice and adds a skip connection; TAT uses this to fit the activation.
    depth = 7
    res_x = r_fn(x)
    x = r_fn(x)
    for _ in range(depth):
        x = r_fn(r_fn(x)) + x
    return max(x, res_x)


def subnet_max_func_v2(x, r_fn):
    # Variant with depth 2 and a weighted (0.8 / 0.2) residual combination.
    depth = 2
    res_x = r_fn(x)
    x = r_fn(x)
    for _ in range(depth):
        x = 0.8 * r_fn(r_fn(x)) + 0.2 * x
    return max(x, res_x)
def get_transformed_activations(
activation_names,
method="TAT",
dks_params=None,
tat_params=None,
max_slope_func=None,
max_curv_func=None,
subnet_max_func=None,
activation_getter=_get_numpy_activation_function,
):
params = _get_activations_params(
activation_names,
method=method,
dks_params=dks_params,
tat_params=tat_params,
max_slope_func=max_slope_func,
max_curv_func=max_curv_func,
subnet_max_func=subnet_max_func,
)
return params
params = get_transformed_activations(
["swish"], method="TAT", subnet_max_func=subnet_max_func
)
print(params)
params = get_transformed_activations(
["leaky_relu"], method="TAT", subnet_max_func=subnet_max_func_v2
)
print(params)

View File

@@ -28,16 +28,30 @@ else
mode=cover
fi
# OMP_NUM_THREADS=4 python ./exps/NAS-Bench-201/main.py \
# --mode ${mode} --save_dir ${save_dir} --max_node 4 \
# --use_less ${use_less} \
# --datasets cifar10 cifar10 cifar100 ImageNet16-120 \
# --splits 1 0 0 0 \
# --xpaths $TORCH_HOME/cifar.python \
# $TORCH_HOME/cifar.python \
# $TORCH_HOME/cifar.python \
# $TORCH_HOME/cifar.python/ImageNet16 \
# --channel 16 --num_cells 5 \
# --workers 4 \
# --srange ${xstart} ${xend} --arch_index ${arch_index} \
# --seeds ${all_seeds}
OMP_NUM_THREADS=4 python ./exps/NAS-Bench-201/main.py \
--mode ${mode} --save_dir ${save_dir} --max_node 4 \
--use_less ${use_less} \
--datasets cifar10 cifar10 cifar100 ImageNet16-120 \
--splits 1 0 0 0 \
--xpaths $TORCH_HOME/cifar.python \
$TORCH_HOME/cifar.python \
$TORCH_HOME/cifar.python \
$TORCH_HOME/cifar.python/ImageNet16 \
--channel 16 --num_cells 5 \
--datasets aircraft \
--xpaths /lustre/hpe/ws11/ws11.1/ws/xmuhanma-SWAP/train_datasets/datasets/fgvc-aircraft-2013b/data/ \
--channel 16 \
--splits 1 \
--num_cells 5 \
--workers 4 \
--srange ${xstart} ${xend} --arch_index ${arch_index} \
--seeds ${all_seeds}

View File

@@ -37,7 +37,7 @@ def read(fname="README.md"):
# What packages are required for this module to be executed?
REQUIRED = ["numpy>=1.16.5,<=1.19.5", "pyyaml>=5.0.0", "fvcore"]
REQUIRED = ["numpy>=1.16.5", "pyyaml>=5.0.0", "fvcore"]
packages = find_packages(
exclude=("tests", "scripts", "scripts-search", "lib*", "exps*")

test.ipynb (new file, 104336 additions; diff suppressed because it is too large)

test_network.py (new file, 616 additions)
View File

@@ -0,0 +1,616 @@
from nas_201_api import NASBench201API as API
import os
import os, sys, time, torch, random, argparse
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
from copy import deepcopy
from pathlib import Path
from xautodl.config_utils import load_config
from xautodl.procedures import save_checkpoint, copy_checkpoint
from xautodl.procedures import get_machine_info
from xautodl.datasets import get_datasets
from xautodl.log_utils import Logger, AverageMeter, time_string, convert_secs2time
from xautodl.models import CellStructure, CellArchitectures, get_search_spaces
import time, torch
from xautodl.procedures import prepare_seed, get_optim_scheduler
from xautodl.utils import get_model_infos, obtain_accuracy
from xautodl.config_utils import dict2config
from xautodl.log_utils import AverageMeter, time_string, convert_secs2time
from xautodl.models import get_cell_based_tiny_net
cur_path = os.path.abspath(os.path.curdir)
data_path = os.path.join(cur_path, 'NAS-Bench-201-v1_1-096897.pth')
print(f'loading data from {data_path}')
api = API(data_path)
print('loaded')
def find_best_index(dataset):
    # NAS-Bench-201 has 15625 architectures; rank them by the final training
    # accuracy of the first recorded trial and return (index, accuracy).
    num_archs = 15625
    accs = []
    for i in range(num_archs):
        results = api.query_by_index(i, dataset)  # dict keyed by random seed
        dict_items = list(results.items())
        train_info = dict_items[0][1].get_train()
        acc = train_info['accuracy']
        accs.append((i, acc))
    return max(accs, key=lambda x: x[1])
best_cifar_10_index, best_cifar_10_acc = find_best_index('cifar10')
best_cifar_100_index, best_cifar_100_acc = find_best_index('cifar100')
best_ImageNet16_index, best_ImageNet16_acc = find_best_index('ImageNet16-120')
print(f'find best cifar10 index: {best_cifar_10_index}, acc: {best_cifar_10_acc}')
print(f'find best cifar100 index: {best_cifar_100_index}, acc: {best_cifar_100_acc}')
print(f'find best ImageNet16 index: {best_ImageNet16_index}, acc: {best_ImageNet16_acc}')
from xautodl.models import get_cell_based_tiny_net
def get_network_str_by_id(id, dataset):
config = api.get_net_config(id, dataset)
return config['arch_str']
best_cifar_10_str = get_network_str_by_id(best_cifar_10_index, 'cifar10')
best_cifar_100_str = get_network_str_by_id(best_cifar_100_index, 'cifar100')
best_ImageNet16_str = get_network_str_by_id(best_ImageNet16_index, 'ImageNet16-120')
def evaluate_all_datasets(
arch, datasets, xpaths, splits, use_less, seed, arch_config, workers, logger
):
machine_info, arch_config = get_machine_info(), deepcopy(arch_config)
all_infos = {"info": machine_info}
all_dataset_keys = []
# look all the datasets
for dataset, xpath, split in zip(datasets, xpaths, splits):
# train valid data
train_data, valid_data, xshape, class_num = get_datasets(dataset, xpath, -1)
# load the configuration
if dataset == "cifar10" or dataset == "cifar100":
if use_less:
config_path = "configs/nas-benchmark/LESS.config"
else:
config_path = "configs/nas-benchmark/CIFAR.config"
split_info = load_config(
"configs/nas-benchmark/cifar-split.txt", None, None
)
elif dataset.startswith("ImageNet16"):
if use_less:
config_path = "configs/nas-benchmark/LESS.config"
else:
config_path = "configs/nas-benchmark/ImageNet-16.config"
split_info = load_config(
"configs/nas-benchmark/{:}-split.txt".format(dataset), None, None
)
else:
raise ValueError("invalid dataset : {:}".format(dataset))
config = load_config(
config_path, {"class_num": class_num, "xshape": xshape}, logger
)
# check whether to use the split validation set
if bool(split):
assert dataset == "cifar10"
ValLoaders = {
"ori-test": torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
shuffle=False,
num_workers=workers,
pin_memory=True,
)
}
assert len(train_data) == len(split_info.train) + len(
split_info.valid
), "invalid length : {:} vs {:} + {:}".format(
len(train_data), len(split_info.train), len(split_info.valid)
)
train_data_v2 = deepcopy(train_data)
train_data_v2.transform = valid_data.transform
valid_data = train_data_v2
# data loader
train_loader = torch.utils.data.DataLoader(
train_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(split_info.train),
num_workers=workers,
pin_memory=True,
)
valid_loader = torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(split_info.valid),
num_workers=workers,
pin_memory=True,
)
ValLoaders["x-valid"] = valid_loader
else:
# data loader
train_loader = torch.utils.data.DataLoader(
train_data,
batch_size=config.batch_size,
shuffle=True,
num_workers=workers,
pin_memory=True,
)
valid_loader = torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
shuffle=False,
num_workers=workers,
pin_memory=True,
)
if dataset == "cifar10":
ValLoaders = {"ori-test": valid_loader}
elif dataset == "cifar100":
cifar100_splits = load_config(
"configs/nas-benchmark/cifar100-test-split.txt", None, None
)
ValLoaders = {
"ori-test": valid_loader,
"x-valid": torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(
cifar100_splits.xvalid
),
num_workers=workers,
pin_memory=True,
),
"x-test": torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(
cifar100_splits.xtest
),
num_workers=workers,
pin_memory=True,
),
}
elif dataset == "ImageNet16-120":
imagenet16_splits = load_config(
"configs/nas-benchmark/imagenet-16-120-test-split.txt", None, None
)
ValLoaders = {
"ori-test": valid_loader,
"x-valid": torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(
imagenet16_splits.xvalid
),
num_workers=workers,
pin_memory=True,
),
"x-test": torch.utils.data.DataLoader(
valid_data,
batch_size=config.batch_size,
sampler=torch.utils.data.sampler.SubsetRandomSampler(
imagenet16_splits.xtest
),
num_workers=workers,
pin_memory=True,
),
}
else:
raise ValueError("invalid dataset : {:}".format(dataset))
dataset_key = "{:}".format(dataset)
if bool(split):
dataset_key = dataset_key + "-valid"
logger.log(
"Evaluate ||||||| {:10s} ||||||| Train-Num={:}, Valid-Num={:}, Train-Loader-Num={:}, Valid-Loader-Num={:}, batch size={:}".format(
dataset_key,
len(train_data),
len(valid_data),
len(train_loader),
len(valid_loader),
config.batch_size,
)
)
logger.log(
"Evaluate ||||||| {:10s} ||||||| Config={:}".format(dataset_key, config)
)
for key, value in ValLoaders.items():
logger.log(
"Evaluate ---->>>> {:10s} with {:} batchs".format(key, len(value))
)
results = evaluate_for_seed(
arch_config, config, arch, train_loader, ValLoaders, seed, logger
)
all_infos[dataset_key] = results
all_dataset_keys.append(dataset_key)
all_infos["all_dataset_keys"] = all_dataset_keys
return all_infos
def evaluate_for_seed(
arch_config, config, arch, train_loader, valid_loaders, seed, logger
):
prepare_seed(seed) # random seed
net = get_cell_based_tiny_net(
dict2config(
{
"name": "infer.tiny",
"C": arch_config["channel"],
"N": arch_config["num_cells"],
"genotype": arch,
"num_classes": config.class_num,
},
None,
)
)
# net = TinyNetwork(arch_config['channel'], arch_config['num_cells'], arch, config.class_num)
flop, param = get_model_infos(net, config.xshape)
logger.log("Network : {:}".format(net.get_message()), False)
logger.log(
"{:} Seed-------------------------- {:} --------------------------".format(
time_string(), seed
)
)
logger.log("FLOP = {:} MB, Param = {:} MB".format(flop, param))
# train and valid
optimizer, scheduler, criterion = get_optim_scheduler(net.parameters(), config)
network, criterion = torch.nn.DataParallel(net).cuda(), criterion.cuda()
# start training
start_time, epoch_time, total_epoch = (
time.time(),
AverageMeter(),
config.epochs + config.warmup,
)
(
train_losses,
train_acc1es,
train_acc5es,
valid_losses,
valid_acc1es,
valid_acc5es,
) = ({}, {}, {}, {}, {}, {})
train_times, valid_times = {}, {}
for epoch in range(total_epoch):
scheduler.update(epoch, 0.0)
train_loss, train_acc1, train_acc5, train_tm = procedure(
train_loader, network, criterion, scheduler, optimizer, "train"
)
train_losses[epoch] = train_loss
train_acc1es[epoch] = train_acc1
train_acc5es[epoch] = train_acc5
train_times[epoch] = train_tm
with torch.no_grad():
for key, xloader in valid_loaders.items():
valid_loss, valid_acc1, valid_acc5, valid_tm = procedure(
xloader, network, criterion, None, None, "valid"
)
valid_losses["{:}@{:}".format(key, epoch)] = valid_loss
valid_acc1es["{:}@{:}".format(key, epoch)] = valid_acc1
valid_acc5es["{:}@{:}".format(key, epoch)] = valid_acc5
valid_times["{:}@{:}".format(key, epoch)] = valid_tm
# measure elapsed time
epoch_time.update(time.time() - start_time)
start_time = time.time()
need_time = "Time Left: {:}".format(
convert_secs2time(epoch_time.avg * (total_epoch - epoch - 1), True)
)
logger.log(
"{:} {:} epoch={:03d}/{:03d} :: Train [loss={:.5f}, acc@1={:.2f}%, acc@5={:.2f}%] Valid [loss={:.5f}, acc@1={:.2f}%, acc@5={:.2f}%]".format(
time_string(),
need_time,
epoch,
total_epoch,
train_loss,
train_acc1,
train_acc5,
valid_loss,
valid_acc1,
valid_acc5,
)
)
info_seed = {
"flop": flop,
"param": param,
"channel": arch_config["channel"],
"num_cells": arch_config["num_cells"],
"config": config._asdict(),
"total_epoch": total_epoch,
"train_losses": train_losses,
"train_acc1es": train_acc1es,
"train_acc5es": train_acc5es,
"train_times": train_times,
"valid_losses": valid_losses,
"valid_acc1es": valid_acc1es,
"valid_acc5es": valid_acc5es,
"valid_times": valid_times,
"net_state_dict": net.state_dict(),
"net_string": "{:}".format(net),
"finish-train": True,
}
return info_seed
def pure_evaluate(xloader, network, criterion=torch.nn.CrossEntropyLoss()):
data_time, batch_time, batch = AverageMeter(), AverageMeter(), None
losses, top1, top5 = AverageMeter(), AverageMeter(), AverageMeter()
latencies = []
network.eval()
with torch.no_grad():
end = time.time()
for i, (inputs, targets) in enumerate(xloader):
targets = targets.cuda(non_blocking=True)
inputs = inputs.cuda(non_blocking=True)
data_time.update(time.time() - end)
# forward
features, logits = network(inputs)
loss = criterion(logits, targets)
batch_time.update(time.time() - end)
if batch is None or batch == inputs.size(0):
batch = inputs.size(0)
latencies.append(batch_time.val - data_time.val)
# record loss and accuracy
prec1, prec5 = obtain_accuracy(logits.data, targets.data, topk=(1, 5))
losses.update(loss.item(), inputs.size(0))
top1.update(prec1.item(), inputs.size(0))
top5.update(prec5.item(), inputs.size(0))
end = time.time()
if len(latencies) > 2:
latencies = latencies[1:]
return losses.avg, top1.avg, top5.avg, latencies
def procedure(xloader, network, criterion, scheduler, optimizer, mode):
losses, top1, top5 = AverageMeter(), AverageMeter(), AverageMeter()
if mode == "train":
network.train()
elif mode == "valid":
network.eval()
else:
raise ValueError("The mode is not right : {:}".format(mode))
data_time, batch_time, end = AverageMeter(), AverageMeter(), time.time()
for i, (inputs, targets) in enumerate(xloader):
if mode == "train":
scheduler.update(None, 1.0 * i / len(xloader))
targets = targets.cuda(non_blocking=True)
if mode == "train":
optimizer.zero_grad()
# forward
features, logits = network(inputs)
loss = criterion(logits, targets)
# backward
if mode == "train":
loss.backward()
optimizer.step()
# record loss and accuracy
prec1, prec5 = obtain_accuracy(logits.data, targets.data, topk=(1, 5))
losses.update(loss.item(), inputs.size(0))
top1.update(prec1.item(), inputs.size(0))
top5.update(prec5.item(), inputs.size(0))
# count time
batch_time.update(time.time() - end)
end = time.time()
return losses.avg, top1.avg, top5.avg, batch_time.sum
def train_single_model(
save_dir, workers, datasets, xpaths, splits, use_less, seeds, model_str, arch_config
):
assert torch.cuda.is_available(), "CUDA is not available."
torch.backends.cudnn.enabled = True
torch.backends.cudnn.deterministic = True
# torch.backends.cudnn.benchmark = True
torch.set_num_threads(workers)
save_dir = (
Path(save_dir)
/ "specifics"
/ "{:}-{:}-{:}-{:}".format(
"LESS" if use_less else "FULL",
model_str,
arch_config["channel"],
arch_config["num_cells"],
)
)
logger = Logger(str(save_dir), 0, False)
print(CellArchitectures)
if model_str in CellArchitectures:
arch = CellArchitectures[model_str]
logger.log(
"The model string is found in pre-defined architecture dict : {:}".format(
model_str
)
)
else:
try:
arch = CellStructure.str2structure(model_str)
except:
raise ValueError(
"Invalid model string : {:}. It can not be found or parsed.".format(
model_str
)
)
assert arch.check_valid_op(
get_search_spaces("cell", "nas-bench-201")
), "{:} has the invalid op.".format(arch)
logger.log("Start train-evaluate {:}".format(arch.tostr()))
logger.log("arch_config : {:}".format(arch_config))
start_time, seed_time = time.time(), AverageMeter()
for _is, seed in enumerate(seeds):
logger.log(
"\nThe {:02d}/{:02d}-th seed is {:} ----------------------<.>----------------------".format(
_is, len(seeds), seed
)
)
to_save_name = save_dir / "seed-{:04d}.pth".format(seed)
if to_save_name.exists():
logger.log(
"Find the existing file {:}, directly load!".format(to_save_name)
)
checkpoint = torch.load(to_save_name)
else:
logger.log(
"Does not find the existing file {:}, train and evaluate!".format(
to_save_name
)
)
checkpoint = evaluate_all_datasets(
arch,
datasets,
xpaths,
splits,
use_less,
seed,
arch_config,
workers,
logger,
)
torch.save(checkpoint, to_save_name)
# log information
logger.log("{:}".format(checkpoint["info"]))
all_dataset_keys = checkpoint["all_dataset_keys"]
for dataset_key in all_dataset_keys:
logger.log(
"\n{:} dataset : {:} {:}".format("-" * 15, dataset_key, "-" * 15)
)
dataset_info = checkpoint[dataset_key]
# logger.log('Network ==>\n{:}'.format( dataset_info['net_string'] ))
logger.log(
"Flops = {:} MB, Params = {:} MB".format(
dataset_info["flop"], dataset_info["param"]
)
)
logger.log("config : {:}".format(dataset_info["config"]))
logger.log(
"Training State (finish) = {:}".format(dataset_info["finish-train"])
)
last_epoch = dataset_info["total_epoch"] - 1
train_acc1es, train_acc5es = (
dataset_info["train_acc1es"],
dataset_info["train_acc5es"],
)
valid_acc1es, valid_acc5es = (
dataset_info["valid_acc1es"],
dataset_info["valid_acc5es"],
)
print(dataset_info["train_acc1es"])
print(dataset_info["train_acc5es"])
print(dataset_info["valid_acc1es"])
print(dataset_info["valid_acc5es"])
logger.log(
"Last Info : Train = Acc@1 {:.2f}% Acc@5 {:.2f}% Error@1 {:.2f}%, Test = Acc@1 {:.2f}% Acc@5 {:.2f}% Error@1 {:.2f}%".format(
train_acc1es[last_epoch],
train_acc5es[last_epoch],
100 - train_acc1es[last_epoch],
valid_acc1es['ori-test@'+str(last_epoch)],
valid_acc5es['ori-test@'+str(last_epoch)],
100 - valid_acc1es['ori-test@'+str(last_epoch)],
)
)
# measure elapsed time
seed_time.update(time.time() - start_time)
start_time = time.time()
need_time = "Time Left: {:}".format(
convert_secs2time(seed_time.avg * (len(seeds) - _is - 1), True)
)
logger.log(
"\n<<<***>>> The {:02d}/{:02d}-th seed is {:} <finish> other procedures need {:}".format(
_is, len(seeds), seed, need_time
)
)
logger.close()
# |nor_conv_3x3~0|+|nor_conv_1x1~0|nor_conv_3x3~1|+|skip_connect~0|nor_conv_3x3~1|nor_conv_3x3~2|
train_strs = [best_cifar_10_str, best_cifar_100_str, best_ImageNet16_str]
train_single_model(
save_dir="./outputs",
workers=8,
datasets=["ImageNet16-120"],
xpaths="./datasets/imagenet16-120",
splits=[0, 0, 0],
use_less=False,
seeds=[777],
model_str=best_ImageNet16_str,
arch_config={"channel": 16, "num_cells": 8},)
train_single_model(
save_dir="./outputs",
workers=8,
datasets=["cifar10"],
xpaths="./datasets/cifar10",
splits=[0, 0, 0],
use_less=False,
seeds=[777],
model_str=best_cifar_10_str,
arch_config={"channel": 16, "num_cells": 8},)
train_single_model(
save_dir="./outputs",
workers=8,
datasets=["cifar100"],
xpaths="./datasets/cifar100",
splits=[0, 0, 0],
use_less=False,
seeds=[777],
model_str=best_cifar_100_str,
arch_config={"channel": 16, "num_cells": 8},)

View File

@@ -24,6 +24,8 @@ Dataset2Class = {
"ImageNet16-150": 150,
"ImageNet16-120": 120,
"ImageNet16-200": 200,
"aircraft": 100,
"oxford": 102
}
@@ -109,6 +111,12 @@ def get_datasets(name, root, cutout):
elif name.startswith("ImageNet16"):
mean = [x / 255 for x in [122.68, 116.66, 104.01]]
std = [x / 255 for x in [63.22, 61.26, 65.09]]
elif name == 'aircraft':
mean = [0.4785, 0.5100, 0.5338]
std = [0.1845, 0.1830, 0.2060]
elif name == 'oxford':
mean = [0.4811, 0.4492, 0.3957]
std = [0.2260, 0.2231, 0.2249]
else:
raise TypeError("Unknown dataset : {:}".format(name))
@@ -127,6 +135,13 @@ def get_datasets(name, root, cutout):
[transforms.ToTensor(), transforms.Normalize(mean, std)]
)
xshape = (1, 3, 32, 32)
elif name.startswith("aircraft") or name.startswith("oxford"):
lists = [transforms.RandomCrop(16, padding=0), transforms.ToTensor(), transforms.Normalize(mean, std)]
if cutout > 0:
lists += [CUTOUT(cutout)]
train_transform = transforms.Compose(lists)
test_transform = transforms.Compose([transforms.Resize((224,224)), transforms.ToTensor(), transforms.Normalize(mean, std)])
xshape = (1, 3, 16, 16)
elif name.startswith("ImageNet16"):
lists = [
transforms.RandomHorizontalFlip(),
@@ -207,6 +222,10 @@ def get_datasets(name, root, cutout):
root, train=False, transform=test_transform, download=True
)
assert len(train_data) == 50000 and len(test_data) == 10000
elif name == "aircraft":
train_data = dset.ImageFolder(root='/lustre/hpe/ws11/ws11.1/ws/xmuhanma-SWAP/train_datasets/datasets/fgvc-aircraft-2013b/data/train_sorted_image', transform=train_transform)
test_data = dset.ImageFolder(root='/lustre/hpe/ws11/ws11.1/ws/xmuhanma-SWAP/train_datasets/datasets/fgvc-aircraft-2013b/data/train_sorted_image', transform=test_transform)
elif name.startswith("imagenet-1k"):
train_data = dset.ImageFolder(osp.join(root, "train"), train_transform)
test_data = dset.ImageFolder(osp.join(root, "val"), test_transform)
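The aircraft/oxford mean and std constants added above look like per-channel pixel statistics. A hedged sketch of how such values can be estimated for an `ImageFolder` dataset (the resize size, batch size, and `root` path are assumptions for illustration):

```python
import torch
from torch.utils.data import DataLoader
import torchvision.datasets as dset
import torchvision.transforms as transforms

def channel_stats(root, size=224, batch_size=64):
    # Estimate per-channel mean/std over all pixels of all images.
    data = dset.ImageFolder(root, transforms.Compose(
        [transforms.Resize((size, size)), transforms.ToTensor()]))
    loader = DataLoader(data, batch_size=batch_size, num_workers=4)
    n, mean, sq = 0, torch.zeros(3), torch.zeros(3)
    for x, _ in loader:
        x = x.view(x.size(0), 3, -1)          # flatten spatial dims per channel
        mean += x.mean(dim=2).sum(dim=0)       # accumulate per-image channel means
        sq += x.pow(2).mean(dim=2).sum(dim=0)  # accumulate E[x^2] per channel
        n += x.size(0)
    mean /= n
    std = (sq / n - mean.pow(2)).sqrt()        # Var = E[x^2] - E[x]^2
    return mean, std
```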

View File

@@ -156,7 +156,7 @@ class Logger(object):
hist.max = float(np.max(values))
hist.num = int(np.prod(values.shape))
hist.sum = float(np.sum(values))
hist.sum_squares = float(np.sum(values ** 2))
hist.sum_squares = float(np.sum(values**2))
# Drop the start of the first bin
bin_edges = bin_edges[1:]

View File

@@ -347,6 +347,10 @@ class GenericNAS201Model(nn.Module):
feature = cell.forward_gdas(feature, alphas, index)
if self.verbose:
verbose_str += "-forward_gdas"
elif self.mode == "gdas_v1":
feature = cell.forward_gdas_v1(feature, alphas, index)
if self.verbose:
verbose_str += "-forward_gdas_v1"
else:
raise ValueError("invalid mode={:}".format(self.mode))
else:

View File

@@ -213,6 +213,13 @@ AllConv3x3_CODE = Structure(
(("nor_conv_3x3", 0), ("nor_conv_3x3", 1), ("nor_conv_3x3", 2)),
] # node-3
)
Number_5374 = Structure(
[
(("nor_conv_3x3", 0),), # node-1
(("nor_conv_1x1", 0), ("nor_conv_3x3", 1)), # node-2
(("skip_connect", 0), ("none", 1), ("nor_conv_3x3", 2)), # node-3
]
)
AllFull_CODE = Structure(
[
@@ -271,4 +278,5 @@ architectures = {
"all_c1x1": AllConv1x1_CODE,
"all_idnt": AllIdentity_CODE,
"all_full": AllFull_CODE,
"5374": Number_5374,
}
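For cross-reference, `Number_5374` above should correspond to a NAS-Bench-201 genotype string (one `|op~j|` token per incoming edge, where `j` is the source node), matching the `CellStructure.str2structure` / `tostr` usage seen in test_network.py in this compare. A small sanity check, assuming the usual round-trip behavior:

```python
from xautodl.models import CellStructure

genotype = "|nor_conv_3x3~0|+|nor_conv_1x1~0|nor_conv_3x3~1|+|skip_connect~0|none~1|nor_conv_3x3~2|"
arch = CellStructure.str2structure(genotype)
assert arch.tostr() == genotype  # round-trip check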

View File

@@ -85,6 +85,20 @@ class NAS201SearchCell(nn.Module):
nodes.append(sum(inter_nodes))
return nodes[-1]
# GDAS Variant: https://github.com/D-X-Y/AutoDL-Projects/issues/119
def forward_gdas_v1(self, inputs, hardwts, index):
nodes = [inputs]
for i in range(1, self.max_nodes):
inter_nodes = []
for j in range(i):
node_str = "{:}<-{:}".format(i, j)
weights = hardwts[self.edge2index[node_str]]
argmaxs = index[self.edge2index[node_str]].item()
weigsum = weights[argmaxs] * self.edges[node_str](nodes[j])
inter_nodes.append(weigsum)
nodes.append(sum(inter_nodes))
return nodes[-1]
# joint
def forward_joint(self, inputs, weightss):
nodes = [inputs]
@@ -152,6 +166,9 @@ class NAS201SearchCell(nn.Module):
return nodes[-1]
# Learning Transferable Architectures for Scalable Image Recognition, CVPR 2018
class MixedOp(nn.Module):
def __init__(self, space, C, stride, affine, track_running_stats):
super(MixedOp, self).__init__()
@@ -167,7 +184,6 @@ class MixedOp(nn.Module):
return sum(w * op(x) for w, op in zip(weights, self._ops))
# Learning Transferable Architectures for Scalable Image Recognition, CVPR 2018
class NASNetSearchCell(nn.Module):
def __init__(
self,
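To make the `forward_gdas_v1` addition concrete: standard GDAS keeps every candidate's (near-one-hot) weight in the sum, so gradients reach all architecture weights via the straight-through trick, but only the argmax operation is actually evaluated; `gdas_v1` keeps only the selected weight-times-op term, dropping the other candidates from the forward sum entirely. A minimal standalone sketch of the two mixed-edge computations (with `ops` a plain list of callables; names here are illustrative, not the repo's):

```python
import torch

def mix_edge_gdas(ops, x, hardwts, index):
    # Original GDAS: unselected weights still enter the sum as scalars,
    # so gradients flow to every architecture weight through the softmax.
    return sum(hardwts[k] * op(x) if k == index else hardwts[k]
               for k, op in enumerate(ops))

def mix_edge_gdas_v1(ops, x, hardwts, index):
    # gdas_v1 (this diff): evaluate only the argmax op, scaled by its weight.
    return hardwts[index] * ops[index](x)

# toy usage: two candidate "ops" on a one-element tensor
ops = [lambda t: t * 2.0, lambda t: t + 1.0]
x = torch.tensor([3.0])
hardwts = torch.tensor([1.0, 0.0])  # hard one-hot (straight-through in the real code)
print(mix_edge_gdas(ops, x, hardwts, 0))     # tensor([6.])
print(mix_edge_gdas_v1(ops, x, hardwts, 0))  # tensor([6.])
```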

View File

@@ -155,7 +155,7 @@ class ExponentialLR(_LRScheduler):
if self.current_epoch >= self.warmup_epochs:
last_epoch = self.current_epoch - self.warmup_epochs
assert last_epoch >= 0, "invalid last_epoch : {:}".format(last_epoch)
lr = base_lr * (self.gamma ** last_epoch)
lr = base_lr * (self.gamma**last_epoch)
else:
lr = (
self.current_epoch / self.warmup_epochs

View File

@@ -12,6 +12,7 @@ def obtain_accuracy(output, target, topk=(1,)):
res = []
for k in topk:
correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
# correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
res.append(correct_k.mul_(100.0 / batch_size))
return res
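The commented-out line documents why this change matters: `correct` comes from a transposed tensor, so the `correct[:k]` slice can be non-contiguous, and `.view(-1)` then raises a RuntimeError while `.reshape(-1)` copies as needed. A minimal reproduction:

```python
import torch

t = torch.arange(6).reshape(2, 3).t()  # transpose makes the tensor non-contiguous
print(t.is_contiguous())               # False
print(t.reshape(-1))                   # fine: reshape copies when it must
try:
    t.view(-1)                         # RuntimeError: view needs compatible strides
except RuntimeError as err:
    print("view failed:", err)
```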

View File

@@ -122,7 +122,7 @@ class ExponentialParamScheduler(ParamScheduler):
self._decay = decay
def __call__(self, where: float) -> float:
return self._start_value * (self._decay ** where)
return self._start_value * (self._decay**where)
class LinearParamScheduler(ParamScheduler):