RL-based or evolution-based NAS requires thousands of GPU-days of search and training to achieve state-of-the-art computer vision results, as described in the NASNet, MnasNet and MobileNetV3 papers.<ref name="Zoph 2017" /><ref name="mNASNet2">{{cite arXiv|eprint=1807.11626|last1=Tan|first1=Mingxing|title=MnasNet: Platform-Aware Neural Architecture Search for Mobile|last2=Chen|first2=Bo|last3=Pang|first3=Ruoming|last4=Vasudevan|first4=Vijay|last5=Sandler|first5=Mark|last6=Howard|first6=Andrew|last7=Le|first7=Quoc V.|class=cs.CV|year=2018}}</ref><ref name="MobileNetV3">{{cite arXiv|date=2019-05-06|title=Searching for MobileNetV3|eprint=1905.02244|class=cs.CV|last1=Howard|first1=Andrew|last2=Sandler|first2=Mark|last3=Chu|first3=Grace|last4=Chen|first4=Liang-Chieh|last5=Chen|first5=Bo|last6=Tan|first6=Mingxing|last7=Wang|first7=Weijun|last8=Zhu|first8=Yukun|last9=Pang|first9=Ruoming|last10=Vasudevan|first10=Vijay|last11=Le|first11=Quoc V.|last12=Adam|first12=Hartwig}}</ref>
To reduce computational cost, many recent NAS methods rely on the weight-sharing idea.<ref>Pham, H., Guan, M.Y., Zoph, B., Le, Q.V., Dean, J.: [[arxiv:1802.03268|Efficient neural architecture search via parameter sharing]]. In: Proceedings of the 35th International Conference on Machine Learning (2018).</ref><ref>Li, L., Talwalkar, A.: [[arxiv:1902.07638|Random search and reproducibility for neural architecture search]]. In: Proceedings of the Conference on Uncertainty in Artificial Intelligence (2019).</ref> In this approach, a single overparameterized supernetwork (also known as the one-shot model) is defined. A supernetwork is a very large [[Directed acyclic graph|directed acyclic graph]] (DAG) whose subgraphs are different candidate neural networks. Thus, in a supernetwork the weights are shared among a large number of sub-architectures that have edges in common, each of which is treated as a path within the supernetwork. The essential idea is to train one supernetwork that spans many options for the final design, rather than generating and training thousands of networks independently. In addition to the learned network weights, a set of architecture parameters is learned to express a preference for one module over another. Such methods reduce the required computational resources to only a few GPU days.
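The weight-sharing idea can be illustrated with a minimal sketch (written in PyTorch, with a hypothetical candidate set of two convolutions and a skip connection; it is not the code of any particular NAS method). Each edge of the shared supernetwork holds all candidate operations, and a sub-architecture is simply a choice of one operation per edge that reuses the stored weights:

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class MixedEdge(nn.Module):
    """One supernetwork edge holding several candidate operations.
    Every candidate keeps its weights inside the shared supernetwork,
    so any sub-architecture selecting this edge reuses those weights."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 convolution
            nn.Conv2d(channels, channels, 5, padding=2),  # 5x5 convolution
            nn.Identity(),                                # skip connection
        ])

    def forward(self, x, op_index):
        # A sub-architecture picks exactly one operation on this edge.
        return self.ops[op_index](x)

class SuperNet(nn.Module):
    """Chain of mixed edges; each tuple of op indices is one candidate network."""
    def __init__(self, channels=16, num_edges=4):
        super().__init__()
        self.edges = nn.ModuleList([MixedEdge(channels) for _ in range(num_edges)])

    def forward(self, x, architecture):
        # `architecture` lists one op index per edge, i.e. a path in the DAG.
        for edge, op_index in zip(self.edges, architecture):
            x = edge(x, op_index)
        return x

# Two different sub-architectures evaluated with the same shared weights.
net = SuperNet()
x = torch.randn(1, 16, 8, 8)
y1 = net(x, architecture=[0, 2, 1, 0])
y2 = net(x, architecture=[1, 1, 2, 0])
</syntaxhighlight>

Because both calls reuse the same modules, training any one sub-architecture also updates the weights seen by every other sub-architecture that shares those edges, which is where the cost savings of weight sharing come from.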
More recent works further combine this weight-sharing paradigm with a continuous relaxation of the search space,<ref>H. Cai, L. Zhu, and S. Han. [[arxiv:1812.00332|ProxylessNAS: Direct neural architecture search on target task and hardware]]. ICLR, 2019.</ref><ref>X. Dong and Y. Yang. [[arxiv:1910.04465|Searching for a robust neural architecture in four GPU hours]]. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2019.</ref><ref name="H. Liu, K. Simonyan 1806">H. Liu, K. Simonyan, and Y. Yang. [[arxiv:1806.09055|DARTS: Differentiable architecture search]]. In ICLR, 2019.</ref><ref>S. Xie, H. Zheng, C. Liu, and L. Lin. [[arxiv:1812.09926|SNAS: Stochastic neural architecture search]]. ICLR, 2019.</ref> which enables the use of gradient-based optimization methods. These approaches are generally referred to as differentiable NAS and have proven very efficient in exploring the search space of neural architectures. One of the most popular gradient-based NAS algorithms is DARTS.<ref name="H. Liu, K. Simonyan 1806"/> However, DARTS faces problems such as performance collapse due to an inevitable aggregation of skip connections and poor generalization, which later algorithms have addressed.<ref>Chu, Xiangxiang and Zhou, Tianbao and Zhang, Bo and Li, Jixiang. [[arxiv:1911.12126|Fair DARTS: Eliminating unfair advantages in differentiable architecture search]]. In ECCV, 2020.</ref><ref name="Arber Zela 1909">Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, Frank Hutter. [[arxiv:1909.09656|Understanding and Robustifying Differentiable Architecture Search]]. In ICLR, 2020.</ref><ref name="Xiangning Chen 2002">Xiangning Chen, Cho-Jui Hsieh. [[arxiv:2002.05283|Stabilizing Differentiable Architecture Search via Perturbation-based Regularization]]. In ICML, 2020.</ref><ref>Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, Hongkai Xiong. [[arxiv:1907.05737|PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search]]. In ICLR, 2020.</ref> Some of these methods<ref name="Arber Zela 1909"/><ref name="Xiangning Chen 2002"/> aim at robustifying DARTS and making the validation-accuracy landscape smoother by introducing a Hessian-norm-based regularization and random smoothing/adversarial attacks, respectively. The cause of this performance degradation was later analyzed from the perspective of architecture selection.<ref>Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, Cho-Jui Hsieh. [[arxiv:2108.04392|Rethinking Architecture Selection in Differentiable NAS]]. In ICLR, 2021.</ref>
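A minimal sketch of the continuous relaxation behind DARTS-style methods (again in PyTorch, with a hypothetical candidate set and a dummy quadratic loss standing in for the real task loss) replaces the hard choice of one operation per edge with a softmax-weighted mixture, so that the architecture parameters can be updated by gradient descent in alternation with the network weights:

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelaxedEdge(nn.Module):
    """Continuous relaxation of one edge: the output is a softmax-weighted
    sum of all candidate operations, so the architecture logits `alpha`
    are differentiable with respect to the loss."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        # Architecture parameters: one logit per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

edge = RelaxedEdge(channels=8)
weight_params = [p for name, p in edge.named_parameters() if name != "alpha"]
w_opt = torch.optim.SGD(weight_params, lr=0.01)   # updates network weights
a_opt = torch.optim.Adam([edge.alpha], lr=0.001)  # updates architecture params

x_train, x_val = torch.randn(2, 8, 4, 4), torch.randn(2, 8, 4, 4)
for _ in range(10):
    # Network weights are trained on the training split ...
    w_opt.zero_grad(); edge(x_train).pow(2).mean().backward(); w_opt.step()
    # ... while architecture parameters are trained on the validation split.
    a_opt.zero_grad(); edge(x_val).pow(2).mean().backward(); a_opt.step()

# After the search, the discrete architecture keeps the op with the largest logit.
best_op = int(edge.alpha.argmax())
</syntaxhighlight>

This sketch corresponds to the first-order approximation of the bilevel optimization problem; the full DARTS algorithm additionally approximates the gradient of the validation loss through the weight-update step.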
Differentiable NAS has been shown to produce competitive results using a fraction of the search time required by RL-based search methods. For example, FBNet (which is short for Facebook Berkeley Network) demonstrated that supernetwork-based search produces networks that outperform the speed-accuracy tradeoff curve of MnasNet and MobileNetV2 on the ImageNet image-classification dataset. FBNet accomplishes this using over 400x ''less'' search time than was used for MnasNet.<ref name="FBNet">{{cite arXiv|eprint=1812.03443|last1=Wu|first1=Bichen|title=FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search|last2=Dai|first2=Xiaoliang|last3=Zhang|first3=Peizhao|last4=Wang|first4=Yanghan|last5=Sun|first5=Fei|last6=Wu|first6=Yiming|last7=Tian|first7=Yuandong|last8=Vajda|first8=Peter|last9=Jia|first9=Yangqing|last10=Keutzer|first10=Kurt|class=cs.CV|date=24 May 2019}}</ref><ref name="MobileNetV2">{{cite arXiv|eprint=1801.04381|last1=Sandler|first1=Mark|title=MobileNetV2: Inverted Residuals and Linear Bottlenecks|last2=Howard|first2=Andrew|last3=Zhu|first3=Menglong|last4=Zhmoginov|first4=Andrey|last5=Chen|first5=Liang-Chieh|class=cs.CV|year=2018}}</ref><ref>{{Cite web|url=http://sites.ieee.org/scv-cas/files/2019/05/2019-05-22-ieee-co-design-trim.pdf|title=Co-Design of DNNs and NN Accelerators|last=Keutzer|first=Kurt|date=2019-05-22|website=IEEE|url-status=|archive-url=|archive-date=|access-date=2019-09-26}}</ref> Further, SqueezeNAS demonstrated that supernetwork-based NAS produces neural networks that outperform the speed-accuracy tradeoff curve of MobileNetV3 on the Cityscapes semantic segmentation dataset, while using over 100x less search time than was used in the MobileNetV3 authors' RL-based search.<ref name="SqueezeNAS">{{cite arXiv|eprint=1908.01748|last1=Shaw|first1=Albert|title=SqueezeNAS: Fast neural architecture search for faster semantic segmentation|last2=Hunter|first2=Daniel|last3=Iandola|first3=Forrest|last4=Sidhu|first4=Sammy|class=cs.CV|year=2019}}</ref><ref>{{Cite news|url=https://www.eetimes.com/document.asp?doc_id=1335063|title=Does Your AI Chip Have Its Own DNN?|last=Yoshida|first=Junko|date=2019-08-25|work=EE Times|access-date=2019-09-26}}</ref>
== Neural architecture search benchmarks ==
Neural architecture search often requires large computational resources, due to its expensive training and evaluation phases. This also leads to a large carbon footprint for the evaluation of these methods. To overcome this limitation, NAS benchmarks<ref>Ying, C., Klein, A., Christiansen, E., Real, E., Murphy, K. and Hutter, F., 2019, May. NAS-Bench-101: [[arxiv:1902.09635|Towards reproducible neural architecture search]]. In ''International Conference on Machine Learning'' (pp. 7105-7114). PMLR.</ref><ref>Zela, A., Siems, J. and Hutter, F., 2020. NAS-Bench-1Shot1: Benchmarking and dissecting one-shot neural architecture search. ''arXiv preprint [[arXiv:2001.10422]]''.</ref><ref>Dong, X. and Yang, Y., 2020. NAS-Bench-201: Extending the scope of reproducible neural architecture search. ''arXiv preprint [[arXiv:2001.00326]]''.</ref><ref>Siems, J., Zimmer, L., Zela, A., Lukasik, J., Keuper, M. and Hutter, F., 2020. NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search. ''arXiv preprint [[arXiv:2008.09777]]''.</ref> have been introduced, from which one can either query or predict the final performance of neural architectures in seconds. A NAS benchmark is defined as a dataset with a fixed train-test split, a search space, and a fixed training pipeline (hyperparameters). There are primarily two types of NAS benchmarks: surrogate NAS benchmarks and tabular NAS benchmarks. A surrogate benchmark uses a surrogate model (e.g., a neural network) to predict the performance of an architecture from the search space, whereas a tabular benchmark queries the actual performance of an architecture trained to convergence. Both types of benchmarks are queryable and can be used to efficiently simulate many NAS algorithms, using only a CPU to query the benchmark instead of training an architecture from scratch.
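As a hypothetical illustration of how a tabular benchmark is used (the operations, the lookup table, and the accuracies below are invented for the example; real benchmarks such as NAS-Bench-101 or NAS-Bench-201 expose analogous query interfaces), a NAS algorithm, here plain random search, can be simulated on a CPU by looking up precomputed accuracies instead of training each architecture:

<syntaxhighlight lang="python">
import random

# Hypothetical tabular benchmark: a lookup table mapping an architecture
# encoding (one operation choice per edge) to a precomputed final accuracy.
OPS = ["conv3x3", "conv5x5", "skip"]
NUM_EDGES = 4

def make_toy_benchmark(seed=0):
    """Enumerate every architecture in the small search space and assign it
    a made-up accuracy, standing in for the results of full training runs."""
    rng = random.Random(seed)
    table = {}
    for index in range(len(OPS) ** NUM_EDGES):
        code, arch = index, []
        for _ in range(NUM_EDGES):
            arch.append(OPS[code % len(OPS)])
            code //= len(OPS)
        table[tuple(arch)] = round(rng.uniform(0.85, 0.95), 4)
    return table

def query(benchmark, architecture):
    """Return the stored accuracy instead of training the architecture."""
    return benchmark[tuple(architecture)]

# Simulating random search: each "evaluation" is just a table lookup.
benchmark = make_toy_benchmark()
best_arch, best_acc = None, 0.0
for _ in range(50):
    candidate = [random.choice(OPS) for _ in range(NUM_EDGES)]
    accuracy = query(benchmark, candidate)
    if accuracy > best_acc:
        best_arch, best_acc = candidate, accuracy
print(best_arch, best_acc)
</syntaxhighlight>

A surrogate benchmark would replace the lookup table with a learned model that predicts the accuracy of architectures that were never actually trained.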
==See also==