Neural architecture search
Differentiable NAS has been shown to produce competitive results using a fraction of the search time required by RL-based search methods. For example, FBNet (which is short for Facebook Berkeley Network) demonstrated that supernetwork-based search produces networks that outperform the speed-accuracy tradeoff curve of mNASNet and MobileNetV2 on the ImageNet image-classification dataset. FBNet accomplishes this using over 400x ''less'' search time than was used for mNASNet.<ref name="FBNet">{{cite arXiv|eprint=1812.03443|last1=Wu|first1=Bichen|title=FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search|last2=Dai|first2=Xiaoliang|last3=Zhang|first3=Peizhao|last4=Wang|first4=Yanghan|last5=Sun|first5=Fei|last6=Wu|first6=Yiming|last7=Tian|first7=Yuandong|last8=Vajda|first8=Peter|last9=Jia|first9=Yangqing|last10=Keutzer|first10=Kurt|class=cs.CV|date=24 May 2019}}</ref><ref name="MobileNetV2">{{cite arXiv|eprint=1801.04381|last1=Sandler|first1=Mark|title=MobileNetV2: Inverted Residuals and Linear Bottlenecks|last2=Howard|first2=Andrew|last3=Zhu|first3=Menglong|last4=Zhmoginov|first4=Andrey|last5=Chen|first5=Liang-Chieh|class=cs.CV|year=2018}}</ref><ref>{{Cite web|url=http://sites.ieee.org/scv-cas/files/2019/05/2019-05-22-ieee-co-design-trim.pdf|title=Co-Design of DNNs and NN Accelerators|last=Keutzer|first=Kurt|date=2019-05-22|website=IEEE|url-status=|archive-url=|archive-date=|access-date=2019-09-26}}</ref> Further, SqueezeNAS demonstrated that supernetwork-based NAS produces neural networks that outperform the speed-accuracy tradeoff curve of MobileNetV3 on the Cityscapes semantic segmentation dataset, and SqueezeNAS uses over 100x less search time than was used in the MobileNetV3 authors' RL-based search.<ref name="SqueezeNAS">{{cite arXiv|eprint=1908.01748|last1=Shaw|first1=Albert|title=SqueezeNAS: Fast neural architecture search for faster semantic 
segmentation|last2=Hunter|first2=Daniel|last3=Iandola|first3=Forrest|last4=Sidhu|first4=Sammy|class=cs.CV|year=2019}}</ref><ref>{{Cite news|url=https://www.eetimes.com/document.asp?doc_id=1335063|title=Does Your AI Chip Have Its Own DNN?|last=Yoshida|first=Junko|date=2019-08-25|work=EE Times|access-date=2019-09-26}}</ref>
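The supernetwork relaxation behind differentiable NAS can be sketched in a few lines. In the sketch below, each "edge" of the network holds every candidate operation at once, weighted by a softmax over continuous architecture parameters, so the architecture choice becomes differentiable; the candidate operations and parameter values are illustrative stand-ins (in the style of DARTS-like methods), not FBNet's or SqueezeNAS's actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over architecture logits."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Candidate operations on one "edge" of the supernetwork.
# (Illustrative stand-ins for real ops such as 3x3 conv, skip-connect, pooling.)
candidate_ops = [
    lambda x: x,                  # identity / skip connection
    lambda x: 2.0 * x,            # stand-in for a learned transformation
    lambda x: np.zeros_like(x),   # "zero" op, effectively dropping the edge
]

# Continuous architecture parameters: one logit per candidate op.
# These are trained by gradient descent alongside the network weights.
alpha = np.array([0.1, 1.5, -0.5])

def mixed_op(x, alpha):
    """Softmax-weighted sum of all candidate ops (the relaxed edge)."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, candidate_ops))

x = np.ones(4)
y = mixed_op(x, alpha)

# After search, the edge is discretized: keep the op with the largest logit.
chosen = int(np.argmax(alpha))
```

Because the mixed output is differentiable with respect to `alpha`, a single training run of the supernetwork can rank all candidate operations at once, which is the source of the large search-time savings over training each architecture separately under an RL controller.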
 
==NAS Benchmarks==
NAS research is often very computationally expensive, which makes it difficult to reproduce experiments and imposes a barrier to entry for researchers without access to large-scale computation.<ref name=":1">Ying, C., Klein, A., Christiansen, E., Real, E., Murphy, K., & Hutter, F. (2019, May). NAS-Bench-101: Towards reproducible neural architecture search. In International Conference on Machine Learning (pp. 7105-7114). PMLR.</ref> Tabular or surrogate NAS benchmarks facilitate more efficient, effective, and reproducible research on NAS.
 
Widely used NAS benchmarks include:
 
# NAS-Bench-101 <ref name=":1" />
# NAS-Bench-201 <ref>Dong, X., & Yang, Y. (2020). NAS-Bench-201: Extending the scope of reproducible neural architecture search. arXiv preprint arXiv:2001.00326.</ref>
# NAS-Bench-1shot1<ref>Zela, A., Siems, J., & Hutter, F. (2020). NAS-Bench-1shot1: Benchmarking and dissecting one-shot neural architecture search. arXiv preprint arXiv:2001.10422.</ref>
# NAS-Bench-301 <ref>Siems, J., Zimmer, L., Zela, A., Lukasik, J., Keuper, M. and Hutter, F., 2020. NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search. arXiv preprint arXiv:2008.09777.</ref>
# NAS-Bench-ASR <ref>Mehrotra, A., Ramos, A. G. C., Bhattacharya, S., Dudziak, Ł., Vipperla, R., Chau, T., ... & Lane, N. D. (2020, September). NAS-Bench-ASR: Reproducible neural architecture search for speech recognition. In International Conference on Learning Representations.</ref>
# NAS-Bench-NLP<ref>Klyuchnikov, N., Trofimov, I., Artemova, E., Salnikov, M., Fedorov, M., & Burnaev, E. (2020). NAS-Bench-NLP: neural architecture search benchmark for natural language processing. arXiv preprint arXiv:2006.07116.</ref>
# TransNAS-Bench-101<ref>Duan, Y., Chen, X., Xu, H., Chen, Z., Liang, X., Zhang, T., & Li, Z. (2021). Transnas-bench-101: Improving transferability and generalizability of cross-task neural architecture search. In ''Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition'' (pp. 5251-5260).</ref>
# LC-Bench <ref>Zimmer, L., Lindauer, M., & Hutter, F. (2021). Auto-Pytorch: multi-fidelity metalearning for efficient and robust autoDL. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(9), 3079-3090.</ref>
# NAS-Bench-x11 <ref>Yan, S., White, C., Savani, Y., & Hutter, F. (2021). NAS-Bench-x11 and the Power of Learning Curves. Advances in Neural Information Processing Systems, 34.</ref>
# NAS-Bench-Suite<ref>Mehta, Y., White, C., Zela, A., Krishnakumar, A., Zabergja, G., Moradian, S., ... & Hutter, F. (2022). NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy. arXiv preprint arXiv:2201.13396.</ref>
# HW-NAS-Bench <ref>Li, C., Yu, Z., Fu, Y., Zhang, Y., Zhao, Y., You, H., ... & Lin, Y. (2021). HW-NAS-Bench: Hardware-aware neural architecture search benchmark. arXiv preprint arXiv:2103.10584.</ref>
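The idea behind a tabular benchmark can be illustrated with a toy sketch: every architecture in a fixed search space is trained once in advance, and its metrics are stored in a table, so evaluating a candidate during search becomes a cheap lookup instead of a full training run. The architecture encodings, metric values, and `query` helper below are hypothetical, not the real API of NAS-Bench-101 or any other benchmark listed above.

```python
import random

# Hypothetical tabular benchmark: architecture encodings mapped to
# precomputed metrics. The entries are illustrative, not real results.
benchmark_table = {
    ("conv3x3", "conv3x3", "skip"): {"val_accuracy": 0.93, "train_seconds": 1800},
    ("conv3x3", "skip",    "skip"): {"val_accuracy": 0.91, "train_seconds": 1200},
    ("conv1x1", "conv3x3", "pool"): {"val_accuracy": 0.89, "train_seconds": 1500},
}

def query(arch):
    """Return precomputed metrics for an architecture (no training needed)."""
    return benchmark_table[arch]

def random_search(num_samples, seed=0):
    """Toy random search over the tabular space, scored by table lookups."""
    rng = random.Random(seed)
    archs = list(benchmark_table)
    samples = [rng.choice(archs) for _ in range(num_samples)]
    best = max(samples, key=lambda a: query(a)["val_accuracy"])
    return best, query(best)["val_accuracy"]

best_arch, best_acc = random_search(num_samples=10)
```

Because every NAS method queries the same fixed table, results are exactly reproducible and search algorithms can be compared in seconds rather than GPU-years, which is the point of the benchmarks above.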
 
==See also==