Paper Title

Revisiting Neural Architecture Search

Paper Authors

Anubhav Garg, Amit Kumar Saha, Debo Dutta

Paper Abstract

Neural Architecture Search (NAS) is a collection of methods to craft the way neural networks are built. Current NAS methods are far from ab initio and automatic, as they use manual backbone architectures or micro building blocks (cells), which have had minor breakthroughs in performance compared to random baselines. They also involve a significant manual expert effort in various components of the NAS pipeline. This raises a natural question - Are the current NAS methods still heavily dependent on manual effort in the search space design and wiring like it was done when building models before the advent of NAS? In this paper, instead of merely chasing slight improvements over state-of-the-art (SOTA) performance, we revisit the fundamental approach to NAS and propose a novel approach called ReNAS that can search for the complete neural network without much human effort and is a step closer towards AutoML-nirvana. Our method starts from a complete graph mapped to a neural network and searches for the connections and operations by balancing the exploration and exploitation of the search space. The results are on-par with the SOTA performance with methods that leverage handcrafted blocks. We believe that this approach may lead to newer NAS strategies for a variety of network types.
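The abstract describes the core idea at a high level: begin from a complete graph whose edges are candidate connections carrying candidate operations, then search over those connections and operations while balancing exploration and exploitation. The snippet below is a minimal, hypothetical sketch of what such a search space and a simple exploration/exploitation sampler could look like; the names (CompleteGraphSpace, sample_architecture, OPS) and the epsilon-greedy strategy are illustrative assumptions, not the authors' actual ReNAS algorithm.

```python
# Illustrative sketch only: a complete-graph search space over candidate
# operations, sampled with an epsilon-greedy rule to trade off exploration
# and exploitation. Not the ReNAS implementation from the paper.
import random
from itertools import combinations

# Hypothetical candidate operations per edge; "none" prunes the connection.
OPS = ["identity", "conv3x3", "conv5x5", "maxpool3x3", "none"]

class CompleteGraphSpace:
    def __init__(self, num_nodes: int):
        # Every pair (i, j) with i < j is a potential connection in the complete graph.
        self.edges = list(combinations(range(num_nodes), 2))
        # Running reward estimate and visit count per (edge, operation) choice.
        self.scores = {(e, op): 0.0 for e in self.edges for op in OPS}
        self.counts = {(e, op): 0 for e in self.edges for op in OPS}

    def sample_architecture(self, epsilon: float = 0.2):
        """Pick one operation per edge: random with probability epsilon (explore),
        otherwise the best-scoring operation seen so far (exploit)."""
        arch = {}
        for e in self.edges:
            if random.random() < epsilon:
                arch[e] = random.choice(OPS)
            else:
                arch[e] = max(OPS, key=lambda op: self.scores[(e, op)])
        return arch

    def update(self, arch, reward: float):
        """Fold an evaluated architecture's reward into per-(edge, op) running averages."""
        for e, op in arch.items():
            key = (e, op)
            self.counts[key] += 1
            self.scores[key] += (reward - self.scores[key]) / self.counts[key]

if __name__ == "__main__":
    space = CompleteGraphSpace(num_nodes=5)
    for step in range(10):
        arch = space.sample_architecture()
        reward = random.random()  # stand-in for validation accuracy of the trained candidate
        space.update(arch, reward)
    print(space.sample_architecture(epsilon=0.0))  # current greedy (exploit-only) architecture
```

In practice the reward would come from training and evaluating the sampled network, and the sampler would be replaced by whatever search strategy ReNAS actually uses; the sketch only shows how a complete graph, rather than a handcrafted backbone or cell, can serve as the starting search space.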
