Automating Design of Efficient Models with Neural Architecture Search (Seminar)
Abstract
Neural Architecture Search (NAS) has emerged as a promising field in deep learning, offering automated methods to discover accurate neural network architectures for various tasks. As deep learning models grow in complexity, the demand rises for models that are not only accurate but also efficient in terms of hardware resource utilization. This paper provides an overview of various NAS techniques with a focus on hardware-aware strategies. The evolution of NAS is traced from early reinforcement learning-based methods to more recent gradient-based optimization techniques, and hardware-specific objectives such as latency and model size, along with their integration into the NAS process, are examined. Furthermore, the impact of NAS on different applications is illustrated by two recent approaches: a training-free method for image classification that leverages knowledge distillation (DisWOT) and a differentiable hardware-aware NAS for image super-resolution (EHANAS). This survey highlights promising advances as well as open research challenges, investigating how this relatively young research field manages to automatically discover accurate and efficient neural network architectures.
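To make the hardware-aware setting concrete, a common formulation (representative of the literature, not drawn verbatim from the works surveyed here) augments the task loss with a differentiable latency estimate. Here $\alpha$ denotes the architecture parameters, $\mathcal{L}_{\text{task}}$ the task loss (e.g., cross-entropy), $\widehat{\mathrm{LAT}}(\alpha)$ a latency predictor for the target hardware, and $\lambda$ a trade-off weight between accuracy and efficiency:

$$\min_{\alpha} \; \mathcal{L}_{\text{task}}(\alpha) + \lambda \cdot \widehat{\mathrm{LAT}}(\alpha)$$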
Topic
Explore the latest advances in neural architecture search (NAS) for automating the design of deep neural networks. In particular, learn about NAS techniques used to search for efficient neural networks that run on resource-constrained devices such as mobile phones.
Tasks
- Learn about NAS, including vanilla and federated NAS, and NAS with reinforcement learning (a toy sketch of the reinforcement-learning approach follows this list).
- Explore how NAS is used to create architectures optimized for small devices.
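For orientation, the following is a minimal, self-contained sketch of the reinforcement-learning flavor of NAS: a REINFORCE-style controller samples architectures from a toy search space and is updated with a reward signal. The search space, hyperparameters, and mock reward below are hypothetical illustrations, not the method of any specific paper; in real RL-based NAS the reward would be the validation accuracy of a trained child network, possibly penalized by measured latency.

```python
import math
import random

# Toy search space: choose one operation per layer.
OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]
NUM_LAYERS = 3

# Controller: independent softmax logits per layer (a stand-in for the
# RNN controller used in RL-based NAS).
logits = [[0.0] * len(OPS) for _ in range(NUM_LAYERS)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample_architecture():
    """Sample one op index per layer from the controller's distribution."""
    arch = []
    for layer in logits:
        probs = softmax(layer)
        arch.append(random.choices(range(len(OPS)), weights=probs)[0])
    return arch

def mock_reward(arch):
    """Hypothetical proxy reward; a real reward would be the validation
    accuracy of the trained child network, minus a latency penalty."""
    return sum(1.0 if OPS[i] == "conv3x3" else 0.2 for i in arch) / NUM_LAYERS

LR = 0.1
baseline = 0.0  # moving-average baseline reduces gradient variance
for step in range(200):
    arch = sample_architecture()
    reward = mock_reward(arch)
    baseline = 0.9 * baseline + 0.1 * reward
    advantage = reward - baseline
    # REINFORCE update: increase the log-probability of the sampled
    # choices in proportion to the advantage.
    for layer, choice in zip(logits, arch):
        probs = softmax(layer)
        for i in range(len(OPS)):
            grad = (1.0 if i == choice else 0.0) - probs[i]
            layer[i] += LR * advantage * grad

best = [OPS[max(range(len(OPS)), key=lambda i: layer[i])] for layer in logits]
print("Most likely architecture after search:", best)
```

After a few hundred controller updates the sampled distribution concentrates on the highest-reward operations; this illustrates the sample-then-reinforce loop that early RL-based NAS runs at far larger scale.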