AutoGO: automated computation graph optimization for neural network evolution
Optimizing Deep Neural Networks (DNNs) to obtain high-quality models for efficient
real-world deployment has posed multi-faceted challenges to machine learning engineers …
Adaptive search for broad attention based vision transformers
The Vision Transformer (ViT) has recently prevailed across computer vision tasks thanks to its
powerful image representation capability. Frustratingly, the manual design of efficient …
ParZC: Parametric zero-cost proxies for efficient NAS
Recent advancements in Zero-shot Neural Architecture Search (NAS) highlight the efficacy
of zero-cost proxies in various NAS benchmarks. Several studies propose the automated …
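For context, a zero-cost proxy scores an untrained network from a single mini-batch instead of training it. The sketch below illustrates the general idea with a simple gradient-norm score in PyTorch; the function name and the score itself are illustrative assumptions, not the parametric proxies proposed in ParZC.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_norm_proxy(model: nn.Module, batch: torch.Tensor, targets: torch.Tensor) -> float:
    """Score an untrained model by the L2 norm of its gradients on one mini-batch.

    Generic zero-cost proxy for illustration only; ParZC instead learns a
    parametric proxy rather than using a fixed hand-crafted score.
    """
    model.zero_grad()
    loss = F.cross_entropy(model(batch), targets)
    loss.backward()
    squared_sum = sum(p.grad.norm().item() ** 2
                      for p in model.parameters() if p.grad is not None)
    return squared_sum ** 0.5

# Usage: rank candidate architectures by proxy score instead of training each one.
# x, y = next(iter(loader))
# scores = {name: grad_norm_proxy(net, x, y) for name, net in candidates.items()}
```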
Building Optimal Neural Architectures using Interpretable Knowledge
Neural Architecture Search is a costly practice. The fact that a search space can
span a vast number of design choices with each architecture evaluation taking nontrivial …
Rethinking neural architecture representation for predictors: Topological encoding in pixel space
Neural predictors (NPs) aim to swiftly evaluate architectures during the neural architecture
search (NAS) process. Precise evaluations with NPs heavily depend on the representation …
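As a loose illustration of what encoding topology "in pixel space" can mean, the sketch below rasterizes a cell's adjacency matrix and one-hot operation labels into a single 2-D array that a convolutional predictor could consume; this layout is an assumption for illustration, not the encoding defined in the paper.

```python
import numpy as np

def encode_cell_as_image(adjacency: np.ndarray, op_ids: np.ndarray, num_ops: int) -> np.ndarray:
    """Pack a cell's DAG topology and node operations into one 2-D "image".

    adjacency: (n, n) binary matrix of directed edges.
    op_ids:    (n,) integer operation index per node.
    Returns an (n, n + num_ops) array: adjacency on the left, one-hot ops on the right.
    Illustrative layout only; the paper defines its own pixel-space encoding.
    """
    n = adjacency.shape[0]
    one_hot = np.zeros((n, num_ops), dtype=adjacency.dtype)
    one_hot[np.arange(n), op_ids] = 1
    return np.concatenate([adjacency, one_hot], axis=1)

# Usage: a 4-node cell with 3 candidate operations -> a (4, 7) input for a CNN predictor.
adj = np.triu(np.ones((4, 4), dtype=np.float32), k=1)
image = encode_cell_as_image(adj, np.array([0, 2, 1, 2]), num_ops=3)
```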
Semi-supervised accuracy predictor-based multi-objective neural architecture search
The rise of neural architecture search (NAS) reflects a deepening exploration of the relationship
between a neural network's architecture and its performance (e.g., accuracy). Many NAS methods are …
Fine-Grained Complexity-Driven Latency Predictor in Hardware-Aware Neural Architecture Search using Composite Loss
Efficient hardware-aware neural architecture search is crucial for automating the creation
of network architectures that are optimized for resource-limited platforms. However …
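As a rough illustration of what a composite objective for a latency predictor can look like, the sketch below blends a regression term with a pairwise ranking term in PyTorch; the specific terms and weighting are assumptions for illustration, not the loss proposed in the paper.

```python
import torch
import torch.nn.functional as F

def composite_latency_loss(pred: torch.Tensor, target: torch.Tensor,
                           alpha: float = 0.5, margin: float = 0.0) -> torch.Tensor:
    """Blend absolute-error regression with pairwise ranking agreement.

    pred, target: (N,) predicted and measured latencies for a batch of architectures.
    Illustrative only; the paper's composite loss may use different terms and weights.
    """
    mse = F.mse_loss(pred, target)
    # Pairwise ranking: penalize pairs whose predicted order contradicts the measured order.
    diff_pred = pred.unsqueeze(0) - pred.unsqueeze(1)
    diff_true = target.unsqueeze(0) - target.unsqueeze(1)
    ranking = F.relu(margin - diff_pred * torch.sign(diff_true)).mean()
    return alpha * mse + (1.0 - alpha) * ranking
```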
SWAP-NAS: Sample-Wise Activation Patterns For Ultra-Fast NAS
Training-free metrics (aka zero-cost proxies) are widely used to avoid resource-intensive
neural network training, especially in Neural Architecture Search (NAS). Recent studies …
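To give a flavour of activation-pattern-based training-free scoring, the sketch below counts the distinct binary ReLU activation patterns a mini-batch induces in an untrained network (assuming the model uses nn.ReLU modules); this is a generic illustration of the idea, not SWAP-NAS's sample-wise formulation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def activation_pattern_score(model: nn.Module, batch: torch.Tensor) -> int:
    """Count distinct binary ReLU activation patterns produced by one mini-batch.

    Illustrative training-free metric only; SWAP-NAS defines its own
    sample-wise activation-pattern score, which this does not reproduce.
    """
    patterns = []

    def hook(_module, _inputs, output):
        patterns.append((output > 0).flatten(1))  # one binary pattern per sample

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    model(batch)
    for h in handles:
        h.remove()
    codes = torch.cat(patterns, dim=1).to(torch.int8)  # concatenate patterns per sample
    return torch.unique(codes, dim=0).shape[0]         # number of distinct patterns
```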
Hufu: A Modality-Agnostic Watermarking System for Pre-Trained Transformers via Permutation Equivariance
With the blossoming of deep learning models and services, safeguarding valuable
model parameters from theft has become an imperative concern. Watermarking is …
Permutation Equivariance of Transformers and Its Applications
Revolutionizing the field of deep learning, Transformer-based models have achieved
remarkable performance in many tasks. Recent research has recognized that these models are …
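The weight-permutation symmetry that this line of work (and the Hufu watermarking system above) builds on is easy to verify numerically. Below is a minimal sketch for a two-layer MLP: permuting the hidden units consistently in both weight matrices leaves the computed function unchanged. The Transformer case treated in the papers permutes, e.g., FFN hidden units and attention heads analogously; this toy check is not their full construction.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
mlp = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
permuted = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x = torch.randn(2, 8)

perm = torch.randperm(16)  # permutation of the hidden dimension
with torch.no_grad():
    permuted[0].weight.copy_(mlp[0].weight[perm])     # permute hidden rows of the first layer
    permuted[0].bias.copy_(mlp[0].bias[perm])
    permuted[2].weight.copy_(mlp[2].weight[:, perm])  # permute matching columns of the second layer
    permuted[2].bias.copy_(mlp[2].bias)

# Outputs are identical: the permuted network computes the same function.
assert torch.allclose(mlp(x), permuted(x), atol=1e-6)
```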