AutoGO: automated computation graph optimization for neural network evolution

M Salameh, K Mills, N Hassanpour… - Advances in …, 2024 - proceedings.neurips.cc
Abstract Optimizing Deep Neural Networks (DNNs) to obtain high-quality models for efficient
real-world deployment has posed multi-faceted challenges to machine learning engineers …

Adaptive search for broad attention based vision transformers

N Li, Y Chen, D Zhao - Neurocomputing, 2025 - Elsevier
Abstract Vision Transformer (ViT) has recently prevailed across computer vision tasks for its powerful image representation capability. Frustratingly, the manual design of efficient …

ParZC: Parametric zero-cost proxies for efficient NAS

P Dong, L Li, X Pan, Z Wei, X Liu, Q Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advancements in Zero-shot Neural Architecture Search (NAS) highlight the efficacy
of zero-cost proxies in various NAS benchmarks. Several studies propose the automated …

Building Optimal Neural Architectures using Interpretable Knowledge

KG Mills, FX Han, M Salameh, S Lu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Abstract Neural Architecture Search is a costly practice. The fact that a search space can
span a vast number of design choices with each architecture evaluation taking nontrivial …

Rethinking neural architecture representation for predictors: Topological encoding in pixel space

C Yu, J Wang, Y Wang, W Ju, C Tang, J Lv - Information Fusion, 2025 - Elsevier
Neural predictors (NPs) aim to swiftly evaluate architectures during the neural architecture
search (NAS) process. Precise evaluations with NPs heavily depend on the representation …

Semi-supervised accuracy predictor-based multi-objective neural architecture search

S Xiao, B Zhao, D Liu - Neurocomputing, 2024 - Elsevier
The rise of neural architecture search (NAS) reflects deep exploration of the relationship between a neural network architecture and its performance (e.g., accuracy). Many NAS methods are …

Fine-Grained Complexity-Driven Latency Predictor in Hardware-Aware Neural Architecture Search using Composite Loss

C Lin, P Yang, C Li, F Cheng, W Lv, Z Wang… - Information Sciences, 2024 - Elsevier
Efficient hardware-aware neural architecture search is crucial for automating the creation of network architectures optimized for resource-limited platforms. However …

SWAP-NAS: Sample-Wise Activation Patterns For Ultra-Fast NAS

Y Peng, A Song, HM Fayek, V Ciesielski… - arXiv preprint arXiv …, 2024 - arxiv.org
Training-free metrics (aka zero-cost proxies) are widely used to avoid resource-intensive
neural network training, especially in Neural Architecture Search (NAS). Recent studies …

Hufu: A Modality-Agnostic Watermarking System for Pre-Trained Transformers via Permutation Equivariance

H Xu, L Xiang, X Ma, B Yang, B Li - arXiv preprint arXiv:2403.05842, 2024 - arxiv.org
With the blossom of deep learning models and services, it has become an imperative
concern to safeguard the valuable model parameters from being stolen. Watermarking is …

Permutation Equivariance of Transformers and Its Applications

H Xu, L Xiang, H Ye, D Yao… - Proceedings of the …, 2024 - openaccess.thecvf.com
Revolutionizing the field of deep learning, Transformer-based models have achieved
remarkable performance in many tasks. Recent research has recognized these models are …