Lightweight deep learning for resource-constrained environments: A survey
Over the past decade, deep learning has come to dominate various domains of artificial
intelligence, including natural language processing, computer vision …
A survey of design and optimization for systolic array-based DNN accelerators
In recent years, the systolic array has proven to be a successful architecture for
DNN hardware accelerators. However, the design of systolic arrays has also encountered many …
EfficientViT: Memory-efficient vision transformer with cascaded group attention
Vision transformers have shown great success due to their high model capabilities.
However, their remarkable performance is accompanied by heavy computation costs, which …
InceptionNeXt: When Inception meets ConvNeXt
Inspired by the long-range modeling ability of ViTs, large-kernel convolutions have recently been
widely studied and adopted to enlarge the receptive field and improve model performance …
Scale-aware modulation meet transformer
This paper presents a new vision Transformer, Scale-Aware Modulation Transformer (SMT),
that can handle various downstream tasks efficiently by combining the convolutional network …
MobileOne: An improved one millisecond mobile backbone
Efficient neural network backbones for mobile devices are often optimized for metrics such
as FLOPs or parameter count. However, these metrics may not correlate well with latency of …
Mobile-Former: Bridging MobileNet and transformer
We present Mobile-Former, a parallel design of MobileNet and transformer with a
two-way bridge in between. This structure leverages the advantages of MobileNet at local …
Conv2Former: A simple transformer-style ConvNet for visual recognition
Vision Transformers have recently been the most popular network architecture in visual
recognition due to their strong ability to encode global information. However, their high …
EdgeViTs: Competing light-weight CNNs on mobile devices with vision transformers
Self-attention based models such as vision transformers (ViTs) have emerged as a very
competitive architectural alternative to convolutional neural networks (CNNs) in computer …
EfficientNetV2: Smaller models and faster training
This paper introduces EfficientNetV2, a new family of convolutional networks that have faster
training speed and better parameter efficiency than previous models. To develop these …