ChatGPT-like large-scale foundation models for prognostics and health management: A survey and roadmaps
PHM technology is vital in industrial production and maintenance, identifying and predicting
potential equipment failures and damages. This enables proactive maintenance measures …
Vision mamba: Efficient visual representation learning with bidirectional state space model
Recently, the state space models (SSMs) with efficient hardware-aware designs, i.e., the
Mamba deep learning model, have shown great potential for long sequence modeling …
One fits all: Power general time series analysis by pretrained LM
Although we have witnessed great success of pre-trained models in natural language
processing (NLP) and computer vision (CV), limited progress has been made for general …
Hornet: Efficient high-order spatial interactions with recursive gated convolutions
Recent progress in vision Transformers exhibits great success in various tasks driven by the
new spatial modeling mechanism based on dot-product self-attention. In this paper, we …
Scaling up your kernels to 31x31: Revisiting large kernel design in cnns
We revisit large kernel design in modern convolutional neural networks (CNNs). Inspired by
recent advances in vision transformers (ViTs), in this paper, we demonstrate that using a few …
Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting
Long-term time series forecasting is challenging since prediction accuracy tends to
decrease dramatically with the increasing horizon. Although Transformer-based methods …
A convnet for the 2020s
The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers
(ViTs), which quickly superseded ConvNets as the state-of-the-art image classification …
Maxim: Multi-axis mlp for image processing
Recent progress on Transformers and multi-layer perceptron (MLP) models provides new
network architectural designs for computer vision tasks. Although these models proved to be …
Metaformer is actually what you need for vision
Transformers have shown great potential in computer vision tasks. A common belief is their
attention-based token mixer module contributes most to their competence. However, recent …
Denseclip: Language-guided dense prediction with context-aware prompting
Recent progress has shown that large-scale pre-training using contrastive image-text pairs
can be a promising alternative for high-quality visual representation learning from natural …