Software testing with large language models: Survey, landscape, and vision
Pre-trained large language models (LLMs) have recently emerged as a breakthrough
technology in natural language processing and artificial intelligence, with the ability to …
Efficient utilization of pre-trained models: A review of sentiment analysis via prompt learning
K Bu, Y Liu, X Ju - Knowledge-Based Systems, 2024 - Elsevier
Sentiment analysis is one of the traditional well-known tasks in Natural Language
Processing (NLP) research. In recent years, Pre-trained Models (PMs) have become one of …
GraphGPT: Graph instruction tuning for large language models
Graph Neural Networks (GNNs) have evolved to understand graph structures through
recursive exchanges and aggregations among nodes. To enhance robustness, self …
All in one and one for all: A simple yet effective method towards cross-domain graph pretraining
Large Language Models (LLMs) have revolutionized the fields of computer vision (CV) and
natural language processing (NLP). One of the most notable advancements of LLMs is that a …
Towards graph foundation models: A survey and beyond
Emerging as fundamental building blocks for diverse artificial intelligence applications,
foundation models have achieved notable success across natural language processing and …
Universal prompt tuning for graph neural networks
In recent years, prompt tuning has sparked a research surge in adapting pre-trained models.
Unlike the unified pre-training strategy employed in the language field, the graph field …
ZeroG: Investigating cross-dataset zero-shot transferability in graphs
With the development of foundation models such as large language models, zero-shot
transfer learning has become increasingly significant. This is highlighted by the generative …
Lovász principle for unsupervised graph representation learning
This paper focuses on graph-level representation learning that aims to represent graphs as
vectors that can be directly utilized in downstream tasks such as graph classification. We …
HGPrompt: Bridging homogeneous and heterogeneous graphs for few-shot prompt learning
Graph neural networks (GNNs) and heterogeneous graph neural networks (HGNNs) are
prominent techniques for homogeneous and heterogeneous graph representation learning …
Graph prompt learning: A comprehensive survey and beyond
Artificial General Intelligence (AGI) has revolutionized numerous fields, yet its integration
with graph data, a cornerstone in our interconnected world, remains nascent. This paper …