Towards understanding generalization of graph neural networks
H Tang, Y Liu - International Conference on Machine …, 2023 - proceedings.mlr.press
Graph neural networks (GNNs) are widely used in machine learning for graph-structured
data. Even though GNNs have achieved remarkable success in real-world applications …
WL meet VC
Recently, many works studied the expressive power of graph neural networks (GNNs) by
linking it to the $1$-dimensional Weisfeiler-Leman algorithm ($1\text{-}\mathsf{WL}$) …
Multi-sensor fusion fault diagnosis method of wind turbine bearing based on adaptive convergent viewable neural networks
Effective condition monitoring and fault diagnosis of rolling bearings, integral components of
rotating machinery, are crucial for ensuring equipment reliability. However, existing …
Graph neural networks for road safety modeling: datasets and evaluations for accident analysis
We consider the problem of traffic accident analysis on a road network based on road
network connections and traffic volume. Previous works have designed various deep …
GFT: Graph Foundation Model with Transferable Tree Vocabulary
Inspired by the success of foundation models in applications such as ChatGPT, as graph
data has been ubiquitous, one can envision the far-reaching impacts that can be brought by …
Weisfeiler-Leman at the margin: When more expressivity matters
The Weisfeiler-Leman algorithm ($1$-WL) is a well-studied heuristic for the graph
isomorphism problem. Recently, the algorithm has played a prominent role in understanding …
Identification of negative transfers in multitask learning using surrogate models
Multitask learning is widely used in practice to train a low-resource target task by
augmenting it with multiple related source tasks. Yet, naively combining all the source tasks …
Analyzing deep PAC-Bayesian learning with neural tangent kernel: Convergence, analytic generalization bound, and efficient hyperparameter selection
PAC-Bayes is a well-established framework for analyzing generalization performance in
machine learning models. This framework provides a bound on the expected population …
Scalable Multitask Learning Using Gradient-based Estimation of Task Affinity
Multitask learning is a widely used paradigm for training models on diverse tasks, with
applications ranging from graph neural networks to language model fine-tuning. Since tasks …
Generalization Error of Graph Neural Networks in the Mean-field Regime
This work provides a theoretical framework for assessing the generalization error of graph
classification tasks via graph neural networks in the over-parameterized regime, where the …