Towards artificial general intelligence (AGI) in the internet of things (IoT): Opportunities and challenges

F Dou, J Ye, G Yuan, Q Lu, W Niu, H Sun… - arXiv preprint arXiv …, 2023 - arxiv.org
Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and
execute tasks with human cognitive abilities, engenders significant anticipation and intrigue …

Emergence and causality in complex systems: a survey of causal emergence and related quantitative studies

B Yuan, J Zhang, A Lyu, J Wu, Z Wang, M Yang, K Liu… - Entropy, 2024 - mdpi.com
Emergence and causality are two fundamental concepts for understanding complex
systems. They are interconnected. On one hand, emergence refers to the phenomenon …

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

Learning invariant graph representations for out-of-distribution generalization

H Li, Z Zhang, X Wang, W Zhu - Advances in Neural …, 2022 - proceedings.neurips.cc
Graph representation learning has shown effectiveness when testing and training graph
data come from the same distribution, but most existing approaches fail to generalize under …

Invariance principle meets information bottleneck for out-of-distribution generalization

K Ahuja, E Caballero, D Zhang… - Advances in …, 2021 - proceedings.neurips.cc
The invariance principle from causality is at the heart of notable approaches such as
invariant risk minimization (IRM) that seek to address out-of-distribution (OOD) …

Out-of-distribution generalization via risk extrapolation (REx)

D Krueger, E Caballero, JH Jacobsen… - International …, 2021 - proceedings.mlr.press
Distributional shift is one of the major obstacles when transferring machine learning
prediction systems from the lab to the real world. To tackle this problem, we assume that …

Improving out-of-distribution robustness via selective augmentation

H Yao, Y Wang, S Li, L Zhang… - International …, 2022 - proceedings.mlr.press
Machine learning algorithms typically assume that training and test examples are
drawn from the same distribution. However, distribution shift is a common problem in real …

Learning causally invariant representations for out-of-distribution generalization on graphs

Y Chen, Y Zhang, Y Bian, H Yang… - Advances in …, 2022 - proceedings.neurips.cc
Despite recent success in using the invariance principle for out-of-distribution (OOD)
generalization on Euclidean data (e.g., images), studies on graph data are still limited …

Gradient matching for domain generalization

Y Shi, J Seely, PHS Torr, N Siddharth… - arXiv preprint arXiv …, 2021 - arxiv.org
Machine learning systems typically assume that the distributions of training and test sets
match closely. However, a critical requirement of such systems in the real world is their …

Heterogeneous risk minimization

J Liu, Z Hu, P Cui, B Li, Z Shen - International Conference on …, 2021 - proceedings.mlr.press
Machine learning algorithms with empirical risk minimization usually suffer from
poor generalization performance due to the greedy exploitation of correlations among the …