Levels of AGI for Operationalizing Progress on the Path to AGI

MR Morris, J Sohl-Dickstein, N Fiedel… - arXiv preprint arXiv…, 2023 - foreveryscale.com
Artificial General Intelligence (AGI) is an important and sometimes controversial concept in
computing research, used to describe an AI system that is at least as capable as a human at …

Position: Levels of AGI for operationalizing progress on the path to AGI

MR Morris, J Sohl-Dickstein, N Fiedel… - International Conference on Machine Learning, 2024 - openreview.net
We propose a framework for classifying the capabilities and behavior of Artificial General
Intelligence (AGI) models and their precursors. This framework introduces levels of AGI …

Spatial-frequency channels, shape bias, and adversarial robustness

A Subramanian, E Sizikova… - Advances in Neural Information Processing Systems, 2023 - proceedings.neurips.cc
What spatial frequency information do humans and neural networks use to recognize
objects? In neuroscience, critical band masking is an established tool that can reveal the …
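
The critical band masking mentioned here works by adding noise confined to one spatial-frequency band and measuring how strongly it disrupts recognition. A minimal sketch of such a masking stimulus, assuming grayscale images in [0, 1] and a one-octave band (both assumptions, not details from the paper):

```python
import numpy as np

def bandpass_noise(shape, low_cpi, high_cpi, rms=0.1, seed=None):
    """White noise filtered to a spatial-frequency band (cycles per image)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(rng.standard_normal(shape))
    fy = np.fft.fftfreq(shape[0]) * shape[0]          # cycles per image
    fx = np.fft.fftfreq(shape[1]) * shape[1]
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    band = (radius >= low_cpi) & (radius < high_cpi)  # zero everything off-band
    noise = np.fft.ifft2(spectrum * band).real
    return noise * (rms / noise.std())                # set the noise contrast

# One-octave mask centred at 8 cycles/image:
# masked = np.clip(img + bandpass_noise(img.shape, 8 / 2**0.5, 8 * 2**0.5), 0, 1)
```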

Adversarial robustness limits via scaling-law and human-alignment studies

BR Bartoldson, J Diffenderfer, K Parasyris… - arXiv preprint arXiv…, 2024 - arxiv.org
This paper revisits the simple, long-studied, yet still unsolved problem of making image
classifiers robust to imperceptible perturbations. Taking CIFAR10 as an example, SOTA …
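
"Imperceptible perturbations" in this setting are attacks constrained to a small L-infinity ball, with eps = 8/255 the usual CIFAR10 budget. A minimal PGD sketch of that threat model in PyTorch; the model, labels, and hyperparameters are placeholders, not the paper's setup:

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: maximise the loss inside an eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the ball
            x_adv = x_adv.clamp(0, 1)                 # keep pixels valid
    return x_adv.detach()
```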

Discrete approximations of Gaussian smoothing and Gaussian derivatives

T Lindeberg - Journal of Mathematical Imaging and Vision, 2024 - Springer
This paper develops an in-depth treatment concerning the problem of approximating the
Gaussian smoothing and the Gaussian derivative computations in scale-space theory for …
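
The discrete analogue of the Gaussian that this line of work studies is T(n, t) = e^{-t} I_n(t), with I_n the modified Bessel function of integer order; SciPy's exponentially scaled Bessel function evaluates it directly. A minimal sketch contrasting it with the naively sampled Gaussian (1-D integer-grid kernels assumed; the paper also treats derivatives and approximation error):

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function

def discrete_gaussian_kernel(t, radius):
    """Discrete analogue of the Gaussian: T(n, t) = exp(-t) * I_n(t)."""
    n = np.arange(-radius, radius + 1)
    return ive(n, t)  # ive(n, t) == exp(-t) * iv(n, t) for t > 0

def sampled_gaussian_kernel(t, radius):
    """Sampled and renormalised continuous Gaussian with variance t."""
    n = np.arange(-radius, radius + 1)
    g = np.exp(-n**2 / (2.0 * t))
    return g / g.sum()

# The kernels agree closely at coarse scales and diverge at fine ones:
# print(discrete_gaussian_kernel(0.5, 3))
# print(sampled_gaussian_kernel(0.5, 3))
```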

Spatial-Frequency Discriminability for Revealing Adversarial Perturbations

C Wang, S Qi, Z Huang, Y Zhang, R Lan… - IEEE Transactions on Circuits and Systems for Video Technology, 2024 - ieeexplore.ieee.org
The vulnerability of deep neural networks to adversarial perturbations has been widely
recognized in the computer vision community. From a security perspective, it poses a critical …
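
A simple statistic in the spirit of such frequency-domain detectors is the fraction of spectral energy above a radial cutoff, since many attacks concentrate energy at high spatial frequencies. This is an illustrative score only, not the method of the paper:

```python
import numpy as np

def high_freq_energy_ratio(img, cutoff=0.25):
    """Fraction of spectral energy above a normalised radial cutoff."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # 0 at DC
    return spectrum[r > cutoff].sum() / spectrum.sum()

# Flag inputs whose score exceeds a threshold calibrated on clean images:
# is_suspicious = high_freq_energy_ratio(x) > tau
```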

NeuroAI for AI Safety

P Mineault, N Zanichelli, JZ Peng, A Arkhipov… - arXiv preprint arXiv…, 2024 - arxiv.org
As AI systems become increasingly powerful, the need for safe AI has become more
pressing. Humans are an attractive model for AI safety: as the only known agents capable of …

Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models

Y Chaudhary, J Penn - Harvard Data Science Review, 2024 - hdsr.mitpress.mit.edu
The rapid proliferation of large language models (LLMs) invites the possibility of a new
marketplace for behavioral and psychological data that signals intent. This brief article …

Robust Detection of Out-of-Distribution Data

J Bitterwolf - 2025 - tobias-lib.ub.uni-tuebingen.de
Deep neural networks, trained on large amounts of data, have become a highly successful
tool for a variety of cognitive tasks. In many of those, they exceed human performance in …
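
A common starting point in this literature is the maximum-softmax-probability baseline of Hendrycks and Gimpel: score each input by the classifier's top softmax probability and flag low-confidence inputs as out-of-distribution. A minimal PyTorch sketch; the threshold tau is a placeholder to be calibrated on held-out in-distribution data:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    """Maximum softmax probability: higher means more in-distribution."""
    return F.softmax(model(x), dim=-1).max(dim=-1).values

# Flag inputs whose score falls below the calibrated threshold:
# is_ood = msp_score(model, batch) < tau
```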

Rethinking Adversarial Examples

Y Jabary - 2025 - sueszli.github.io
Traditionally, adversarial examples have been defined as imperceptible perturbations that
fool deep neural networks. This thesis challenges this view by examining unrestricted …