Does a neural network really encode symbolic concepts?
Recently, a series of studies have tried to extract interactions between input variables
modeled by a DNN and define such interactions as concepts encoded by the DNN …
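The interactions referred to here are usually formalized as Harsanyi dividends. As a minimal sketch of that definition (assuming the standard notation of this line of work: N is the full set of input variables, x_T is the sample x with the variables outside T masked to baseline values, and v(·) is the scalar network output), the effect of a subset S of variables and the resulting decomposition of the output are

    I(S \mid x) = \sum_{T \subseteq S} (-1)^{|S| - |T|} \, v(x_T), \qquad v(x) = \sum_{S \subseteq N} I(S \mid x).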
Towards the difficulty for a deep neural network to learn concepts of different complexities
This paper theoretically explains the intuition that simple concepts are more likely to be
learned by deep neural networks (DNNs) than complex concepts. In fact, recent studies …
Defining and quantifying the emergence of sparse concepts in DNNs
This paper aims to illustrate the concept-emerging phenomenon in a trained DNN.
Specifically, we find that the inference score of a DNN can be disentangled into the effects of …
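As a hedged illustration of this kind of disentanglement, the toy sketch below recovers interaction effects on a hand-written model by inclusion-exclusion over masked outputs; the model, the zero baseline, and the brute-force subset enumeration are placeholder choices for illustration, not the paper's actual procedure.

```python
from itertools import chain, combinations

import numpy as np


def all_subsets(variables):
    """Enumerate every subset of a small index set (exponential: toy sizes only)."""
    variables = list(variables)
    return chain.from_iterable(combinations(variables, r) for r in range(len(variables) + 1))


def masked_output(model, x, baseline, keep):
    """Model output on x with every variable outside `keep` replaced by its baseline value."""
    x_masked = baseline.copy()
    idx = list(keep)
    x_masked[idx] = x[idx]
    return model(x_masked)


def interaction_effects(model, x, baseline):
    """Inclusion-exclusion (Harsanyi-style) effects I(S), so that sum_S I(S) equals model(x)."""
    n = x.shape[0]
    v = {frozenset(S): masked_output(model, x, baseline, S) for S in all_subsets(range(n))}
    return {
        frozenset(S): sum((-1) ** (len(S) - len(T)) * v[frozenset(T)] for T in all_subsets(S))
        for S in all_subsets(range(n))
    }


# Toy usage: a 3-variable "model" masked to a zero baseline; the effects sum back to the output.
model = lambda z: float(z[0] * z[1] + 0.5 * z[2])
x, baseline = np.array([1.0, 2.0, 3.0]), np.zeros(3)
effects = interaction_effects(model, x, baseline)
assert abs(sum(effects.values()) - model(x)) < 1e-9
```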
Attention-SA: Exploiting Model-approximated Data Semantics for Adversarial Attack
Adversarial defense of deep neural networks has gained significant attention, and there
have been active research efforts on model vulnerabilities for attacks, such as gradient …
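The gradient-based attacks mentioned in this snippet are a standard baseline rather than the Attention-SA method itself; as a point of reference, here is a minimal sketch of the fast gradient sign method (FGSM), with the linear toy model and epsilon chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn


def fgsm_attack(model, x, y, epsilon):
    """One-step gradient-sign perturbation that increases the classification loss on (x, y)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that locally maximizes the loss, then cut the graph.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


# Toy usage with an arbitrary linear classifier on 10-dimensional inputs.
model = nn.Linear(10, 3)
x, y = torch.randn(4, 10), torch.tensor([0, 1, 2, 0])
x_adv = fgsm_attack(model, x, y, epsilon=0.05)
```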
Data poisoning attacks against conformal prediction
Efficient and theoretically sound uncertainty quantification is crucial for building trust in
deep learning models. This has spurred a growing interest in conformal prediction (CP), a …
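For context, the prediction sets that such poisoning would corrupt are typically built with the split-conformal recipe sketched below; the score function (one minus the softmax probability of the true class), the random data, and the alpha value are placeholder assumptions, not the paper's threat model.

```python
import numpy as np


def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the nonconformity score s(x, y) = 1 - p_y(x)."""
    n = len(cal_labels)
    # Scores on a held-out calibration set.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(cal_scores, q_level, method="higher")
    # A label joins the prediction set whenever its score is at most the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]


# Toy usage with random "softmax" outputs over 5 classes.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)
cal_labels = rng.integers(0, 5, size=200)
test_probs = rng.dirichlet(np.ones(5), size=3)
prediction_sets = split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
```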
Towards the dynamics of a DNN learning symbolic interactions
This study proves the two-phase dynamics of a deep neural network (DNN) learning
interactions. Despite the long disappointing view of the faithfulness of post-hoc explanation …
Can we faithfully represent masked states to compute Shapley values on a DNN?
Masking some input variables of a deep neural network (DNN) and computing output
changes on the masked input sample represent a typical way to compute attributions of input …
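For context, the attribution referred to here is the Shapley value computed with a masking-based value function; assuming the usual notation (N the set of input variables and v(x_S) the output on the sample whose variables outside S are masked), the standard formula is

    \phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|! \, (|N| - |S| - 1)!}{|N|!} \left[ v(x_{S \cup \{i\}}) - v(x_S) \right],

which is why the faithfulness of the masked states x_S directly determines the faithfulness of the resulting attributions.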
Where we have arrived in proving the emergence of sparse symbolic concepts in AI models
This study aims to prove the emergence of symbolic concepts (or more precisely, sparse
primitive inference patterns) in well-trained deep neural networks (DNNs). Specifically, we …
Can the Inference Logic of Large Language Models be Disentangled into Symbolic Concepts?
In this paper, we explain the inference logic of large language models (LLMs) as a set of
symbolic concepts. Many recent studies have discovered that traditional DNNs usually …
Why pre-training is beneficial for downstream classification tasks?
X Jiang, X Cheng, Z Li - arXiv preprint arXiv:2410.08455, 2024 - arxiv.org
Pre-training has exhibited notable benefits to downstream tasks by boosting accuracy and
speeding up convergence, but the exact reasons for these benefits still remain unclear. To …