Analysis methods in neural language processing: A survey
The field of natural language processing has seen impressive progress in recent years, with
neural network models replacing many of the traditional systems. A plethora of new models …
Pretraining with artificial language: Studying transferable knowledge in language models
We investigate what kind of structural knowledge learned in neural network encoders is
transferable to processing natural language. We design artificial languages with structural …
What do end-to-end speech models learn about speaker, language and channel information? A layer-wise and neuron-level analysis
Deep neural networks are inherently opaque and challenging to interpret. Unlike
hand-crafted feature-based models, we struggle to comprehend the concepts learned and how …
On evaluating the generalization of LSTM models in formal languages
Recurrent Neural Networks (RNNs) are theoretically Turing-complete and established
themselves as a dominant model for language processing. Yet, there still remains an …
LSTMs compose (and learn) bottom-up
Recent work in NLP shows that LSTM language models capture hierarchical structure in
language data. In contrast to existing work, we consider the learning process that …
Diversity as a by-product: Goal-oriented language generation leads to linguistic variation
The ability for variation in language use is necessary for speakers to achieve their
conversational goals, for instance when referring to objects in visual environments. We …
Automatically Extracting Challenge Sets for Non-local Phenomena in Neural Machine Translation
We show that the state-of-the-art Transformer Machine Translation (MT) model is not biased
towards monotonic reordering (unlike previous recurrent neural network models), but that …
Analogical inference from distributional structure: What recurrent neural networks can tell us about word learning
PA Huebner, JA Willits - Machine Learning with Applications, 2023 - Elsevier
One proposal that can explain the remarkable pace of word learning in young children is
that they leverage the language-internal distributional similarity of familiar and novel words …
Language models learn POS first
A glut of recent research shows that language models capture linguistic structure. Linzen et al. (2016)
found that LSTM-based language models may encode syntactic information …
How LSTM encodes syntax: Exploring context vectors and semi-quantization on natural text
C Shibata, K Uchiumi, D Mochihashi - arXiv preprint arXiv:2010.00363, 2020 - arxiv.org
Long Short-Term Memory recurrent neural network (LSTM) is widely used and known to
capture informative long-term syntactic dependencies. However, how such information is …