Deep learning in sentiment analysis: Recent architectures
T Abdullah, A Ahmet - ACM Computing Surveys, 2022 - dl.acm.org
Humans are increasingly integrated with devices that enable the collection of vast
unstructured opinionated data. Accurately analysing subjective information from this data is …
UDALM: Unsupervised domain adaptation through language modeling
In this work we explore Unsupervised Domain Adaptation (UDA) of pretrained language
models for downstream tasks. We introduce UDALM, a fine-tuning procedure, using a mixed …
Cross-domain data augmentation with domain-adaptive language modeling for aspect-based sentiment analysis
Cross-domain Aspect-Based Sentiment Analysis (ABSA) aims to leverage the
useful knowledge from a source domain to identify aspect-sentiment pairs in sentences from …
Cross-domain review generation for aspect-based sentiment analysis
Supervised learning methods have proven to be effective for Aspect-Based Sentiment
Analysis (ABSA). However, the lack of fine-grained labeled data hinders their effectiveness in …
Improving self-training for cross-lingual named entity recognition with contrastive and prototype learning
In cross-lingual named entity recognition (NER), self-training is commonly used to bridge the
linguistic gap by training on pseudo-labeled target-language data. However, due to sub …
On the domain adaptation and generalization of pretrained language models: A survey
Recent advances in NLP are brought by a range of large-scale pretrained language models
(PLMs). These PLMs have brought significant performance gains for a range of NLP tasks …
AdaSL: An unsupervised domain adaptation framework for Arabic multi-dialectal sequence labeling
Dialectal Arabic (DA) refers to varieties of everyday spoken languages in the Arab world.
These dialects differ according to the country and region of the speaker, and their textual …
Adapt in contexts: Retrieval-augmented domain adaptation via in-context learning
Large language models (LLMs) have showcased their capability with few-shot inference
known as in-context learning. However, in-domain demonstrations are not always readily …
BERTologiCoMix: How does code-mixing interact with multilingual BERT?
Models such as mBERT and XLMR have shown success in solving Code-Mixed
NLP tasks even though they were not exposed to such text during pretraining. Code-Mixed …
PDALN: Progressive domain adaptation over a pre-trained model for low-resource cross-domain named entity recognition
Cross-domain Named Entity Recognition (NER) transfers the NER knowledge from
high-resource domains to the low-resource target domain. Due to limited labeled resources …