A systematic review on media bias detection: What is media bias, how it is expressed, and how to detect it
Media bias and the intolerance of media outlets and citizens toward opposing points of
view pose a threat to the proper functioning of democratic processes. In this respect, we …
Data selection for language models via importance resampling
Selecting a suitable pretraining dataset is crucial for both general-domain (e.g., GPT-3) and
domain-specific (e.g., Codex) language models (LMs). We formalize this problem as selecting …
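A minimal sketch of the importance-resampling idea this abstract describes: score each raw example under a target-domain model and a raw-pool model, then resample in proportion to the importance weights. The toy unigram features, the two distributions, and the Gumbel-top-k sampling step below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def unigram_logprob(counts, probs):
    # Log-likelihood of a bag-of-words count vector under a unigram model.
    return float(counts @ np.log(probs))

# Toy vocab of 5 "words"; hypothetical target and raw unigram distributions.
target_p = np.array([0.4, 0.3, 0.1, 0.1, 0.1])
raw_p    = np.array([0.2, 0.2, 0.2, 0.2, 0.2])

# 1,000 raw "documents", each a count vector drawn from the raw distribution.
raw_docs = rng.multinomial(20, raw_p, size=1000)

# Importance weight per document: log p_target(x) - log q_raw(x).
log_w = np.array([unigram_logprob(d, target_p) - unigram_logprob(d, raw_p)
                  for d in raw_docs])

# Gumbel-top-k: add Gumbel noise and take the top k, which samples k items
# without replacement with probability proportional to exp(log_w).
k = 100
selected = np.argsort(-(log_w + rng.gumbel(size=log_w.shape)))[:k]
print(raw_docs[selected].sum(0) / raw_docs[selected].sum())  # skews toward target_p
```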
Big bird: Transformers for longer sequences
Transformer-based models, such as BERT, have been among the most successful deep
learning models for NLP. Unfortunately, one of their core limitations is the quadratic …
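BigBird addresses that quadratic cost with a sparse attention pattern mixing local, global, and random connections. Below is a sketch of the mask pattern only; the paper's blocked GPU implementation differs, and the window, global, and random counts here are arbitrary toy values.

```python
import numpy as np

def bigbird_mask(n, window=3, n_global=2, n_random=2, seed=0):
    """Boolean attention mask in the spirit of BigBird: each query attends
    to a local window, a few global tokens, and a few random tokens, so
    the number of attended pairs grows linearly in n, not quadratically."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mask[i, lo:hi] = True                    # sliding-window neighbors
        mask[i, rng.choice(n, n_random)] = True  # random long-range links
    mask[:, :n_global] = True                    # everyone attends to globals
    mask[:n_global, :] = True                    # globals attend to everyone
    return mask

m = bigbird_mask(16)
print(m.sum(), "attended pairs vs", 16 * 16, "for full attention")
```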
Don't stop pretraining: Adapt language models to domains and tasks
Language models pretrained on text from a wide variety of sources form the foundation of
today's NLP. In light of the success of these broad-coverage models, we investigate whether …
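A rough sketch of the recipe this abstract investigates (domain-adaptive pretraining): take a broad-coverage checkpoint and continue masked-LM training on in-domain text before task fine-tuning. The Hugging Face Trainer calls are standard API, but the corpus path and hyperparameters are placeholders, not the paper's settings.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Continue masked-LM pretraining of a general checkpoint on in-domain text.
# "my_domain_corpus.txt" is a placeholder path, not from the paper.
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

ds = load_dataset("text", data_files={"train": "my_domain_corpus.txt"})["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments("dapt-ckpt", per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()  # the adapted checkpoint is then fine-tuned on the end task
```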
Longformer: The long-document transformer
Transformer-based models are unable to process long sequences due to their self-attention
operation, which scales quadratically with the sequence length. To address this limitation …
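The fix the abstract alludes to is attention restricted to a sliding window, so cost grows linearly with sequence length. A naive NumPy sketch of that pattern; Longformer additionally adds task-motivated global attention and a custom banded kernel, which this omits.

```python
import numpy as np

def sliding_window_attention(q, k, v, w):
    """Attention where each position attends only to the w tokens on each
    side, so time and memory scale as O(n * w) rather than O(n^2).
    q, k, v: (n, d) arrays. Written as a loop for clarity, not speed."""
    n, d = q.shape
    out = np.empty_like(v)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)
        probs = np.exp(scores - scores.max())   # numerically stable softmax
        out[i] = (probs / probs.sum()) @ v[lo:hi]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((1024, 64))
y = sliding_window_attention(x, x, x, w=64)  # ~1024*129 scores vs 1024**2
```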
Recurrent memory transformer
Transformer-based models show their effectiveness across multiple domains and tasks.
Self-attention allows combining information from all sequence elements into context-aware …
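The mechanism, roughly: special memory tokens are processed together with each segment, and their outputs are fed back in with the next segment, carrying state across a long input. A toy PyTorch sketch under that reading; the layer sizes and the single read/write memory block are simplifications of the paper's design.

```python
import torch
import torch.nn as nn

class RecurrentMemoryLM(nn.Module):
    """Segment-level recurrence via memory tokens: memory outputs of
    segment t become the memory inputs of segment t + 1."""
    def __init__(self, d_model=256, n_mem=8):
        super().__init__()
        self.mem0 = nn.Parameter(torch.randn(n_mem, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.n_mem = n_mem

    def forward(self, segments):          # segments: list of (B, L, d) tensors
        mem = self.mem0.expand(segments[0].size(0), -1, -1)
        outs = []
        for seg in segments:
            h = self.encoder(torch.cat([mem, seg], dim=1))
            mem, out = h[:, :self.n_mem], h[:, self.n_mem:]
            outs.append(out)              # mem now summarizes all past segments
        return torch.cat(outs, dim=1)

model = RecurrentMemoryLM()
long_input = torch.randn(2, 512, 256).chunk(4, dim=1)  # 4 segments of 128
print(model(list(long_input)).shape)                   # torch.Size([2, 512, 256])
```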
Muppet: Massive multi-task representations with pre-finetuning
We propose pre-finetuning, an additional large-scale learning stage between language
model pre-training and fine-tuning. Pre-finetuning is massively multi-task learning (around …
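A toy sketch of such a pre-finetuning stage: one shared encoder, one lightweight head per task, and batches drawn across tasks, with classification losses scaled by the log of the label count (roughly the paper's loss-scaling heuristic). The task mix, encoder, and schedule are stand-ins, not Muppet's actual setup of roughly 50 datasets on a pretrained encoder.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared encoder plus per-task heads for massively multi-task learning."""
    def __init__(self, d_model, task_num_labels):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Embedding(30522, d_model),
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True))
        self.heads = nn.ModuleDict({t: nn.Linear(d_model, n)
                                    for t, n in task_num_labels.items()})

    def forward(self, task, input_ids):
        h = self.encoder(input_ids).mean(dim=1)  # crude pooled representation
        return self.heads[task](h)

tasks = {"sentiment": 2, "nli": 3, "topic": 20}  # hypothetical task mix
model = MultiTaskModel(128, tasks)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(3):                            # toy training loop
    task = list(tasks)[step % len(tasks)]        # round-robin task sampling
    ids = torch.randint(0, 30522, (8, 32))
    labels = torch.randint(0, tasks[task], (8,))
    loss = loss_fn(model(task, ids), labels)
    loss = loss / torch.log(torch.tensor(float(tasks[task])))  # scale by log |labels|
    opt.zero_grad(); loss.backward(); opt.step()
```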
The media bias taxonomy: A systematic literature review on the forms and automated detection of media bias
The way the media presents events can significantly affect public perception, which in turn
can alter people's beliefs and views. Media bias describes a one-sided or polarizing …
We can detect your bias: Predicting the political ideology of news articles
We explore the task of predicting the leading political ideology or bias of news articles. First,
we collect and release a large dataset of 34,737 articles that were manually annotated for …
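Framed as supervised learning, this is multi-class article classification. A standard fine-tuning sketch, assuming left/center/right labels; the toy texts and the roberta-base checkpoint are placeholders, not the released dataset or the paper's model.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# 3-way ideology classification: 0 = left, 1 = center, 2 = right (assumed).
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base",
                                                           num_labels=3)
ds = Dataset.from_dict({
    "text": ["Article arguing for expanded social programs ...",
             "Wire report summarizing yesterday's committee vote ...",
             "Opinion piece calling for lower corporate taxes ..."],
    "label": [0, 1, 2],
}).map(lambda b: tok(b["text"], truncation=True), batched=True)

trainer = Trainer(model=model,
                  args=TrainingArguments("ideology-clf", num_train_epochs=1),
                  train_dataset=ds,
                  tokenizer=tok)   # enables padded batching of the examples
trainer.train()
```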
Black-box prompt learning for pre-trained language models
The increasing scale of general-purpose Pre-trained Language Models (PLMs) necessitates
the study of more efficient adaptation across different downstream tasks. In this paper, we …
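In the black-box setting the PLM is reachable only through queries (inputs in, scores out), so prompts must be optimized without gradients. Below is a random-local-search sketch of that constraint only; the paper instead learns a categorical distribution over discrete prompt tokens from query feedback, and score_fn here is a hypothetical stand-in for querying the model.

```python
import random

def black_box_prompt_search(score_fn, vocab, prompt_len=5, iters=200, seed=0):
    """Gradient-free search over discrete prompt tokens: mutate one slot at
    a time and keep the change if the black-box score improves."""
    rng = random.Random(seed)
    prompt = [rng.choice(vocab) for _ in range(prompt_len)]
    best = score_fn(prompt)
    for _ in range(iters):
        cand = prompt[:]
        cand[rng.randrange(prompt_len)] = rng.choice(vocab)  # mutate one slot
        s = score_fn(cand)
        if s > best:                       # greedy accept
            prompt, best = cand, s
    return prompt, best

# Hypothetical black box: rewards prompts containing task-relevant tokens.
vocab = ["review", "movie", "great", "terrible", "the", "classify", "sentiment"]
score = lambda p: sum(t in {"classify", "sentiment", "review"} for t in p)
print(black_box_prompt_search(score, vocab))
```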