A survey on fairness in large language models
Large Language Models (LLMs) have shown powerful performance and development
prospects and are widely deployed in the real world. However, LLMs can capture social …
The life cycle of large language models in education: A framework for understanding sources of bias
Large language models (LLMs) are increasingly adopted in educational contexts to provide
personalized support to students and teachers. The unprecedented capacity of LLM‐based …
Palm 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin… - arXiv preprint arXiv …, 2023 - arxiv.org
We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and
reasoning capabilities and is more compute-efficient than its predecessor PaLM. PaLM 2 is …
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras… - arXiv preprint arXiv …, 2022 - arxiv.org
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …
Biases in large language models: origins, inventory, and discussion
In this article, we introduce and discuss the pervasive issue of bias in the large language
models that are currently at the core of mainstream approaches to Natural Language …
From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair NLP models
S Feng, CY Park, Y Liu, Y Tsvetkov - arXiv preprint arXiv:2305.08283, 2023 - arxiv.org
Language models (LMs) are pretrained on diverse data sources, including news, discussion
forums, books, and online encyclopedias. A significant portion of this data includes opinions …
On second thought, let's not think step by step! Bias and toxicity in zero-shot reasoning
O Shaikh, H Zhang, W Held, M Bernstein… - arXiv preprint arXiv …, 2022 - arxiv.org
Generating a Chain of Thought (CoT) has been shown to consistently improve large
language model (LLM) performance on a wide range of NLP tasks. However, prior work has …
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Evaluating the social impact of generative AI systems in systems and society
I Solaiman, Z Talat, W Agnew, L Ahmad… - arXiv preprint arXiv …, 2023 - arxiv.org
Generative AI systems across modalities, ranging from text (including code), image, audio,
and video, have broad social impacts, but there is no official standard for means of …
Causal inference in natural language processing: Estimation, prediction, interpretation and beyond
A fundamental goal of scientific research is to learn about causal relationships. However,
despite its critical role in the life and social sciences, causality has not had the same …