Survey on factuality in large language models: Knowledge, retrieval and domain-specificity
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …
A comprehensive review of synthetic data generation in smart farming by using variational autoencoder and generative adversarial network
In this study, we propose the use of Variational Autoencoders (VAEs) and Generative
Adversarial Networks (GANs) to generate synthetic data for crop recommendation (CR). CR …
GLM-130B: An open bilingual pre-trained model
We introduce GLM-130B, a bilingual (English and Chinese) pre-trained language model
with 130 billion parameters. It is an attempt to open-source a 100B-scale model at least as …
Retrieval-augmented generation for AI-generated content: A survey
The development of Artificial Intelligence Generated Content (AIGC) has been facilitated by
advancements in model algorithms, scalable foundation model architectures, and the …
Deep bidirectional language-knowledge graph pretraining
Pretraining a language model (LM) on text has been shown to help various downstream
NLP tasks. Recent works show that a knowledge graph (KG) can complement text data …
On the opportunities and challenges of foundation models for geospatial artificial intelligence
Large pre-trained models, also known as foundation models (FMs), are trained in a task-
agnostic manner on large-scale data and can be adapted to a wide range of downstream …
Can knowledge graphs reduce hallucinations in LLMs?: A survey
Contemporary LLMs are prone to producing hallucinations, stemming mainly from the
knowledge gaps within the models. To address this critical limitation, researchers employ …
Retrieval-augmented multimodal language modeling
Recent multimodal models such as DALL-E and CM3 have achieved remarkable progress
in text-to-image and image-to-text generation. However, these models store all learned …
Large language models and knowledge graphs: Opportunities and challenges
Large Language Models (LLMs) have taken Knowledge Representation--and the world--by
storm. This inflection point marks a shift from explicit knowledge representation to a renewed …
LIFT: Language-interfaced fine-tuning for non-language machine learning tasks
Fine-tuning pretrained language models (LMs) without making any architectural changes
has become a norm for learning various language downstream tasks. However, for non …