Knowledge graphs
In this article, we provide a comprehensive introduction to knowledge graphs, which have
recently garnered significant attention from both industry and academia in scenarios that …
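As a concrete illustration of the structure this article introduces, here is a minimal sketch of a knowledge graph as a set of (head, relation, tail) triples with a simple one-hop index; the entities and relations are invented for the example, not taken from the article.

```python
# Minimal sketch: a knowledge graph as (head, relation, tail) triples.
# Entity and relation names are illustrative only.
from collections import defaultdict

triples = [
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "member_of", "European Union"),
    ("Berlin", "located_in", "Germany"),
]

# Index outgoing edges by head entity for simple one-hop queries.
out_edges = defaultdict(list)
for head, relation, tail in triples:
    out_edges[head].append((relation, tail))

print(out_edges["Berlin"])
# [('capital_of', 'Germany'), ('located_in', 'Germany')]
```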
Scientometric review of artificial intelligence for operations & maintenance of wind turbines: The past, present and future
J Chatterjee, N Dethlefs - Renewable and Sustainable Energy Reviews, 2021 - Elsevier
Wind energy has emerged as a highly promising source of renewable energy in recent
times. However, wind turbines regularly suffer from operational inconsistencies, leading to …
Unifying large language models and knowledge graphs: A roadmap
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the
field of natural language processing and artificial intelligence, due to their emergent ability …
Holistic evaluation of language models
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …
Finetuned language models are zero-shot learners
This paper explores a simple method for improving the zero-shot learning abilities of
language models. We show that instruction tuning--finetuning language models on a …
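A hedged sketch of the data side of instruction tuning as the abstract describes it: supervised examples from many tasks are rewritten as natural-language instructions before finetuning, so the model learns to follow unseen instructions zero-shot. The task names and templates below are illustrative, not the paper's actual templates.

```python
# Rewrite supervised examples as natural-language instructions.
# Templates and task names are invented for illustration.
def to_instruction(task, example):
    templates = {
        "nli": 'Premise: "{premise}" Hypothesis: "{hypothesis}" '
               "Does the premise entail the hypothesis? Answer yes, no, or maybe.",
        "sentiment": 'Review: "{text}" Is this review positive or negative?',
    }
    prompt = templates[task].format(**example)
    return {"input": prompt, "target": example["label"]}

print(to_instruction("sentiment", {"text": "A delightful film.", "label": "positive"}))
# {'input': 'Review: "A delightful film." Is this review positive or negative?',
#  'target': 'positive'}
```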
LoRA: Low-rank adaptation of large language models
The dominant paradigm of natural language processing consists of large-scale pre-training
on general domain data and adaptation to particular tasks or domains. As we pre-train larger …
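The snippet is truncated, but the core mechanism of LoRA is well established: freeze the pretrained weight matrix and learn only a rank-r update, so the layer computes Wx + (alpha/r)·B(Ax). A minimal PyTorch sketch with illustrative hyperparameters (r, alpha), not values from the paper:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)            # pretrained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero-init
        self.scale = alpha / r

    def forward(self, x):
        delta = (x @ self.lora_A.T) @ self.lora_B.T   # rank-r update B(Ax)
        return self.base(x) + self.scale * delta

layer = LoRALinear(768, 768)
y = layer(torch.randn(2, 768))   # only lora_A / lora_B receive gradients
```

Because lora_B is zero-initialized, the adapted layer starts out identical to the frozen pretrained layer, and only the two small factors are trained.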
DyLoRA: Parameter-efficient tuning of pre-trained models using dynamic search-free low-rank adaptation
With the ever-growing size of pretrained models (PMs), fine-tuning them has become more
expensive and resource-hungry. As a remedy, low-rank adapters (LoRA) keep the main …
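A hedged sketch of the dynamic-rank idea DyLoRA adds on top of LoRA: train the low-rank factors so that any leading sub-block also works, by sampling a rank each step and using only the first b rows/columns, which avoids searching for a single best rank. Details (nested ordering, scaling) are simplified here.

```python
import random
import torch

def dynamic_lora_delta(x, lora_A, lora_B, max_rank, scale):
    b = random.randint(1, max_rank)          # rank sampled for this step
    A_b = lora_A[:b, :]                      # first b rows of A
    B_b = lora_B[:, :b]                      # first b columns of B
    return scale * (x @ A_b.T) @ B_b.T

A = torch.randn(8, 768) * 0.01               # (max_rank, in_features)
B = torch.zeros(768, 8)                      # (out_features, max_rank)
delta = dynamic_lora_delta(torch.randn(2, 768), A, B, max_rank=8, scale=2.0)
```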
A survey of data augmentation approaches for NLP
Data augmentation has recently seen increased interest in NLP due to more work in low-
resource domains, new tasks, and the popularity of large-scale neural networks that require …
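As a concrete instance of the token-level methods such a survey covers, here is a sketch of two classic EDA-style augmentations, random deletion and random swap; parameter values are illustrative.

```python
import random

def random_deletion(tokens, p=0.1):
    # Drop each token with probability p; never return an empty sentence.
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]

def random_swap(tokens, n_swaps=1):
    # Swap two randomly chosen positions n_swaps times.
    tokens = tokens[:]
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

print(random_swap("data augmentation helps low resource nlp".split()))
```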
Prefix-tuning: Optimizing continuous prompts for generation
Fine-tuning is the de facto way to leverage large pretrained language models to perform
downstream tasks. However, it modifies all the language model parameters and therefore …
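A hedged sketch of the mechanism the abstract contrasts with full fine-tuning: the pretrained model stays frozen and only a short sequence of continuous prefix vectors is learned. For simplicity this sketch prepends the prefix at the embedding layer (the lighter prompt-tuning variant); the paper itself prepends trainable activations at every attention layer.

```python
import torch
import torch.nn as nn

class PrefixPrompt(nn.Module):
    def __init__(self, prefix_len=10, hidden=768):
        super().__init__()
        # The only trainable parameters: prefix_len continuous vectors.
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden) * 0.02)

    def forward(self, input_embeds):         # (batch, seq, hidden)
        batch = input_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)  # (batch, prefix+seq, hidden)

prompt = PrefixPrompt()
out = prompt(torch.randn(2, 16, 768))        # the frozen LM consumes `out`
```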
OpenPrompt: An open-source framework for prompt-learning
Prompt-learning has become a new paradigm in modern natural language processing,
which directly adapts pre-trained language models (PLMs) to cloze-style prediction …
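To make "cloze-style prediction" concrete: a classification task is recast as filling in a masked label word. The sketch below uses the Hugging Face fill-mask pipeline rather than OpenPrompt's own API; the model name and label words are assumptions chosen for illustration.

```python
# Cloze-style sentiment classification: the prompt turns the task into
# predicting a label word at the [MASK] position of a masked LM.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
prompt = "The movie was a joy from start to finish. It was [MASK]."
for pred in fill(prompt, targets=["great", "terrible"]):
    print(pred["token_str"], round(pred["score"], 4))
```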