Knowledge graphs

A Hogan, E Blomqvist, M Cochez, C d'Amato… - ACM Computing …, 2021 - dl.acm.org
In this article, we provide a comprehensive introduction to knowledge graphs, which have
recently garnered significant attention from both industry and academia in scenarios that …
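
For readers new to the topic, a minimal sketch of what the survey means by a knowledge graph: a set of (subject, predicate, object) triples over entities and relations. The entities, relations, and lookup helper below are illustrative, not taken from the article.

```python
# Illustrative only: a toy knowledge graph as (subject, predicate, object)
# triples, with a trivial lookup over it. Names are made up for the example.
triples = {
    ("Santiago", "capital_of", "Chile"),
    ("Chile", "located_in", "South America"),
    ("Santiago", "type", "City"),
}

def objects(subject: str, predicate: str) -> set[str]:
    """Return every object linked to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("Santiago", "capital_of"))  # {'Chile'}
```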

Scientometric review of artificial intelligence for operations & maintenance of wind turbines: The past, present and future

J Chatterjee, N Dethlefs - Renewable and Sustainable Energy Reviews, 2021 - Elsevier
Wind energy has emerged as a highly promising source of renewable energy in recent
times. However, wind turbines regularly suffer from operational inconsistencies, leading to …

Unifying large language models and knowledge graphs: A roadmap

S Pan, L Luo, Y Wang, C Chen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the
field of natural language processing and artificial intelligence, due to their emergent ability …

Holistic evaluation of language models

P Liang, R Bommasani, T Lee, D Tsipras… - arXiv preprint arXiv …, 2022 - arxiv.org
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …

Finetuned language models are zero-shot learners

J Wei, M Bosma, VY Zhao, K Guu, AW Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper explores a simple method for improving the zero-shot learning abilities of
language models. We show that instruction tuning--finetuning language models on a …
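
The core recipe the abstract alludes to is casting ordinary supervised examples into natural-language instruction/response pairs before fine-tuning. A hedged sketch, with invented templates and task names (not the paper's own task collection):

```python
# Illustrative sketch only: instruction tuning reformats supervised examples
# into instruction/response pairs, which are then used for standard
# supervised fine-tuning. Templates and task names here are invented.
def to_instruction_example(task: str, text: str, label: str) -> dict:
    templates = {
        "sentiment": "Is the sentiment of the following review positive or negative?\n\n{text}",
        "nli": "Does the premise entail the hypothesis?\n\n{text}",
    }
    return {"prompt": templates[task].format(text=text), "response": label}

batch = [
    to_instruction_example("sentiment", "The movie was a delight.", "positive"),
    to_instruction_example("nli", "Premise: A dog runs. Hypothesis: An animal moves.", "yes"),
]
# Each pair would then be tokenized and fed to the usual fine-tuning loop.
```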

LoRA: Low-rank adaptation of large language models

EJ Hu, Y Shen, P Wallis, Z Allen-Zhu, Y Li, S Wang… - ICLR, 2022 - arxiv.org
The dominant paradigm of natural language processing consists of large-scale pre-training
on general domain data and adaptation to particular tasks or domains. As we pre-train larger …
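
The method keeps the pretrained weight matrix frozen and learns only a low-rank update added to its output. A minimal PyTorch sketch of that idea, assuming a linear layer and arbitrary rank/scaling values (not the paper's experimental settings):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer: the pretrained weight stays frozen and only
    the low-rank factors A and B are trained (hyperparameters are arbitrary)."""
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # freeze pretrained weights
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768)
out = layer(torch.randn(2, 768))  # only A and B receive gradients during training
```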

DyLoRA: Parameter efficient tuning of pre-trained models using dynamic search-free low-rank adaptation

M Valipour, M Rezagholizadeh, I Kobyzev… - arXiv preprint arXiv …, 2022 - arxiv.org
With the ever-growing size of pretrained models (PMs), fine-tuning them has become more
expensive and resource-hungry. As a remedy, low-rank adapters (LoRA) keep the main …
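
As the abstract suggests, the dynamic variant trains the adapter so it remains usable at more than one rank. A hedged sketch of one reading of that idea, sampling a rank per step and truncating the factors; this is an illustration, not the authors' implementation:

```python
import random
import torch

# Illustrative reading of the dynamic-rank idea (not the authors' code):
# at each step a rank b <= r_max is sampled and only the leading b components
# of the low-rank factors are used, so the adapter works at several ranks.
r_max, d = 8, 768
A = torch.nn.Parameter(torch.randn(r_max, d) * 0.01)
B = torch.nn.Parameter(torch.zeros(d, r_max))

def dynamic_lora_update(x: torch.Tensor) -> torch.Tensor:
    b = random.randint(1, r_max)          # rank sampled for this training step
    return x @ A[:b].T @ B[:, :b].T       # truncate both factors to rank b

delta = dynamic_lora_update(torch.randn(2, d))  # shape (2, 768)
```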

A survey of data augmentation approaches for NLP

SY Feng, V Gangal, J Wei, S Chandar… - arXiv preprint arXiv …, 2021 - arxiv.org
Data augmentation has recently seen increased interest in NLP due to more work in low-
resource domains, new tasks, and the popularity of large-scale neural networks that require …
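
One of the simplest families such surveys cover is token-level perturbation. A small illustrative sketch of random word deletion (the deletion probability and example text are invented, not drawn from the paper):

```python
import random

# Minimal sketch of one token-level augmentation: random word deletion.
def random_deletion(text: str, p: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)
    tokens = text.split()
    kept = [t for t in tokens if rng.random() > p]
    # Keep at least one token so the augmented example is never empty.
    return " ".join(kept) if kept else tokens[rng.randrange(len(tokens))]

print(random_deletion("data augmentation creates extra training examples from existing ones"))
```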

Prefix-tuning: Optimizing continuous prompts for generation

XL Li, P Liang - arXiv preprint arXiv:2101.00190, 2021 - arxiv.org
Fine-tuning is the de facto way to leverage large pretrained language models to perform
downstream tasks. However, it modifies all the language model parameters and therefore …
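
Prefix-tuning freezes the language model and optimizes only a short sequence of continuous vectors. The sketch below prepends learnable vectors to the input embeddings of a frozen encoder, which is a simplification (the paper conditions activations at every layer); sizes and the toy encoder are arbitrary.

```python
import torch
import torch.nn as nn

class PrefixTunedEncoder(nn.Module):
    """Illustrative sketch: the encoder is frozen and only a small matrix of
    continuous prefix embeddings is optimized (sizes are arbitrary)."""
    def __init__(self, encoder: nn.Module, hidden: int = 768, prefix_len: int = 10):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad_(False)               # freeze all model parameters
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden)
        prefix = self.prefix.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        return self.encoder(torch.cat([prefix, token_embeds], dim=1))

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True), num_layers=2)
model = PrefixTunedEncoder(encoder)
out = model(torch.randn(2, 16, 768))  # (2, 26, 768): 10 prefix positions + 16 tokens
```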

OpenPrompt: An open-source framework for prompt-learning

N Ding, S Hu, W Zhao, Y Chen, Z Liu, HT Zheng… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt-learning has become a new paradigm in modern natural language processing,
which directly adapts pre-trained language models (PLMs) to cloze-style prediction …
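
Cloze-style prompt-learning fills a masked slot in a template and maps the predicted word back to a class label via a verbalizer. The sketch below uses the Hugging Face fill-mask pipeline rather than OpenPrompt's own API; the template and label words are invented for illustration.

```python
from transformers import pipeline

# Generic cloze-style prompting sketch (not the OpenPrompt API itself).
fill = pipeline("fill-mask", model="bert-base-uncased")

template = "The movie was absolutely wonderful. Overall it was [MASK]."
label_words = {"great": "positive", "terrible": "negative"}  # verbalizer

# Score only the verbalizer words and map the best one back to a class label.
results = fill(template, targets=list(label_words))
best = max(results, key=lambda r: r["score"])
print(label_words[best["token_str"]])  # expected: 'positive'
```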