A survey of GPT-3 family large language models including ChatGPT and GPT-4
KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …
C-pack: Packed resources for general chinese embeddings
We introduce C-Pack, a package of resources that significantly advances the field of general
text embeddings for Chinese. C-Pack includes three critical resources. 1) C-MTP is a …
Large language models for information retrieval: A survey
As a primary means of information acquisition, information retrieval (IR) systems, such as
search engines, have integrated themselves into our daily lives. These systems also serve …
Evaluating large language models at evaluating instruction following
As research in large language models (LLMs) continues to accelerate, LLM-based
evaluation has emerged as a scalable and cost-effective alternative to human evaluations …
Improving text embeddings with large language models
In this paper, we introduce a novel and simple method for obtaining high-quality text
embeddings using only synthetic data and less than 1k training steps. Unlike existing …
Replug: Retrieval-augmented black-box language models
We introduce REPLUG, a retrieval-augmented language modeling framework that treats the
language model (LM) as a black box and augments it with a tuneable retrieval model. Unlike …
Exploring the benefits of training expert language models over instruction tuning
Abstract Recently, Language Models (LMs) instruction-tuned on multiple tasks, also known
as multitask-prompted fine-tuning (MT), have shown capabilities to generalize to unseen …
Representation learning with large language models for recommendation
Recommender systems have seen significant advancements with the influence of deep
learning and graph neural networks, particularly in capturing complex user-item …
Augmenting interpretable models with large language models during training
Recent large language models (LLMs), such as ChatGPT, have demonstrated remarkable
prediction performance for a growing array of tasks. However, their proliferation into high …
Uniir: Training and benchmarking universal multimodal information retrievers
Existing information retrieval (IR) models often assume a homogeneous format, limiting their
applicability to diverse user needs, such as searching for images with text descriptions …