A survey on data selection for language models
A major factor in the recent success of large language models is the use of enormous and
ever-growing text datasets for unsupervised pre-training. However, naively training a model …
A survey of large language models
Language is essentially a complex, intricate system of human expressions governed by
grammatical rules. It poses a significant challenge to develop capable AI algorithms for …
C-Pack: Packed resources for general Chinese embeddings
We introduce C-Pack, a package of resources that significantly advances the field of general
text embeddings for Chinese. C-Pack includes three critical resources. 1) C-MTP is a …
Phi-3 technical report: A highly capable language model locally on your phone
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion
tokens, whose overall performance, as measured by both academic benchmarks and …
Leveraging large language models for integrated satellite-aerial-terrestrial networks: recent advances and future directions
Integrated satellite, aerial, and terrestrial networks (ISATNs) represent a sophisticated
convergence of diverse communication technologies to ensure seamless connectivity …
RWKV: Reinventing RNNs for the transformer era
Transformers have revolutionized almost all natural language processing (NLP) tasks but
suffer from memory and computational complexity that scales quadratically with sequence …
Textbooks are all you need
We introduce phi-1, a new large language model for code, with significantly smaller size
than competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained …
Crosslingual generalization through multitask finetuning
Multitask prompted finetuning (MTF) has been shown to help large language models
generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused …
Aligning large language models with human: A survey
Large Language Models (LLMs) trained on extensive textual corpora have emerged as
leading solutions for a broad array of Natural Language Processing (NLP) tasks. Despite …
Preference ranking optimization for human alignment
Large language models (LLMs) often contain misleading content, emphasizing the need to
align them with human values to ensure secure AI systems. Reinforcement learning from …