Qwen2.5 technical report

A Yang, B Yang, B Zhang, B Hui, B Zheng, B Yu… - arXiv preprint arXiv …, 2024 - arxiv.org
In this report, we introduce Qwen2.5, a comprehensive series of large language models
(LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has …

Towards scalable automated alignment of LLMs: A survey

B Cao, K Lu, X Lu, J Chen, M Ren, H Xiang… - arXiv preprint arXiv …, 2024 - arxiv.org
Alignment is the most critical step in building large language models (LLMs) that meet
human needs. With the rapid development of LLMs gradually surpassing human …

Wordflow: Social prompt engineering for large language models

ZJ Wang, A Chakravarthy, D Munechika… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) require well-crafted prompts for effective use. Prompt
engineering, the process of designing prompts, is challenging, particularly for non-experts …

Synthetic continued pretraining

Z Yang, N Band, S Li, E Candes… - arXiv preprint arXiv …, 2024 - arxiv.org
Pretraining on large-scale, unstructured internet text enables language models to acquire a
significant amount of world knowledge. However, this knowledge acquisition is data …

MeMemo: On-device Retrieval Augmentation for Private and Personalized Text Generation

ZJ Wang, DH Chau - Proceedings of the 47th International ACM SIGIR …, 2024 - dl.acm.org
Retrieval-augmented text generation (RAG) addresses the common limitations of large
language models (LLMs), such as hallucination, by retrieving information from an updatable …
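The RAG pattern this snippet refers to is straightforward to sketch: keep documents in an updatable store, retrieve the entries most similar to a query, and ground the LLM's prompt in them. The Python below is a minimal illustrative sketch of that general pattern only; the toy bag-of-words embedding, the `DocStore` class, and `build_prompt` are assumed names for demonstration, not MeMemo's actual API.

```python
# Minimal sketch of the general RAG pattern: an updatable document store,
# similarity-based retrieval, and a prompt grounded in retrieved text.
# Real systems use dense neural embeddings; the bag-of-words "embedding"
# here is a stand-in so the example runs with the standard library alone.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words vector; a placeholder for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class DocStore:
    """Updatable store: documents can be added (or re-indexed) at any time."""
    def __init__(self) -> None:
        self.docs: list[str] = []

    def add(self, doc: str) -> None:
        self.docs.append(doc)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        return sorted(self.docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(store: DocStore, question: str) -> str:
    # Grounding the model in retrieved text is what mitigates hallucination.
    context = "\n".join(store.retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

store = DocStore()
store.add("On-device retrieval keeps private data on the user's machine.")
store.add("Hallucination is a failure mode where an LLM asserts unsupported facts.")
print(build_prompt(store, "How does on-device retrieval protect private data?"))
```

The design choice the sketch highlights is that the store, not the model, holds the knowledge: updating or deleting a document immediately changes what the model sees, without retraining.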

[BOOK][B] The Blue Behemoth

L Brackett - 2011 - books.google.com
A huge amount of money for a simple job. The deal seemed too good to be true... Excerpt
Bucky Shannon leaned forward across the little hexagonal table. He knocked over the …