[HTML] A survey of GPT-3 family large language models including ChatGPT and GPT-4

KS Kalyan - Natural Language Processing Journal, 2024 - Elsevier
Large language models (LLMs) are a special class of pretrained language models (PLMs)
obtained by scaling model size, pretraining corpus and computation. LLMs, because of their …

[PDF] A survey of large language models

WX Zhao, K Zhou, J Li, T Tang… - arxiv preprint arxiv …, 2023 - arxiv.org

Large language models meet NLP: A survey

L Qin, Q Chen, X Feng, Y Wu, Y Zhang, Y Li… - arxiv preprint arxiv …, 2024 - arxiv.org
While large language models (LLMs) like ChatGPT have shown impressive capabilities in
Natural Language Processing (NLP) tasks, a systematic investigation of their potential in this …

MCoT: A Novel Benchmark for Multi-Domain Multi-step Multi-modal Chain-of-Thought

Q Chen, L Qin, J Zhang, Z Chen, X Xu… - arxiv preprint arxiv …, 2024 - arxiv.org
Multi-modal Chain-of-Thought (MCoT) requires models to leverage knowledge from both
textual and visual modalities for step-by-step reasoning, which is gaining increasing attention …

[HTML] Assessing the quality of automatic-generated short answers using GPT-4

L Rodrigues, FD Pereira, L Cabral, D Gašević… - … and Education: Artificial …, 2024 - Elsevier
Open-ended assessments play a pivotal role in enabling instructors to evaluate student
knowledge acquisition and provide constructive feedback. Integrating large language …

A comprehensive evaluation of quantization strategies for large language models

R Jin, J Du, W Huang, W Liu, J Luan… - Findings of the …, 2024 - aclanthology.org
Increasing the number of parameters in large language models (LLMs) usually improves
performance in downstream tasks but raises compute and memory costs, making …

Enhancing inference accuracy of Llama LLM using reversely computed dynamic temporary weights

Q **n, Q Nan - Authorea Preprints, 2024 - techrxiv.org
Reversely computed dynamic temporary weights introduce a novel and significant
enhancement to the adaptability and accuracy of large language models. By dynamically …

Cause and effect: can large language models truly understand causality?

S Ashwani, K Hegde, NR Mannuru… - Proceedings of the …, 2024 - ojs.aaai.org
With the rise of Large Language Models (LLMs), it has become crucial to understand their
capabilities and limitations in deciphering and explaining the complex web of causal …