In-context learning in large language models: A neuroscience-inspired analysis of representations
S Yousefi, H Hasanbeig, LM Betthauser, A Saran… - 2023 - openreview.net
Large language models (LLMs) exhibit remarkable performance improvement through in-
context learning (ICL) by leveraging task-specific examples in the input. However, the …
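The snippet above names the core mechanism: conditioning on task-specific examples placed directly in the input. A minimal sketch of that prompt format follows; the sentiment task and the "Input:/Label:" template are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of in-context learning (ICL) prompt construction.
# The task (sentiment labeling) and the prompt template are illustrative
# assumptions, not details from the paper above.

def build_icl_prompt(examples, query):
    """Concatenate task-specific demonstrations before the query so the
    model can infer the task from context alone, without weight updates."""
    lines = [f"Input: {text}\nLabel: {label}" for text, label in examples]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

demos = [
    ("The movie was wonderful.", "positive"),
    ("I would not recommend this restaurant.", "negative"),
]
prompt = build_icl_prompt(demos, "The service was quick and friendly.")
print(prompt)  # feed this string to any causal LM to elicit ICL behavior
```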
Investigating Efficacy of Perplexity in Detecting LLM-Generated Code
Large language model-generated code (LLMgCode) has become increasingly prevalent in
software development. Many studies report that LLMgCode has more quality and security …
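The detector family this paper examines scores code by its perplexity under a language model, on the premise that LLM-generated code tends to be more predictable to such a model. A minimal sketch, assuming GPT-2 as the scoring model and a hypothetical decision threshold; neither choice is from the paper.

```python
# Hedged sketch: scoring code by perplexity under a small causal LM.
# The scoring model ("gpt2") and the threshold are assumptions for
# illustration, not the paper's experimental setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(code: str) -> float:
    """Perplexity = exp(mean negative log-likelihood of the tokens)."""
    ids = tokenizer(code, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL over shifted tokens
    return torch.exp(loss).item()

snippet = "def add(a, b):\n    return a + b\n"
THRESHOLD = 20.0  # hypothetical: low perplexity -> flagged as LLM-generated
print(perplexity(snippet) < THRESHOLD)
```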
On the Limitations of Embedding Based Methods for Measuring Functional Correctness for Code Generation
A Naik - arXiv preprint arXiv:2405.01580, 2024 - arxiv.org
The task of code generation from natural language (NL2Code) has become extremely
popular, especially with the advent of Large Language Models (LLMs). However, efforts to …
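The limitation at issue is that embedding similarity between a generated program and a reference can stay high even when the generated program is functionally wrong. A small sketch of that failure mode, using TF-IDF character n-grams as a stand-in for a learned code embedder (an assumption for illustration):

```python
# Hedged sketch of the limitation: surface-level embedding similarity can
# be near 1.0 for two programs that behave differently. TF-IDF is a
# stand-in for a learned code embedder, assumed here for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "def f(a, b):\n    return a + b\n"   # correct solution
candidate = "def f(a, b):\n    return a - b\n"   # one-token bug

vecs = TfidfVectorizer(analyzer="char", ngram_range=(1, 3)).fit_transform(
    [reference, candidate]
)
print(cosine_similarity(vecs[0], vecs[1])[0, 0])  # close to 1.0

# A functional check disagrees with the embedding score:
ns_ref, ns_cand = {}, {}
exec(reference, ns_ref)
exec(candidate, ns_cand)
print(ns_ref["f"](2, 3), ns_cand["f"](2, 3))  # 5 vs -1: functionally different
```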
[BOOK] Discovering and Applying Geometric Properties for Probing Contextualized Representations
Y Zhou - 2023 - search.proquest.com
This dissertation studies the use of geometric properties to justify the impressive
performance of contextualized representations on various natural language processing …
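One concrete geometric property commonly probed in this line of work is anisotropy: the mean pairwise cosine similarity of contextual token vectors. A minimal sketch, assuming bert-base-uncased as the encoder; the model and sentences are illustrative, not the dissertation's setup.

```python
# Hedged sketch: one simple geometric probe, anisotropy, i.e. the mean
# off-diagonal cosine similarity of contextual token vectors. The model
# choice ("bert-base-uncased") is an assumption for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = ["The bank raised rates.", "She sat on the river bank."]
with torch.no_grad():
    batch = tok(sentences, return_tensors="pt", padding=True)
    hidden = model(**batch).last_hidden_state       # (batch, seq, dim)

mask = batch["attention_mask"].reshape(-1).bool()   # drop padding positions
vecs = hidden.reshape(-1, hidden.size(-1))[mask]
vecs = torch.nn.functional.normalize(vecs, dim=-1)
sims = vecs @ vecs.T                                # cosine similarity matrix
n = sims.size(0)
anisotropy = (sims.sum() - n) / (n * (n - 1))       # mean off-diagonal entry
print(float(anisotropy))  # values near 1.0 indicate a narrow-cone geometry
```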
ObscuraCoder: Powering Efficient Code LM Pre-Training Via Obfuscation Grounding
openreview.net
Language models (LMs) have become a staple of the code-writing toolbox. Their pre-
training recipe has, however, remained stagnant over recent years, barring the occasional …
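Obfuscation grounding pairs source code with an identifier-stripped variant so the model cannot lean on variable names alone. A toy AST renamer sketching that transformation; it is an illustrative assumption, not ObscuraCoder's actual pipeline, and a real pipeline would preserve builtins and imported names.

```python
# Hedged sketch of the kind of obfuscation such pre-training could ground
# on: replacing every identifier so the model must rely on code structure.
# A toy illustration, not ObscuraCoder's actual obfuscation pipeline.
import ast

class Obfuscate(ast.NodeTransformer):
    """Rename every function, argument, and variable name to VAR_i."""
    def __init__(self):
        self.names = {}

    def _fresh(self, name):
        return self.names.setdefault(name, f"VAR_{len(self.names)}")

    def visit_FunctionDef(self, node):
        node.name = self._fresh(node.name)
        self.generic_visit(node)  # also visits arguments and body
        return node

    def visit_arg(self, node):
        node.arg = self._fresh(node.arg)
        return node

    def visit_Name(self, node):
        node.id = self._fresh(node.id)
        return node

src = "def average(total, count):\n    result = total / count\n    return result\n"
tree = Obfuscate().visit(ast.parse(src))
print(ast.unparse(tree))  # identifiers gone, structure intact
```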