Analyzing and Adapting Large Language Models for Few-Shot Multilingual NLU: Are We There Yet?
Supervised fine-tuning (SFT), supervised instruction tuning (SIT) and in-context learning
(ICL) are three alternative, de facto standard approaches to few-shot learning. ICL has …
Revisiting non-English text simplification: A unified multilingual benchmark
Recent advancements in high-quality, large-scale English resources have pushed the
frontier of English Automatic Text Simplification (ATS) research. However, less work has …
Quantifying the dialect gap and its correlates across languages
Historically, researchers and consumers have noticed a decrease in quality when applying
NLP tools to minority variants of languages (e.g., Puerto Rican Spanish or Swiss German), but …
SiRA: Sparse mixture of low rank adaptation
Parameter Efficient Tuning has been a prominent approach to adapting Large Language
Models to downstream tasks. Most previous works consider adding dense trainable …
Optimal Transport Posterior Alignment for Cross-lingual Semantic Parsing
Cross-lingual semantic parsing transfers parsing capability from a high-resource language
(e.g., English) to low-resource languages with scarce training data. Previous work has …
DIALECTBENCH: An NLP Benchmark for Dialects, Varieties, and Closely-Related Languages
Language technologies should be judged on their usefulness in real-world use cases. An
often overlooked aspect in natural language processing (NLP) research and evaluation is …
SQATIN: Supervised instruction tuning meets question answering for improved dialogue NLU
Task-oriented dialogue (ToD) systems help users execute well-defined tasks across a
variety of domains (e.g., flight booking or food ordering), with their …
Survey on publicly available sinhala natural language processing tools and research
N De Silva - arXiv preprint arXiv:1906.02358, 2019 - arxiv.org
Sinhala is the native language of the Sinhalese people who make up the largest ethnic
group of Sri Lanka. The language belongs to the globe-spanning language tree, Indo …
A Systematic Study of Performance Disparities in Multilingual Task-Oriented Dialogue Systems
Achieving robust language technologies that can perform well across the world's many
languages is a central goal of multilingual NLP. In this work, we take stock of and empirically …
CoBa: Convergence Balancer for Multitask Finetuning of Large Language Models
Z Gong, H Yu, C Liao, B Liu, C Chen, J Li - arXiv preprint arXiv …, 2024 - arxiv.org
Multi-task learning (MTL) benefits the fine-tuning of large language models (LLMs) by
providing a single model with improved performance and generalization ability across tasks …