A survey on text-to-SQL parsing: Concepts, methods, and future directions

B Qin, B Hui, L Wang, M Yang, J Li, B Li… - arXiv preprint arXiv …, 2022 - arxiv.org
Text-to-SQL parsing is an essential and challenging task. The goal of text-to-SQL parsing is
to convert a natural language (NL) question to its corresponding structured query language …
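
To make the task concrete, here is a minimal, self-contained illustration in Python; the singer schema and the question/SQL pair are invented for this sketch, not drawn from the survey:

    import sqlite3

    # Text-to-SQL in miniature: an NL question, grounded in a database
    # schema, maps to an executable SQL query. Schema and data are invented.
    question = "How many singers are from France?"
    sql = "SELECT COUNT(*) FROM singer WHERE country = 'France'"

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE singer (singer_id INTEGER, name TEXT, country TEXT, age INTEGER)")
    con.executemany("INSERT INTO singer VALUES (?, ?, ?, ?)",
                    [(1, "Edith", "France", 30), (2, "Sam", "UK", 25)])
    print(con.execute(sql).fetchone()[0])  # -> 1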

Can LLM already serve as a database interface? A big bench for large-scale database grounded text-to-SQLs

J Li, B Hui, G Qu, J Yang, B Li, B Li… - Advances in …, 2023 - proceedings.neurips.cc
Text-to-SQL parsing, which aims at converting natural language instructions into executable
SQLs, has gained increasing attention in recent years. In particular, GPT-4 and Claude-2 …

Least-to-most prompting enables complex reasoning in large language models

D Zhou, N Schärli, L Hou, J Wei, N Scales… - arXiv preprint arXiv …, 2022 - arxiv.org
Chain-of-thought prompting has demonstrated remarkable performance on various natural
language reasoning tasks. However, it tends to perform poorly on tasks that require …
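
A minimal sketch of the least-to-most pattern, assuming only a generic text-completion function (the `complete` placeholder below is not an API from the paper):

    # Least-to-most prompting: ask the model to decompose a hard problem
    # into simpler subquestions, then solve them in order, feeding each
    # answer back into the context of the next.
    def complete(prompt: str) -> str:
        raise NotImplementedError("plug in any LLM completion call here")

    def least_to_most(question: str) -> str:
        decomposition = complete(
            "Decompose the problem into a numbered list of simpler subquestions.\n"
            f"Problem: {question}\nSubquestions:"
        )
        subquestions = [s.strip() for s in decomposition.splitlines() if s.strip()]
        context, answer = f"Problem: {question}\n", ""
        for sub in subquestions:            # solve subproblems sequentially,
            context += f"Q: {sub}\nA: "     # conditioning on earlier answers
            answer = complete(context)
            context += answer + "\n"
        return answer                       # answer to the final subquestion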

RESDSQL: Decoupling schema linking and skeleton parsing for text-to-SQL

H Li, J Zhang, C Li, H Chen - Proceedings of the AAAI Conference on …, 2023 - ojs.aaai.org
One of the most successful recent approaches to text-to-SQL builds on pre-trained language
models. Due to the structural property of SQL queries, the seq2seq model is responsible for parsing …
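
A rough sketch of the decoupling named in the title (rank schema items first, then produce a skeleton before the full query); both functions are placeholders, not RESDSQL's ranking-enhanced encoder or skeleton-aware decoder:

    # Decoupled pipeline: (1) schema linking as a ranking step, (2) skeleton
    # parsing before the final SQL. `generate` stands in for a seq2seq model.
    def rank_schema(question: str, schema_items: list[str]) -> list[str]:
        raise NotImplementedError("relevance ranking of tables/columns")

    def generate(prompt: str) -> str:
        raise NotImplementedError("seq2seq model call")

    def text_to_sql(question: str, schema_items: list[str]) -> str:
        relevant = rank_schema(question, schema_items)[:8]   # keep top items
        prompt = f"Schema: {', '.join(relevant)}\nQuestion: {question}\n"
        skeleton = generate(prompt + "SQL skeleton:")        # SELECT _ FROM _ WHERE _
        return generate(prompt + f"Skeleton: {skeleton}\nSQL:")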

Compositional exemplars for in-context learning

J Ye, Z Wu, J Feng, T Yu… - … Conference on Machine …, 2023 - proceedings.mlr.press
Large pretrained language models (LMs) have shown impressive In-Context Learning (ICL)
ability, where the model learns to do an unseen task simply by conditioning on a prompt …
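
The paper studies how to choose the in-context examples themselves; as a simple point of reference, here is the standard similarity-retrieval baseline such work improves on (embeddings and the example pool are assumed given):

    import numpy as np

    # Retrieval baseline for ICL exemplar selection: embed the test input
    # and take the most similar (input, output) pairs as demonstrations.
    # This is the common baseline, not the paper's learned selection method.
    def select_exemplars(query_vec, pool_vecs, pool_pairs, k=4):
        sims = pool_vecs @ query_vec / (
            np.linalg.norm(pool_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
        )
        return [pool_pairs[i] for i in np.argsort(-sims)[:k]]

    def build_prompt(exemplars, test_input):
        demos = "\n".join(f"Q: {q}\nA: {a}" for q, a in exemplars)
        return f"{demos}\nQ: {test_input}\nA:"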

Grammar prompting for domain-specific language generation with large language models

B Wang, Z Wang, X Wang, Y Cao… - Advances in Neural …, 2023 - proceedings.neurips.cc
Large language models (LLMs) can learn to perform a wide range of natural language tasks
from just a handful of in-context examples. However, for generating strings from highly …
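
The core move in grammar prompting is to place a BNF grammar for the target DSL in the prompt so generations stay inside the language; a minimal sketch with a toy grammar fragment, not one from the paper:

    # Grammar prompting sketch: show the model a BNF grammar and ask it to
    # answer only with a derivable program. The toy fragment is illustrative.
    BNF = """
    query  ::= "SELECT" cols "FROM" table ("WHERE" cond)?
    cols   ::= "*" | column ("," column)*
    cond   ::= column "=" value
    """

    def grammar_prompt(question: str) -> str:
        return (
            "Answer with a program derivable from this grammar:\n"
            f"{BNF}\n"
            f"Question: {question}\nProgram:"
        )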

PICARD: Parsing incrementally for constrained auto-regressive decoding from language models

T Scholak, N Schucher, D Bahdanau - arXiv preprint arXiv:2109.05093, 2021 - arxiv.org
Large pre-trained language models for textual data have an unconstrained output space; at
each decoding step, they can produce any of tens of thousands of sub-word tokens. When fine-tuned …
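
A minimal sketch of PICARD-style constrained decoding: at each step, candidate continuations are checked by an incremental validity test, and invalid prefixes are rejected before they enter the beam (`is_valid_prefix` stands in for PICARD's incremental SQL parser):

    # Reject invalid continuations during beam search. `candidates` pairs a
    # detokenized continuation with its model score.
    def is_valid_prefix(text: str) -> bool:
        raise NotImplementedError("incremental SQL-prefix check goes here")

    def constrained_step(prefix: str, candidates: list[tuple[str, float]], beam: int = 4):
        valid = [(tok, score) for tok, score in candidates
                 if is_valid_prefix(prefix + tok)]
        valid.sort(key=lambda ts: ts[1], reverse=True)
        return valid[:beam]   # surviving hypotheses for the next step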

Graphix-T5: Mixing pre-trained transformers with graph-aware layers for text-to-SQL parsing

J Li, B Hui, R Cheng, B Qin, C Ma, N Huo… - Proceedings of the …, 2023 - ojs.aaai.org
The task of text-to-SQL parsing, which aims at converting natural language questions into
executable SQL queries, has garnered increasing attention in recent years. One of the major …

CodeS: Towards building open-source language models for text-to-SQL

H Li, J Zhang, H Liu, J Fan, X Zhang, J Zhu… - Proceedings of the …, 2024 - dl.acm.org
Language models have shown promising performance on the task of translating natural
language questions into SQL queries (Text-to-SQL). However, most of the state-of-the-art …

Compositional semantic parsing with large language models

A Drozdov, N Schärli, E Akyürek, N Scales… - arXiv preprint arXiv …, 2022 - arxiv.org
Humans can reason compositionally when presented with new tasks. Previous research
shows that appropriate prompting techniques enable large language models (LLMs) to …