LogiQA 2.0: an improved dataset for logical reasoning in natural language understanding

H Liu, J Liu, L Cui, Z Teng, N Duan… - … on Audio, Speech …, 2023 - ieeexplore.ieee.org
NLP research on logical reasoning regains momentum with the recent releases of a handful
of datasets, notably LogiQA and ReClor. Logical reasoning is exploited in many probing …

Pre-trained language-meaning models for multilingual parsing and generation

C Wang, H Lai, M Nissim, J Bos - arXiv preprint arXiv:2306.00124, 2023 - arxiv.org
Pre-trained language models (PLMs) have achieved great success in NLP and have
recently been used for tasks in computational semantics. However, these tasks do not fully …

Gaining more insight into neural semantic parsing with challenging benchmarks

X Zhang, C Wang, R van Noord… - Proceedings of the Fifth …, 2024 - aclanthology.org
The Parallel Meaning Bank (PMB) serves as a corpus for semantic processing with
a focus on semantic parsing and text generation. Currently, we witness an excellent …

PMB5: Gaining More Insight into Neural Semantic Parsing with Challenging Benchmarks

X Zhang, C Wang, R van Noord, J Bos - arXiv preprint arXiv:2404.08354, 2024 - arxiv.org
The Parallel Meaning Bank (PMB) serves as a corpus for semantic processing with a focus
on semantic parsing and text generation. Currently, we witness an excellent performance of …

Data Augmentation for Low-Resource Italian NLP: Enhancing Semantic Processing with DRS

MS Amin, L Anselma, A Mazzei - CEUR Workshop Proceedings, 2024 - iris.unito.it
Discourse Representation Structure (DRS), a formal meaning representation, has shown
promising results in semantic parsing and natural language generation tasks for high …

[PDF] Language-neutral Semantic Parsing using Graph Transformations on Universal Dependencies

W Poelman - 2022 - wesselpoelman.nl
Current trends in semantic parsing primarily use large, pre-trained neural language models.
These models achieve impressive scores but also present some drawbacks. They require …

[PDF] Neural Text Rewriting

H Lai - research.rug.nl
Results table comparing Bi-LSTM, Bi-LSTM+SC, and Bi-LSTM+BLEU models on the E&M domain …