A survey on dialogue summarization: Recent advances and new frontiers

X Feng, X Feng, B Qin - arXiv preprint arXiv:2107.03175, 2021 - arxiv.org
Dialogue summarization aims to condense the original dialogue into a shorter version
covering salient information, which is a crucial way to reduce dialogue data overload …

Abstractive text summarization: State of the art, challenges, and improvements

H Shakil, A Farooq, J Kalita - Neurocomputing, 2024 - Elsevier
Specifically focusing on the landscape of abstractive text summarization, as opposed to
extractive techniques, this survey presents a comprehensive overview, delving into state-of …

Zero-shot cross-lingual summarization via large language models

J Wang, Y Liang, F Meng, B Zou, Z Li, J Qu… - arXiv preprint arXiv …, 2023 - arxiv.org
Given a document in a source language, cross-lingual summarization (CLS) aims to
generate a summary in a different target language. Recently, the emergence of Large …

Cross-lingual knowledge editing in large language models

J Wang, Y Liang, Z Sun, Y Cao, J Xu… - arXiv preprint arXiv …, 2023 - arxiv.org
Knowledge editing aims to change language models' performance on several special cases
(i.e., editing scope) by infusing the corresponding expected knowledge into them. With the …

Delving into parameter-efficient fine-tuning in code change learning: An empirical study

S Liu, J Keung, Z Yang, F Liu, Q Zhou… - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
Compared to Full-Model Fine-Tuning (FMFT), Parameter-Efficient Fine-Tuning (PEFT) has
demonstrated superior performance and lower computational overhead in several code …

ClidSum: A benchmark dataset for cross-lingual dialogue summarization

J Wang, F Meng, Z Lu, D Zheng, Z Li, J Qu… - arXiv preprint arXiv …, 2022 - arxiv.org
We present ClidSum, a benchmark dataset for building cross-lingual summarization systems
on dialogue documents. It consists of 67k+ dialogue documents from two subsets (i.e., …

Multi-modal knowledge graph transformer framework for multi-modal entity alignment

Q Li, C Ji, S Guo, Z Liang, L Wang, J Li - arXiv preprint arXiv:2310.06365, 2023 - arxiv.org
Multi-Modal Entity Alignment (MMEA) is a critical task that aims to identify equivalent entity
pairs across multi-modal knowledge graphs (MMKGs). However, this task faces challenges …

Continual learning with semi-supervised contrastive distillation for incremental neural machine translation

Y Liang, F Meng, J Wang, J Xu, Y Chen… - Proceedings of the …, 2024 - aclanthology.org
Incrementally expanding the capability of an existing translation model to solve new domain
tasks over time is a fundamental and practical problem, which usually suffers from …

CrossSum: Beyond English-centric cross-lingual summarization for 1,500+ language pairs

A Bhattacharjee, T Hasan, WU Ahmad, YF Li… - arXiv preprint arXiv …, 2021 - arxiv.org
We present CrossSum, a large-scale cross-lingual summarization dataset comprising 1.68
million article-summary samples in 1,500+ language pairs. We create CrossSum by aligning …

Towards unifying multi-lingual and cross-lingual summarization

J Wang, F Meng, D Zheng, Y Liang, Z Li, J Qu… - arXiv preprint arXiv …, 2023 - arxiv.org
To adapt text summarization to the multilingual world, previous work proposes multi-lingual
summarization (MLS) and cross-lingual summarization (CLS). However, these two tasks …