Discrete opinion tree induction for aspect-based sentiment analysis

C Chen, Z Teng, Z Wang, Y Zhang - … of the 60th Annual Meeting of …, 2022 - aclanthology.org
Dependency trees have been intensively used with graph neural networks for aspect-based
sentiment classification. Though effective, such methods rely on external dependency …
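
The recipe this snippet alludes to can be made concrete. Below is a minimal sketch (not the paper's model, which induces its own opinion trees) of the conventional baseline it builds on: one graph-convolution layer over a dependency parse. The toy sentence, head indices, and dimensions are all invented for illustration.

```python
# Minimal sketch of the dependency-tree + GNN baseline (illustrative only).
import numpy as np

def gcn_layer(H, heads, W):
    """One graph-convolution step over an undirected dependency tree.

    H     : (n, d) token representations
    heads : head index per token (-1 for the root), defining tree edges
    W     : (d, d) weight matrix
    """
    n = H.shape[0]
    A = np.eye(n)                        # self-loops
    for i, h in enumerate(heads):
        if h >= 0:
            A[i, h] = A[h, i] = 1.0      # undirected tree edge
    deg = A.sum(axis=1, keepdims=True)
    return np.maximum((A / deg) @ H @ W, 0.0)  # mean-aggregate + ReLU

# Toy example: "food was great", aspect token "food" at index 0.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 8))
heads = [2, 2, -1]                       # "great" is the root
W = rng.normal(size=(8, 8)) * 0.1
aspect_repr = gcn_layer(H, heads, W)[0]  # aspect node feeds a sentiment classifier
```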

What do they capture? A structural analysis of pre-trained language models for source code

Y Wan, W Zhao, H Zhang, Y Sui, G Xu… - Proceedings of the 44th …, 2022 - dl.acm.org
Recently, many pre-trained language models for source code have been proposed to model
the context of code and serve as a basis for downstream code intelligence tasks such as …

Head-driven phrase structure grammar parsing on Penn Treebank

J Zhou, H Zhao - arXiv preprint arXiv:1907.02684, 2019 - arxiv.org
Head-driven phrase structure grammar (HPSG) enjoys a uniform formalism representing rich
contextual syntactic and even semantic meanings. This paper makes the first attempt to …

A survey of syntactic-semantic parsing based on constituent and dependency structures

MS Zhang - Science China Technological Sciences, 2020 - Springer
Syntactic and semantic parsing has been investigated for decades and is a primary
topic in the natural language processing community. This article aims to give a brief survey of …

Unveiling code pre-trained models: Investigating syntax and semantics capacities

W Ma, S Liu, M Zhao, X Xie, W Wang, Q Hu… - ACM Transactions on …, 2024 - dl.acm.org
Code models have made significant advancements in code intelligence by encoding
knowledge about programming languages. While previous studies have explored the …

Ordered GNN: Ordering message passing to deal with heterophily and over-smoothing

Y Song, C Zhou, X Wang, Z Lin - arXiv preprint arXiv:2302.01524, 2023 - arxiv.org
Most graph neural networks follow the message-passing mechanism. However, this mechanism
faces the over-smoothing problem when message passing is applied multiple times to a graph …
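
The over-smoothing effect the abstract mentions is easy to demonstrate. The sketch below is a generic illustration, not the Ordered GNN method itself: it applies row-normalized mean aggregation repeatedly to a small graph with random features and prints how the spread of node features collapses.

```python
# Over-smoothing demo: repeated message passing homogenizes node features.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                          # add self-loops
P = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalized propagation

X = np.random.default_rng(0).normal(size=(4, 2))
for k in range(1, 33):
    X = P @ X                                  # one round of message passing
    if k in (1, 4, 16, 32):
        print(k, float(X.std(axis=0).mean()))  # feature spread shrinks with k
```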

Are pre-trained language models aware of phrases? Simple but strong baselines for grammar induction

T Kim, J Choi, D Edmiston, S Lee - arXiv preprint arXiv:2002.00737, 2020 - arxiv.org
With the recent success and popularity of pre-trained language models (LMs) in natural
language processing, there has been a rise in efforts to understand their inner workings. In …
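
One simple baseline in this line of work scores the gap between adjacent tokens using LM hidden states and splits the sentence top-down at the largest gap. The sketch below assumes that setup: the random vectors stand in for real pre-trained representations, and 1 - cosine is only one of several possible distance functions.

```python
# Hedged sketch of distance-based grammar induction from LM states.
import numpy as np

def syntactic_distances(H):
    """Distance between each pair of adjacent token vectors (1 - cosine)."""
    a, b = H[:-1], H[1:]
    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return 1.0 - cos

def induce_tree(tokens, dists):
    """Recursively split at the max adjacent distance to build a binary tree."""
    if len(tokens) == 1:
        return tokens[0]
    k = int(np.argmax(dists))                  # split where the link is weakest
    left = induce_tree(tokens[:k + 1], dists[:k])
    right = induce_tree(tokens[k + 1:], dists[k + 1:])
    return (left, right)

H = np.random.default_rng(0).normal(size=(5, 16))  # placeholder LM states
print(induce_tree(["the", "cat", "sat", "down", "."], syntactic_distances(H)))
```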

Rethinking self-attention: Towards interpretability in neural parsing

K Mrini, F Dernoncourt, Q Tran, T Bui, W Chang… - arXiv preprint arXiv …, 2019 - arxiv.org
Attention mechanisms have improved the performance of NLP tasks while allowing models
to remain explainable. Self-attention is currently widely used; however, interpretability is …

Fast and accurate neural CRF constituency parsing

Y Zhang, H Zhou, Z Li - arXiv preprint arXiv:2008.03736, 2020 - arxiv.org
Estimating probability distributions is one of the core issues in the NLP field. However, in both the
deep learning (DL) and pre-DL eras, unlike the vast applications of the linear-chain CRF in …
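
For reference, the linear-chain CRF the abstract contrasts against normalizes scores over all tag paths via the forward algorithm. Below is a minimal sketch with placeholder random scores; the paper itself targets tree-structured CRFs for constituency parsing, not this chain version.

```python
# Forward algorithm for a linear-chain CRF: computes the log-partition log Z.
import numpy as np

def log_partition(emissions, transitions):
    """emissions: (T, K) per-position tag scores; transitions: (K, K) scores
    for moving from tag i to tag j. Returns log-sum over all K^T tag paths."""
    alpha = emissions[0]                                     # (K,)
    for t in range(1, len(emissions)):
        scores = alpha[:, None] + transitions + emissions[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))   # stable logsumexp
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

rng = np.random.default_rng(0)
T, K = 4, 3                                                  # 4 tokens, 3 tags
logZ = log_partition(rng.normal(size=(T, K)), rng.normal(size=(K, K)))
print(logZ)   # p(y | x) = exp(score(x, y) - logZ) for any tag path y
```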

Bottom-up constituency parsing and nested named entity recognition with pointer networks

S Yang, K Tu - arXiv preprint arXiv:2110.05419, 2021 - arxiv.org
Constituency parsing and nested named entity recognition (NER) are similar tasks since
they both aim to predict a collection of nested and non-crossing spans. In this work, we cast …
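
The shared structure here is that every pair of predicted spans must be nested or disjoint, never crossing. A small sketch of that well-formedness check, with half-open span boundaries and made-up example spans:

```python
# Spans are (start, end) with end exclusive; valid outputs never cross.
def crosses(a, b):
    """True if spans a and b partially overlap (crossing brackets)."""
    (s1, e1), (s2, e2) = a, b
    return s1 < s2 < e1 < e2 or s2 < s1 < e2 < e1

def is_valid_forest(spans):
    """Check a span set could come from one parse tree / nested NER output."""
    return not any(crosses(a, b)
                   for i, a in enumerate(spans) for b in spans[i + 1:])

nested = [(0, 5), (0, 2), (2, 5), (3, 5)]   # well-formed nested set
crossing = [(0, 3), (2, 5)]                 # (0, 3) and (2, 5) cross
print(is_valid_forest(nested), is_valid_forest(crossing))  # True False
```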