Natural language processing for smart healthcare

B Zhou, G Yang, Z Shi, S Ma - IEEE Reviews in Biomedical …, 2022 - ieeexplore.ieee.org
Smart healthcare has achieved significant progress in recent years. Emerging artificial
intelligence (AI) technologies enable a range of smart applications across various healthcare …

Deep learning for intelligent human–computer interaction

Z Lv, F Poiesi, Q Dong, J Lloret, H Song - Applied Sciences, 2022 - mdpi.com
In recent years, gesture recognition and speech recognition, as important input methods in
Human–Computer Interaction (HCI), have been widely used in the field of virtual reality. In …

Reasoning with language model prompting: A survey

S Qiao, Y Ou, N Zhang, X Chen, Y Yao, S Deng… - arXiv preprint arXiv …, 2022 - arxiv.org
Reasoning, as an essential ability for complex problem-solving, can provide back-end
support for various real-world applications, such as medical diagnosis, negotiation, etc. This …

Causal machine learning: A survey and open problems

J Kaddour, A Lynch, Q Liu, MJ Kusner… - arXiv preprint arXiv …, 2022 - arxiv.org
Causal Machine Learning (CausalML) is an umbrella term for machine learning methods
that formalize the data-generation process as a structural causal model (SCM). This …
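
The snippet above defines an SCM only in passing; as a hedged illustration (not taken from the survey), a toy structural causal model with one confounder can be sampled as below, where do(X = x) replaces X's structural assignment with a constant. All variable names and coefficients are illustrative assumptions.

    # Minimal sketch of a structural causal model (SCM): Z -> X -> Y and Z -> Y.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_scm(n, do_x=None):
        z = rng.normal(size=n)                       # exogenous confounder
        x = 0.8 * z + rng.normal(scale=0.5, size=n)  # structural assignment for X
        if do_x is not None:                         # intervention do(X = do_x)
            x = np.full(n, float(do_x))
        y = 1.5 * x - 0.7 * z + rng.normal(scale=0.5, size=n)
        return x, y

    _, y_obs = sample_scm(10_000)           # observational distribution of Y
    _, y_do = sample_scm(10_000, do_x=1.0)  # interventional distribution under do(X=1)
    print(f"E[Y] observational: {y_obs.mean():.2f}, under do(X=1): {y_do.mean():.2f}")

Under these assumed coefficients, the observational mean of Y is near 0 while the interventional mean is near 1.5, the kind of gap that motivates formalizing the data-generation process causally rather than purely statistically.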

Text and patterns: For effective chain of thought, it takes two to tango

A Madaan, A Yazdanbakhsh - arXiv preprint arXiv:2209.07686, 2022 - arxiv.org
The past decade has witnessed dramatic gains in natural language processing and an
unprecedented scaling of large language models. These developments have been …

Towards faithful model explanation in NLP: A survey

Q Lyu, M Apidianaki, C Callison-Burch - Computational Linguistics, 2024 - direct.mit.edu
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to
understand. This has given rise to numerous efforts towards model explainability in recent …

Causal parrots: Large language models may talk causality but are not causal

M Zečević, M Willig, DS Dhami, K Kersting - arXiv preprint arXiv …, 2023 - arxiv.org
Some argue that scale is all that is needed to achieve AI, covering even causal models. We
make it clear that large language models (LLMs) cannot be causal and give reasons as to …

CIRS: Bursting filter bubbles by counterfactual interactive recommender system

C Gao, S Wang, S Li, J Chen, X He, W Lei, B Li… - ACM Transactions on …, 2023 - dl.acm.org
While personalization increases the utility of recommender systems, it also brings the issue
of filter bubbles, e.g., if the system keeps exposing and recommending the items that the user …

DISCO: Distilling counterfactuals with large language models

Z Chen, Q Gao, A Bosselut, A Sabharwal… - arXiv preprint arXiv …, 2022 - arxiv.org
Models trained with counterfactually augmented data learn representations of the causal
structure of tasks, enabling robust generalization. However, high-quality counterfactual data …
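
The snippet above stresses that high-quality counterfactual data is hard to obtain; as a hedged, hypothetical sketch (not the paper's actual pipeline), one simple quality filter is to keep an automatically generated counterfactual only if it flips a task model's prediction. `predict_label` below is a stand-in heuristic, not a real classifier.

    # Hypothetical counterfactual filter: keep a candidate only if the label flips.
    def predict_label(text: str) -> str:
        # placeholder heuristic standing in for any trained sentiment classifier
        negative_cues = ("bore", "dull", "mess")
        return "negative" if any(w in text.lower() for w in negative_cues) else "positive"

    def keep_counterfactual(original: str, candidate: str) -> bool:
        return predict_label(candidate) != predict_label(original)

    print(keep_counterfactual(
        "The film was a delight from start to finish.",  # original, predicted positive
        "The film was a bore from start to finish.",     # candidate, predicted negative
    ))  # True: the candidate flips the predicted label, so the pair is kept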

COCO-Counterfactuals: Automatically constructed counterfactual examples for image-text pairs

T Le, V Lal, P Howard - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Counterfactual examples have proven to be valuable in the field of natural language
processing (NLP) for both evaluating and improving the robustness of language models to …