Large language models for wearable sensor-based human activity recognition, health monitoring, and behavioral modeling: a survey of early trends, datasets …

E Ferrara - Sensors, 2024 - mdpi.com
The proliferation of wearable technology enables the generation of vast amounts of sensor
data, offering significant opportunities for advancements in health monitoring, activity …

Can large language models be good emotional supporter? Mitigating preference bias on emotional support conversation

D Kang, S Kim, T Kwon, S Moon, H Cho, Y Yu… - arXiv preprint arXiv …, 2024 - arxiv.org
Emotional Support Conversation (ESC) is a task aimed at alleviating individuals' emotional
distress through daily conversation. Given its inherent complexity and non-intuitive nature …

Harnessing large language models over transformer models for detecting Bengali depressive social media text: A comprehensive study

AK Chowdhury, SR Sujon, MSS Shafi… - Natural Language …, 2024 - Elsevier
In an era where the silent struggle of underdiagnosed depression pervades globally, our
research delves into the crucial link between mental health and social media. This work …

Limitations of the LLM-as-a-Judge approach for evaluating LLM outputs in expert knowledge tasks

A Szymanski, N Ziems, HA Eicher-Miller, TJJ Li… - arXiv preprint arXiv …, 2024 - arxiv.org
The potential of using Large Language Models (LLMs) themselves to evaluate LLM outputs
offers a promising method for assessing model performance across various contexts …

AdaCLF: An Adaptive Curriculum Learning Framework for Emotional Support Conversation

G Tu, T Niu, R Xu, B Liang… - IEEE Intelligent …, 2024 - ieeexplore.ieee.org
Emotional support conversation (ESC) aims to alleviate emotional distress using data-driven
approaches trained on human-generated responses. However, the subjective and open …

Understanding the relationship between prompts and response uncertainty in large language models

ZY Zhang, A Verma, F Doshi-Velez… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) are widely used in decision-making, but their reliability,
especially in critical tasks like healthcare, is not well-established. Therefore, understanding …

A client–server based recognition system: Non-contact single/multiple emotional and behavioral state assessment methods

X Zhu, Z Liu, E Cambria, X Yu, X Fan, H Chen… - Computer Methods and …, 2025 - Elsevier
Background and Objectives: In the current global health landscape, there is an
increasing demand for rapid and accurate assessment of mental states. Traditional …

Zero-shot explainable mental health analysis on social media by incorporating mental scales

W Li, Y Zhu, X Lin, M Li, Z Jiang, Z Zeng - Companion Proceedings of the …, 2024 - dl.acm.org
Traditional discriminative approaches in mental health analysis are known for their strong
capacity but lack interpretability and demand large-scale annotated data. The generative …

Explainable AI for Stress and Depression Detection in the Cyberspace and Beyond

E Cambria, B Gulyás, JS Pang, NV Marsh… - Pacific-Asia Conference …, 2024 - Springer
Stress and depression have emerged as prevalent challenges in contemporary society,
deeply intertwined with the complexities of modern life. This paper delves into the …

Can Large Language Models Identify Implicit Suicidal Ideation? An Empirical Evaluation

T Li, S Yang, J Wu, J Wei, L Hu, M Li, DF Wong… - arXiv preprint arXiv …, 2025 - arxiv.org
We present a comprehensive evaluation framework for assessing Large Language
Models' (LLMs) capabilities in suicide prevention, focusing on two critical aspects: the …