Validation guidelines for drug-target prediction methods

Z Tanoli, A Schulman, T Aittokallio - Expert Opinion on Drug …, 2025 - Taylor & Francis
Introduction: Mapping the interactions between pharmaceutical compounds and their
molecular targets is a fundamental aspect of drug discovery and repurposing. Drug-target …

scReader: Prompting Large Language Models to Interpret scRNA-seq Data

C Li, Q Long, Y Zhou, M Xiao - arXiv preprint arXiv:2412.18156, 2024 - arxiv.org
Large language models (LLMs) have demonstrated remarkable advancements, primarily
due to their capabilities in modeling the hidden relationships within text sequences. This …

Towards Graph Prompt Learning: A Survey and Beyond

Q Long, Y Yan, P Zhang, C Fang, W Cui, Z Ning… - arXiv preprint arXiv …, 2024 - arxiv.org
Large-scale "pre-train and prompt learning" paradigms have demonstrated remarkable
adaptability, enabling broad applications across diverse domains such as question …

GeneSum: Large Language Model-based Gene Summary Extraction

Z Chen, C Hu, M Wu, Q Long, X Wang… - 2024 IEEE …, 2024 - ieeexplore.ieee.org
Emerging topics in biomedical research are continuously expanding, providing a wealth of
information about genes and their function. This rapid proliferation of knowledge presents …

Knowledge Hierarchy Guided Biological-Medical Dataset Distillation for Domain LLM Training

X Cai, C Wang, Q Long, Y Zhou, M Xiao - arXiv preprint arXiv:2501.15108, 2025 - arxiv.org
The rapid advancement of large language models (LLMs) in biological-medical applications
has highlighted a gap between their potential and the limited scale and often low quality of …

Embodied AI-guided interactive digital teachers for education

Z Zhao, Z Yin, J Sun, P Hui - SIGGRAPH Asia 2024 Educator's Forum, 2024 - dl.acm.org
Traditional education is considered incapable of providing prompt feedback, facilitating
proactive learning, and giving indiscriminate responses. This has been observed in both in …

Task-KV: Task-aware KV Cache Optimization via Semantic Differentiation of Attention Heads

X He, J Liu, S Chen - arXiv preprint arXiv:2501.15113, 2025 - arxiv.org
KV cache is a widely used acceleration technique for large language model (LLM)
inference. However, its memory requirement grows rapidly with input length. Previous …

Empowering LLMs with Toolkits: An Open-Source Intelligence Acquisition Method

X Yuan, J Wang, H Zhao, Y Tian, F Qi - Future Internet, 2024 - search.proquest.com
The acquisition of cybersecurity threat intelligence is a critical task in the implementation of
effective security defense strategies. Recently, advancements in large language model …

GuidelineGuard: An Agentic Framework for Medical Note Evaluation with Guideline Adherence

MD Shahriyear - arXiv preprint arXiv:2411.06264, 2024 - arxiv.org
Although rapid advancements in Large Language Models (LLMs) are facilitating the
integration of artificial intelligence-based applications and services in healthcare, limited …

Large Language Models in Biomedicine and Healthcare

J Zhou, H Li, S Chen, Z Han, X Gao - Caduceus - researchgate.net
Large Language Models (LLMs) have revolutionized various fields, and their applications in
biomedicine and healthcare have shown transformative potential. These models, trained on …