LLM-based edge intelligence: A comprehensive survey on architectures, applications, security and trustworthiness

O Friha, MA Ferrag, B Kantarci… - IEEE Open Journal …, 2024 - ieeexplore.ieee.org
The integration of Large Language Models (LLMs) and Edge Intelligence (EI) introduces a
groundbreaking paradigm for intelligent edge devices. With their capacity for human-like …

Knowledge distillation on graphs: A survey

Y Tian, S Pei, X Zhang, C Zhang, N Chawla - ACM Computing Surveys, 2023 - dl.acm.org
Graph Neural Networks (GNNs) have received significant attention for demonstrating their
capability to handle graph data. However, they are difficult to deploy in resource …

Promptmm: Multi-modal knowledge distillation for recommendation with prompt-tuning

W Wei, J Tang, L Xia, Y Jiang, C Huang - Proceedings of the ACM on …, 2024 - dl.acm.org
Multimedia online platforms (e.g., Amazon, TikTok) have greatly benefited from the
incorporation of multimedia (e.g., visual, textual, and acoustic) content into their personal …

Enhancing federated semi-supervised learning with out-of-distribution filtering amidst class mismatches

J Xi, F Ni, S Dai, K Li, B Hong - Journal of Computer Technology …, 2024 - suaspress.org
Federated Learning (FL) has gained prominence as a method for training models on edge
computing devices, enabling the preservation of data privacy by eliminating the need to …

Transformers provably solve parity efficiently with chain of thought

J Kim, T Suzuki - arXiv preprint arXiv:2410.08633, 2024 - arxiv.org
This work provides the first theoretical analysis of training transformers to solve complex
problems by recursively generating intermediate states, analogous to fine-tuning for chain-of …

Can we soft prompt LLMs for graph learning tasks?

Z Liu, X He, Y Tian, NV Chawla - … Proceedings of the ACM on Web …, 2024 - dl.acm.org
Graph plays an important role in representing complex relationships in real-world
applications such as social networks, biological data and citation networks. In recent years …

LLaVA-KD: A Framework of Distilling Multimodal Large Language Models

Y Cai, J Zhang, H He, X He, A Tong, Z Gan… - arXiv preprint arXiv …, 2024 - arxiv.org
The success of Large Language Models (LLM) has led researchers to explore Multimodal
Large Language Models (MLLM) for unified visual and linguistic understanding. However …

Neuro-Inspired Information-Theoretic Hierarchical Perception for Multimodal Learning

X Xiao, G Liu, G Gupta, D Cao, S Li, Y Li, T Fang… - arXiv preprint arXiv …, 2024 - arxiv.org
Integrating and processing information from various sources or modalities are critical for
obtaining a comprehensive and accurate perception of the real world in autonomous …

MinPrompt: Graph-based minimal prompt data augmentation for few-shot question answering

X Chen, JY Jiang, WC Chang, CJ Hsieh, HF Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
Few-shot question answering (QA) aims at achieving satisfactory results on machine
question answering when only a few training samples are available. Recent advances …

Weighted-Reward Preference Optimization for Implicit Model Fusion

Z Yang, F Wan, L Zhong, T Shi, X Quan - arXiv preprint arXiv:2412.03187, 2024 - arxiv.org
While fusing heterogeneous open-source LLMs with varying architectures and sizes can
potentially integrate the strengths of different models, existing fusion methods face …