LLM-based edge intelligence: A comprehensive survey on architectures, applications, security and trustworthiness
The integration of Large Language Models (LLMs) and Edge Intelligence (EI) introduces a
groundbreaking paradigm for intelligent edge devices. With their capacity for human-like …
Knowledge distillation on graphs: A survey
Graph Neural Networks (GNNs) have received significant attention for demonstrating their
capability to handle graph data. However, they are difficult to deploy in resource …
PromptMM: Multi-modal knowledge distillation for recommendation with prompt-tuning
Multimedia online platforms (e.g., Amazon, TikTok) have greatly benefited from the
incorporation of multimedia (e.g., visual, textual, and acoustic) content into their personal …
Enhancing federated semi-supervised learning with out-of-distribution filtering amidst class mismatches
Federated Learning (FL) has gained prominence as a method for training models on edge
computing devices, enabling the preservation of data privacy by eliminating the need to …
Transformers provably solve parity efficiently with chain of thought
This work provides the first theoretical analysis of training transformers to solve complex
problems by recursively generating intermediate states, analogous to fine-tuning for chain-of …
Can we soft prompt LLMs for graph learning tasks?
Graphs play an important role in representing complex relationships in real-world
applications such as social networks, biological data, and citation networks. In recent years …
LLaVA-KD: A Framework of Distilling Multimodal Large Language Models
The success of Large Language Models (LLMs) has led researchers to explore Multimodal
Large Language Models (MLLMs) for unified visual and linguistic understanding. However …
Neuro-Inspired Information-Theoretic Hierarchical Perception for Multimodal Learning
Integrating and processing information from various sources or modalities are critical for
obtaining a comprehensive and accurate perception of the real world in autonomous …
MinPrompt: Graph-based minimal prompt data augmentation for few-shot question answering
Few-shot question answering (QA) aims at achieving satisfactory results on machine
question answering when only a few training samples are available. Recent advances …
Weighted-Reward Preference Optimization for Implicit Model Fusion
While fusing heterogeneous open-source LLMs with varying architectures and sizes can
potentially integrate the strengths of different models, existing fusion methods face …