From Google Gemini to OpenAI Q* (Q-Star): A survey of reshaping the generative artificial intelligence (AI) research landscape
This comprehensive survey explored the evolving landscape of generative Artificial
Intelligence (AI), with a specific focus on the transformative impacts of Mixture of Experts …
A survey on LoRA of large language models
Y Mao, Y Ge, Y Fan, W Xu, Y Mi, Z Hu… - Frontiers of Computer …, 2025 - Springer
Low-Rank Adaptation (LoRA), which updates the dense neural network layers with
pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning …
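The snippet above describes LoRA's core mechanism: a frozen weight matrix is augmented with a trainable low-rank update. A minimal NumPy sketch of that idea (shapes, scaling factor, and variable names are illustrative, not taken from the survey):

```python
import numpy as np

# Minimal sketch of Low-Rank Adaptation (LoRA): instead of updating the full
# weight matrix W (d_out x d_in), train two small matrices A (r x d_in) and
# B (d_out x r) with rank r << min(d_out, d_in). The effective weight is
# W + (alpha / r) * B @ A, leaving the frozen base weights untouched.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection, zero-init

def lora_forward(x):
    # x: (batch, d_in) -> (batch, d_out). The LoRA path is additive and
    # "pluggable": with B at zero it contributes nothing, so the adapted
    # model starts out identical to the base model.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d_in))
# With B zero-initialized, the LoRA output equals the base output at step 0.
assert np.allclose(lora_forward(x), x @ W.T)
```

In practice only A and B receive gradients, which is what makes the method parameter-efficient: the number of trainable values scales with r rather than with the full layer size.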
Mixture of cluster-conditional LoRA experts for vision-language instruction tuning
Instruction tuning of Large Vision-language Models (LVLMs) has revolutionized the
development of versatile models with zero-shot generalization across a wide range of …
Interactive AI with retrieval-augmented generation for next generation networking
With the advance of artificial intelligence (AI), the concept of interactive AI (IAI) has been
introduced, which can interactively understand and respond not only to human user input …
MineDreamer: Learning to follow instructions via chain-of-imagination for simulated-world control
It is a long-lasting goal to design a generalist embodied agent that can follow diverse
instructions in human-like ways. However, existing approaches often fail to steadily follow …
MiniGPT-3D: Efficiently aligning 3D point clouds with large language models using 2D priors
Large 2D vision-language models (2D-LLMs) have gained significant attention by bridging
Large Language Models (LLMs) with images using a simple projector. Inspired by their …
Harmonizing visual text comprehension and generation
In this work, we present TextHarmony, a unified and versatile multimodal generative model
proficient in comprehending and generating visual text. Simultaneously generating images …
LoRAMoE: Alleviating world knowledge forgetting in large language models via MoE-style plugin
Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling
them to align with human instructions and enhance their capabilities in downstream tasks …
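LoRAMoE's snippet describes combining low-rank adapters with a mixture-of-experts router. A hedged NumPy sketch of the general MoE-of-LoRA pattern (a softmax router mixing several low-rank experts over a frozen weight; all names, shapes, and the dense gating are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Sketch of an MoE-style LoRA plugin: several low-rank "experts" share one
# frozen base weight, and a learned router mixes their outputs per input.

rng = np.random.default_rng(1)
d, r, n_experts = 6, 2, 3

W = rng.normal(size=(d, d))                          # frozen base weight
A = rng.normal(scale=0.01, size=(n_experts, r, d))   # per-expert down-proj
B = rng.normal(scale=0.01, size=(n_experts, d, r))   # per-expert up-proj
G = rng.normal(size=(d, n_experts))                  # router weights (assumed dense)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_lora_forward(x):
    # x: (batch, d). The router assigns each input a distribution over
    # experts; the gated sum of low-rank updates is added to the frozen path.
    gates = softmax(x @ G)                                   # (batch, n_experts)
    expert_out = np.einsum('bd,erd,eor->beo', x, A, B)       # per-expert updates
    return x @ W.T + np.einsum('be,beo->bo', gates, expert_out)

x = rng.normal(size=(2, d))
y = moe_lora_forward(x)
assert y.shape == (2, d)
```

Real systems typically use sparse top-k gating rather than this dense softmax mixture, and may add auxiliary losses to balance expert load; the sketch only shows the routing-plus-low-rank-update structure.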
Mixture of insightful experts (MoTE): The synergy of thought chains and expert mixtures in self-alignment
As the capabilities of large language models (LLMs) have expanded dramatically, aligning
these models with human values presents a significant challenge. Traditional alignment …
MoME: Mixture of multimodal experts for generalist multimodal large language models
Multimodal large language models (MLLMs) have demonstrated impressive capabilities
across various vision-language tasks. However, a generalist MLLM typically underperforms …