Unleashing the potential of prompt engineering in large language models: a comprehensive review
This comprehensive review delves into the pivotal role of prompt engineering in unleashing
the capabilities of Large Language Models (LLMs). The development of Artificial Intelligence …
Generalized out-of-distribution detection and beyond in vision language model era: A survey
Detecting out-of-distribution (OOD) samples is crucial for ensuring the safety of machine
learning systems and has shaped the field of OOD detection. Meanwhile, several other …
Align your prompts: Test-time prompting with distribution alignment for zero-shot generalization
J Abdul Samadh, MH Gani, N Hussein… - Advances in …, 2023 - proceedings.neurips.cc
The promising zero-shot generalization of vision-language models such as CLIP has led to
their adoption using prompt learning for numerous downstream tasks. Previous works have …
Dual memory networks: A versatile adaptation approach for vision-language models
With the emergence of pre-trained vision-language models like CLIP, how to adapt them to
various downstream classification tasks has garnered significant attention in recent …
PromptKD: Unsupervised prompt distillation for vision-language models
Prompt learning has emerged as a valuable technique in enhancing vision-language
models (VLMs) such as CLIP for downstream tasks in specific domains. Existing work mainly …
Continual learning with pre-trained models: A survey
Nowadays, real-world applications often face streaming data, which requires the learning
system to absorb new knowledge as data evolves. Continual Learning (CL) aims to achieve …
MMA: Multi-modal adapter for vision-language models
Pre-trained Vision-Language Models (VLMs) have served as excellent foundation
models for transfer learning in diverse downstream tasks. However, tuning VLMs for few-shot …
DePT: Decoupled prompt tuning
This work breaks through the Base-New Tradeoff (BNT) dilemma in prompt tuning, i.e., the
better the tuned model generalizes to the base (or target) task, the worse it generalizes to …
TCP: Textual-based class-aware prompt tuning for visual-language model
Prompt tuning represents a valuable technique for adapting pre-trained visual-language
models (VLM) to various downstream tasks. Recent advancements in CoOp-based methods …
Exploring regional clues in CLIP for zero-shot semantic segmentation
CLIP has demonstrated marked progress in visual recognition due to its powerful pre-
training on large-scale image-text pairs. However, a critical challenge still remains: how to …