The rational speech act framework
J Degen - Annual Review of Linguistics, 2023 - annualreviews.org
The past decade has seen the rapid development of a new approach to pragmatics that
attempts to integrate insights from formal and experimental semantics and pragmatics …
The challenges and prospects of brain-based prediction of behaviour
Relating individual brain patterns to behaviour is fundamental in system neuroscience.
Recently, the predictive modelling approach has become increasingly popular, largely due …
Self-instruct: Aligning language models with self-generated instructions
Large "instruction-tuned" language models (i.e., finetuned to respond to instructions) have
demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they …
LLM-Planner: Few-shot grounded planning for embodied agents with large language models
This study focuses on using large language models (LLMs) as a planner for embodied
agents that can follow natural language instructions to complete complex tasks in a visually …
Visual language maps for robot navigation
Grounding language to the visual observations of a navigating agent can be performed
using off-the-shelf visual-language models pretrained on Internet-scale data (e.g., image …
Language models as zero-shot planners: Extracting actionable knowledge for embodied agents
Can world knowledge learned by large language models (LLMs) be used to act in
interactive environments? In this paper, we investigate the possibility of grounding high-level …
How much can CLIP benefit vision-and-language tasks?
Most existing Vision-and-Language (V&L) models rely on pre-trained visual encoders, using
a relatively small set of manually-annotated data (as compared to web-crawled data), to …
History-aware multimodal transformer for vision-and-language navigation
Vision-and-language navigation (VLN) aims to build autonomous visual agents that follow
instructions and navigate in real scenes. To remember previously visited locations and …
NavGPT: Explicit reasoning in vision-and-language navigation with large language models
Trained with an unprecedented scale of data, large language models (LLMs) like ChatGPT
and GPT-4 exhibit the emergence of significant reasoning abilities from model scaling. Such …
Think global, act local: Dual-scale graph transformer for vision-and-language navigation
Following language instructions to navigate in unseen environments is a challenging
problem for autonomous embodied agents. The agent not only needs to ground languages …