Security and privacy challenges of large language models: A survey
Large language models (LLMs) have demonstrated extraordinary capabilities and
contributed to multiple fields, such as generating and summarizing text, language …
A survey on large language models with multilingualism: Recent advances and new frontiers
The rapid development of Large Language Models (LLMs) demonstrates remarkable
multilingual capabilities in natural language processing, attracting global attention in both …
MetaMath: Bootstrap your own mathematical questions for large language models
Large language models (LLMs) have pushed the limits of natural language understanding
and exhibited excellent problem-solving ability. Despite the great success, most existing …
FigStep: Jailbreaking large vision-language models via typographic visual prompts
Large Vision-Language Models (LVLMs) signify a groundbreaking paradigm shift within the
Artificial Intelligence (AI) community, extending beyond the capabilities of Large Language …
ArtPrompt: ASCII art-based jailbreak attacks against aligned LLMs
Safety is critical to the usage of large language models (LLMs). Multiple techniques such as
data filtering and supervised fine-tuning have been developed to strengthen LLM safety …
Improved few-shot jailbreaking can circumvent aligned language models and their defenses
Recently, Anil et al. (2024) showed that many-shot (up to hundreds of) demonstrations
can jailbreak state-of-the-art LLMs by exploiting their long-context capability. Nevertheless …
SafeDecoding: Defending against jailbreak attacks via safety-aware decoding
As large language models (LLMs) become increasingly integrated into real-world
applications such as code generation and chatbot assistance, extensive efforts have been …
Red-Teaming for generative AI: Silver bullet or security theater?
In response to rising concerns surrounding the safety, security, and trustworthiness of
Generative AI (GenAI) models, practitioners and regulators alike have pointed to AI red …
COLD-Attack: Jailbreaking LLMs with stealthiness and controllability
Jailbreaks on large language models (LLMs) have recently received increasing attention.
For a comprehensive assessment of LLM safety, it is essential to consider jailbreaks with …
Jailbreak attacks and defenses against large language models: A survey
Large Language Models (LLMs) have performed exceptionally in various text-generative
tasks, including question answering, translation, code completion, etc. However, the over- …