Adversarial attacks of vision tasks in the past 10 years: A survey

C Zhang, X Xu, J Wu, Z Liu, L Zhou - arXiv preprint arXiv:2410.23687, 2024 - arxiv.org
Adversarial attacks, which manipulate input data to undermine model availability and
integrity, pose significant security threats during machine-learning inference. With the advent …

Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning

X Wang, T Chen, X Yang, Q Zhang, X Zhao… - arXiv preprint arXiv …, 2024 - arxiv.org
The open-sourcing of large language models (LLMs) accelerates application development,
innovation, and scientific progress. This includes both base models, which are pre-trained …