Generating Adversarial Texts by the Universal Tail Word Addition Attack
Deep neural networks (DNNs) are vulnerable to adversarial examples, which can mislead
models without affecting normal human judgment. In the image domain, such adversarial …