Natural language processing: state of the art, current trends and challenges

D Khurana, A Koli, K Khatter, S Singh - Multimedia Tools and Applications, 2023 - Springer
Natural language processing (NLP) has recently gained much attention for representing and
analyzing human language computationally. Its applications have spread across various fields …

A review of machine learning approaches to spam filtering

TS Guzella, WM Caminhas - Expert Systems with Applications, 2009 - Elsevier
In this paper, we present a comprehensive review of recent developments in the application
of machine learning algorithms to spam filtering, focusing on both textual- and image-based …

Explainable artificial intelligence in cybersecurity: A survey

N Capuano, G Fenza, V Loia, C Stanzione - IEEE Access, 2022 - ieeexplore.ieee.org
Nowadays, Artificial Intelligence (AI) is widely applied in every area of human beings' daily
lives. Despite its benefits, the application of AI suffers from the opacity of complex internal …

A survey on machine learning techniques for cyber security in the last decade

K Shaukat, S Luo, V Varadharajan, IA Hameed… - IEEE …, 2020 - ieeexplore.ieee.org
The pervasive growth and usage of the Internet and mobile applications have expanded
cyberspace. Cyberspace has become more vulnerable to automated and prolonged …

Weight poisoning attacks on pre-trained models

K Kurita, P Michel, G Neubig - arXiv preprint arXiv:2004.06660, 2020 - arxiv.org
Recently, NLP has seen a surge in the usage of large pre-trained models. Users download
weights of models pre-trained on large datasets, then fine-tune the weights on a task of their …

A unified evaluation of textual backdoor learning: Frameworks and benchmarks

G Cui, L Yuan, B He, Y Chen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Textual backdoor attacks pose a practical threat to NLP systems. By injecting a
backdoor in the training phase, the adversary could control model predictions via predefined …

Backdoor attacks on pre-trained models by layerwise weight poisoning

L Li, D Song, X Li, J Zeng, R Ma, X Qiu - arXiv preprint arXiv:2108.13888, 2021 - arxiv.org
Pre-Trained Models have been widely applied and recently
proved vulnerable under backdoor attacks: the released pre-trained weights can be …

Backdoor pre-trained models can transfer to all

L Shen, S Ji, X Zhang, J Li, J Chen, J Shi… - arXiv preprint arXiv …, 2021 - arxiv.org
Pre-trained general-purpose language models have been a dominant component in
enabling real-world natural language processing (NLP) applications. However, a pre-trained …

PLMmark: A secure and robust black-box watermarking framework for pre-trained language models

P Li, P Cheng, F Li, W Du, H Zhao, G Liu - Proceedings of the AAAI …, 2023 - ojs.aaai.org
The huge training overhead, considerable commercial value, and various potential security
risks make it urgent to protect the intellectual property (IP) of Deep Neural Networks (DNNs) …

[BOOK][B] The text mining handbook: advanced approaches in analyzing unstructured data

R Feldman, J Sanger - 2007 - books.google.com
Text mining is a new and exciting area of computer science research that tries to solve the
crisis of information overload by combining techniques from data mining, machine learning …