Robust neural information retrieval: An adversarial and out-of-distribution perspective

YA Liu, R Zhang, J Guo, M de Rijke, Y Fan… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advances in neural information retrieval (IR) models have significantly enhanced
their effectiveness across various IR tasks. The robustness of these models, essential for …

Robust information retrieval

YA Liu, R Zhang, J Guo, M de Rijke - … of the 47th International ACM SIGIR …, 2024 - dl.acm.org
Beyond effectiveness, the robustness of an information retrieval (IR) system is increasingly
attracting attention. When deployed, a critical technology such as IR should not only deliver …

Hallucination-minimized Data-to-answer Framework for Financial Decision-makers

S Roychowdhury, A Alvarez, B Moore… - … Conference on Big …, 2023 - ieeexplore.ieee.org
Large Language Models (LLMs) have so far been applied to build several automation and
personalized question-answering prototypes. However, scaling such prototypes to …

Data void exploits: Tracking & mitigation strategies

M Mannino, J Garcia, R Hazim, A Abouzied… - Proceedings of the 33rd …, 2024 - dl.acm.org
A data void is a gap in online information, providing an opportunity for the spread of
disinformation or a data void exploit. We introduce lightweight measures to track the …

Into the dark: unveiling internal site search abused for black hat SEO

Y Zhang, M Liu, B Liu, Y Zhang, H Duan… - … (USENIX Security 24 …, 2024 - cypher-z.github.io
Internal Site Search Abuse Promotion (ISAP) is a prevalent Black Hat Search
Engine Optimization (SEO) technique, which exploits the reputation of abused internal …

Invisible Threats: Backdoor Attack in OCR Systems

M Conti, N Farronato, S Koffas, L Pajola… - arXiv preprint arXiv …, 2023 - arxiv.org
Optical Character Recognition (OCR) is a widely used tool to extract text from scanned
documents. Today, the state-of-the-art is achieved by exploiting deep neural networks …

Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions

P Thota, S Nilizadeh - arXiv preprint arXiv:2410.20019, 2024 - arxiv.org
Large Language Models have introduced novel opportunities for text comprehension and
generation. Yet, they are vulnerable to adversarial perturbations and data poisoning attacks …

Lights Toward Adversarial Machine Learning: The Achilles Heel of Artificial Intelligence

L Pajola, M Conti - IEEE Intelligent Systems, 2024 - ieeexplore.ieee.org
Artificial intelligence (AI)-based technologies are starting to be adopted in the industrial
world in many different contexts and sectors, from health care to automotive, from …

Formalizing Robustness Against Character-Level Perturbations for Neural Network Language Models

Z Ma, X Feng, Z Wang, S Liu, M Ma, H Guan… - … Conference on Formal …, 2023 - Springer
The remarkable success of neural networks has led to a growing demand for robustness
verification and guarantees. However, the discrete nature of text data processed by language …