A survey of adversarial defenses and robustness in NLP
In the past few years, it has become increasingly evident that deep neural networks are not
resilient enough to withstand adversarial perturbations in input data, leaving them …
Large language models in radiology: fundamentals, applications, ethical considerations, risks, and future directions
With the advent of large language models (LLMs), the artificial intelligence revolution in
medicine and radiology is now more tangible than ever. Every day, an increasingly large …
Large language models can be strong differentially private learners
Differentially Private (DP) learning has seen limited success for building large deep learning
models of text, and straightforward attempts at applying Differentially Private Stochastic …
Five sources of bias in natural language processing
Recently, there has been an increased interest in demographically grounded bias in natural
language processing (NLP) applications. Much of the recent work has focused on describing …
Null it out: Guarding protected attributes by iterative nullspace projection
The ability to control for the kinds of information encoded in neural representation has a
variety of use cases, especially in light of the challenge of interpreting these models. We …
Pile of law: Learning responsible data filtering from the law and a 256GB open-source legal dataset
One concern with the rise of large language models lies with their potential for significant
harm, particularly from pretraining on biased, obscene, copyrighted, and private information …
Predictive biases in natural language processing models: A conceptual framework and overview
An increasing number of works in natural language processing have addressed the effect of
bias on the predicted outcomes, introducing mitigation techniques that act on different parts …
Information leakage in embedding models
Embeddings are functions that map raw input data to low-dimensional vector
representations, while preserving important semantic information about the inputs. Pre …
Privacy-preserving prompt tuning for large language model services
Prompt tuning provides an efficient way for users to customize Large Language Models
(LLMs) with their private data in the emerging LLM service scenario. However, the sensitive …
Personal LLM agents: Insights and survey about the capability, efficiency and security
Since the advent of personal computing devices, intelligent personal assistants (IPAs) have
been one of the key technologies that researchers and engineers have focused on, aiming …