Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps
Transformer-based language models have significantly advanced the state of the art in many
linguistic tasks. As this revolution continues, the ability to explain model predictions has …
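The title above names a gradient-weighted attention technique. As a hedged illustration only (not the authors' code; the toy attention matrix, gradient, and the averaging axis are assumptions), a Grad-SAM-style relevance score can be sketched by combining self-attention weights with their gradients and keeping the positively contributing terms:

```python
import numpy as np

# Hypothetical sketch of gradient x self-attention token relevance.
# A real implementation would extract `attention` and `grad` from a
# transformer via hooks; here both are random stand-ins.
rng = np.random.default_rng(0)
n_tokens = 4
attention = rng.dirichlet(np.ones(n_tokens), size=n_tokens)  # rows sum to 1
grad = rng.normal(size=(n_tokens, n_tokens))  # d(output)/d(attention)

# Element-wise product, ReLU to keep positive evidence, then average
# over the query axis to obtain one relevance score per token.
relevance = np.maximum(attention * grad, 0.0).mean(axis=0)
print(relevance.shape)  # one score per token
```

In practice such scores would be averaged over heads and layers before ranking tokens.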
Representation biases in sentence transformers
Variants of the BERT architecture specialised for producing full-sentence representations
often achieve better performance on downstream tasks than sentence embeddings …
Counterfactual evaluation for explainable AI
While recent years have witnessed the emergence of various explainable methods in
machine learning, to what degree the explanations really represent the reasoning process …
Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps
E Hauon - 2023 - search.proquest.com