AttentionViz: A global view of transformer attention
Transformer models are revolutionizing machine learning, but their inner workings remain
mysterious. In this work, we present a new visualization technique designed to help …
ConceptExplainer: Interactive explanation for deep neural networks from a concept perspective
Traditional deep learning interpretability methods which are suitable for model users cannot
explain network behaviors at the global level and are inflexible at providing fine-grained …
One wide feedforward is all you need
The Transformer architecture has two main non-embedding components: Attention and the
Feed Forward Network (FFN). Attention captures interdependencies between words …
Visual comparison of language model adaptation
Neural language models are widely used; however, their model parameters often need to be
adapted to the specific domains and tasks of an application, which is time- and resource …
Emblaze: Illuminating machine learning representations through interactive comparison of embedding spaces
Modern machine learning techniques commonly rely on complex, high-dimensional
embedding representations to capture underlying structure in the data and improve …
VA + Embeddings STAR: A State-of-the-Art Report on the Use of Embeddings in Visual Analytics
Over the past years, an increasing number of publications in information visualization,
especially within the field of visual analytics, have mentioned the term “embedding” when …
A perspective on complexity and networks science
G Caldarelli - Journal of Physics: Complexity, 2020 - iopscience.iop.org
Complexity and network science are nowadays used, or at least invoked, in a variety of
scientific research areas ranging from the analysis of financial systems to social structure and …
Class-constrained t-SNE: Combining data features and class probabilities
Data features and class probabilities are two main perspectives when, e.g., evaluating model
results and identifying problematic items. Class probabilities represent the likelihood that …
Explaining contextualization in language models using visual analytics
Despite the success of contextualized language models on various NLP tasks, it is still
unclear what these models really learn. In this paper, we contribute to the current efforts of …
Intuitively assessing ML model reliability through example-based explanations and editing model inputs
Interpretability methods aim to help users build trust in and understand the capabilities of
machine learning models. However, existing approaches often rely on abstract, complex …