How Optimal Transport Can Tackle Gender Biases in Multi-Class Neural Network Classifiers for Job Recommendations. F Jourdan, TT Kaninku, N Asher, JM Loubes, L Risser. Algorithms 16(3), 174, 2023. Cited by 12.
COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP tasks. F Jourdan, A Picard, T Fel, L Risser, JM Loubes, N Asher. Findings of the Association for Computational Linguistics: ACL 2023, 5120–5136. Cited by 10.
Are fairness metric scores enough to assess discrimination biases in machine learning? F Jourdan, L Risser, JM Loubes, N Asher. Third Workshop on Trustworthy Natural Language Processing, ACL 2023. Cited by 8.
TaCo: Targeted Concept Removal in Output Embeddings for NLP via Information Theory and Explainability. F Jourdan, L Béthune, A Picard, L Risser, N Asher. arXiv preprint arXiv:2312.06499, 2023. Cited by 1.
ConSim: Measuring Concept-Based Explanations' Effectiveness with Automated Simulatability. A Poché, A Jacovi, AM Picard, V Boutin, F Jourdan. arXiv preprint arXiv:2501.05855, 2025.
Advancing fairness in natural language processing: from traditional methods to explainability. F Jourdan. Université de Toulouse, 2024.
Breaking Bias: How Optimal Transport Can Help to Tackle Gender Biases in NLP Based Job Recommendation Systems? F Jourdan, TT Kaninku, N Asher, JM Loubes, L Risser. European Workshop on Algorithmic Fairness (EWAF), 2023.