Zhixue Zhao
Other names: Cass Zhixue Zhao
Verified email at sheffield.ac.uk - Homepage
Title · Cited by · Year
A comparative study of using pre-trained language models for toxic comment classification
Z Zhao, Z Zhang, F Hopfgartner
Companion Proceedings of the Web Conference 2021, 500-507, 2021
Cited by 60 · 2021
On the Impact of Temporal Concept Drift on Model Explanations
Z Zhao, G Chrysostomou, K Bontcheva, N Aletras
The 2022 Conference on Empirical Methods in Natural Language Processing …, 2022
Cited by 12 · 2022
SS-BERT: Mitigating Identity Terms Bias in Toxic Comment Classification by Utilising the Notion of "Subjectivity" and "Identity Terms"
Z Zhao, Z Zhang, F Hopfgartner
arXiv preprint arXiv:2109.02691, 2021
Cited by 10 · 2021
Incorporating Attribution Importance for Improving Faithfulness Metrics
Z Zhao, N Aletras
61st Annual Meeting of the Association for Computational Linguistics 1 (Long …, 2023
Cited by 9 · 2023
Utilizing subjectivity level to mitigate identity term bias in toxic comments classification
Z Zhao, Z Zhang, F Hopfgartner
Online Social Networks and Media 29, 100205, 2022
Cited by 8 · 2022
Detecting toxic content online and the effect of training data on classification performance
Z Zhao, Z Zhang, F Hopfgartner
CICLing 2019, 2019
Cited by 7 · 2019
Investigating hallucinations in pruned large language models for abstractive summarization
G Chrysostomou, Z Zhao, M Williams, N Aletras
Transactions of the Association for Computational Linguistics 12, 1163-1181, 2024
Cited by 5 · 2024
ReAGent: Towards A Model-agnostic Feature Attribution Method for Generative Language Models
Z Zhao, B Shan
ReLM at AAAI24, 2024
Cited by 4 · 2024
Detecting Edited Knowledge in Language Models
P Youssef, Z Zhao, J Schlötterer, C Seifert
arXiv preprint arXiv:2405.02765, 2024
Cited by 3 · 2024
Using Pre-trained Language Models for Toxic Comment Classification
Z Zhao
University of Sheffield, 2022
Cited by 3 · 2022
Comparing Explanation Faithfulness between Multilingual and Monolingual Fine-tuned Language Models
Z Zhao, N Aletras
NAACL 2024, 2024
Cited by 2 · 2024
Department of Computer Science, University of Sheffield
Z Zhao, B Shan
Cited by 2 · 2024
ScImage: How Good Are Multimodal Large Language Models at Scientific Text-to-Image Generation?
L Zhang, S Eger, Y Cheng, W Zhai, J Belouadi, C Leiter, SP Ponzetto, ...
ICLR2025, 2024
Cited by 1 · 2024
Language-specific Calibration for Pruning Multilingual Language Models
S Kurz, JJ Chen, L Flek, Z Zhao
arXiv preprint arXiv:2408.14398, 2024
Cited by 1 · 2024
Lighter, yet More Faithful: Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization
G Chrysostomou, Z Zhao, M Williams, N Aletras
arXiv preprint arXiv:2311.09335, 2023
Cited by 1 · 2023
Position: Editing Large Language Models Poses Serious Safety Risks
P Youssef, Z Zhao, D Braun, J Schlötterer, C Seifert
arXiv preprint arXiv:2502.02958, 2025
2025
Exploring Vision Language Models for Multimodal and Multilingual Stance Detection
J Vasilakes, C Scarton, Z Zhao
arXiv preprint arXiv:2501.17654, 2025
2025
Do LLMs Provide Consistent Answers to Health-Related Questions across Languages?
IB Schlicht, Z Zhao, B Sayin, L Flek, P Rosso
arXiv preprint arXiv:2501.14719, 2025
2025
Implicit Priors Editing in Stable Diffusion via Targeted Token Adjustment
F He, C Zhang, Z Zhao
arXiv preprint arXiv:2412.03400, 2024
2024
Articles 1–20