Emotional chatting machine: Emotional conversation generation with internal and external memory. H Zhou, M Huang, T Zhang, X Zhu, B Liu. Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018. Cited by 953.
Commonsense knowledge aware conversation generation with graph attention. H Zhou, T Young, M Huang, H Zhao, J Xu, X Zhu. IJCAI 2018, 4623-4629. Cited by 571.
Augmenting end-to-end dialogue systems with commonsense knowledge. T Young, E Cambria, I Chaturvedi, H Zhou, S Biswas, M Huang. Proceedings of the AAAI Conference on Artificial Intelligence 32 (1), 2018. Cited by 385.
Large language models are not robust multiple choice selectors. C Zheng, H Zhou, F Meng, J Zhou, M Huang. arXiv preprint arXiv:2309.03882, 2023. Cited by 170.
Label words are anchors: An information flow perspective for understanding in-context learning. L Wang, L Li, D Dai, D Chen, H Zhou, F Meng, J Zhou, X Sun. arXiv preprint arXiv:2305.14160, 2023. Cited by 140.
CPM: A large-scale generative Chinese pre-trained language model. Z Zhang, X Han, H Zhou, P Ke, Y Gu, D Ye, Y Qin, Y Su, H Ji, J Guan, F Qi, ... AI Open 2, 93-99, 2021. Cited by 124.
KdConv: A Chinese multi-domain dialogue dataset towards multi-turn knowledge-driven conversation. H Zhou, C Zheng, K Huang, M Huang, X Zhu. arXiv preprint arXiv:2004.04100, 2020. Cited by 124.
On the safety of conversational models: Taxonomy, dataset, and benchmark. H Sun, G Xu, J Deng, J Cheng, C Zheng, H Zhou, N Peng, X Zhu, ... arXiv preprint arXiv:2110.08466, 2021. Cited by 82.
On prompt-driven safeguarding for large language models. C Zheng, F Yin, H Zhou, F Meng, J Zhou, KW Chang, M Huang, N Peng. arXiv preprint arXiv:2401.18018, 2024. Cited by 66.
EVA: An open-domain Chinese dialogue system with large-scale generative pre-training. H Zhou, P Ke, Z Zhang, Y Gu, Y Zheng, C Zheng, Y Wang, CH Wu, H Sun, ... arXiv preprint arXiv:2108.01547, 2021. Cited by 48.
Towards codable text watermarking for large language models. L Wang, W Yang, D Chen, H Zhou, Y Lin, F Meng, J Zhou, X Sun. arXiv preprint arXiv:2307.15992, 2023. Cited by 47.
On large language models’ selection bias in multi-choice questions. C Zheng, H Zhou, F Meng, J Zhou, M Huang. arXiv preprint arXiv:2309.03882, 2023. Cited by 45.
Context-aware natural language generation for spoken dialogue systems. H Zhou, M Huang, X Zhu. Proceedings of COLING 2016, the 26th International Conference on …, 2016. Cited by 43.
Prompt-driven LLM safeguarding via directed representation optimization. C Zheng, F Yin, H Zhou, F Meng, J Zhou, KW Chang, M Huang, N Peng. arXiv e-prints, arXiv:2401.18018, 2024. Cited by 41.
CTRLEval: An unsupervised reference-free metric for evaluating controlled text generation. P Ke, H Zhou, Y Lin, P Li, J Zhou, X Zhu, M Huang. arXiv preprint arXiv:2204.00862, 2022. Cited by 29.
Domain-constrained advertising keyword generation. H Zhou, M Huang, Y Mao, C Zhu, P Shu, X Zhu. The World Wide Web Conference, 2448-2459, 2019. Cited by 20.
RECALL: A benchmark for LLMs robustness against external counterfactual knowledge. Y Liu, L Huang, S Li, S Chen, H Zhou, F Meng, J Zhou, X Sun. arXiv preprint arXiv:2311.08147, 2023. Cited by 19.
EARL: Informative knowledge-grounded conversation generation with entity-agnostic representation learning. H Zhou, M Huang, Y Liu, W Chen, X Zhu. Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2021. Cited by 15.
ROSE: Robust selective fine-tuning for pre-trained language models. L Jiang, H Zhou, Y Lin, P Li, J Zhou, R Jiang. arXiv preprint arXiv:2210.09658, 2022. Cited by 8.
Diffusion theory as a scalpel: Detecting and purifying poisonous dimensions in pre-trained language models caused by backdoor or bias. Z Zhang, D Chen, H Zhou, F Meng, J Zhou, X Sun. arXiv preprint arXiv:2305.04547, 2023. Cited by 7.