Hao Zhou
WeChat AI, Tencent
Verified email at tencent.com
Title
Cited by
Year
Emotional chatting machine: Emotional conversation generation with internal and external memory
H Zhou, M Huang, T Zhang, X Zhu, B Liu
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
953 · 2018
Commonsense knowledge aware conversation generation with graph attention.
H Zhou, T Young, M Huang, H Zhao, J Xu, X Zhu
IJCAI 18, 4623-4629, 2018
571 · 2018
Augmenting end-to-end dialogue systems with commonsense knowledge
T Young, E Cambria, I Chaturvedi, H Zhou, S Biswas, M Huang
Proceedings of the AAAI conference on artificial intelligence 32 (1), 2018
385 · 2018
Large language models are not robust multiple choice selectors
C Zheng, H Zhou, F Meng, J Zhou, M Huang
arXiv preprint arXiv:2309.03882, 2023
170 · 2023
Label words are anchors: An information flow perspective for understanding in-context learning
L Wang, L Li, D Dai, D Chen, H Zhou, F Meng, J Zhou, X Sun
arXiv preprint arXiv:2305.14160, 2023
140 · 2023
CPM: A large-scale generative Chinese pre-trained language model
Z Zhang, X Han, H Zhou, P Ke, Y Gu, D Ye, Y Qin, Y Su, H Ji, J Guan, F Qi, ...
AI Open 2, 93-99, 2021
124 · 2021
KdConv: A Chinese multi-domain dialogue dataset towards multi-turn knowledge-driven conversation
H Zhou, C Zheng, K Huang, M Huang, X Zhu
arXiv preprint arXiv:2004.04100, 2020
124 · 2020
On the safety of conversational models: Taxonomy, dataset, and benchmark
H Sun, G Xu, J Deng, J Cheng, C Zheng, H Zhou, N Peng, X Zhu, ...
arXiv preprint arXiv:2110.08466, 2021
82 · 2021
On prompt-driven safeguarding for large language models
C Zheng, F Yin, H Zhou, F Meng, J Zhou, KW Chang, M Huang, N Peng
arXiv preprint arXiv:2401.18018, 2024
66 · 2024
EVA: An open-domain Chinese dialogue system with large-scale generative pre-training
H Zhou, P Ke, Z Zhang, Y Gu, Y Zheng, C Zheng, Y Wang, CH Wu, H Sun, ...
arXiv preprint arXiv:2108.01547, 2021
48 · 2021
Towards codable text watermarking for large language models
L Wang, W Yang, D Chen, H Zhou, Y Lin, F Meng, J Zhou, X Sun
arXiv preprint arXiv:2307.15992, 2023
47 · 2023
On large language models’ selection bias in multi-choice questions
C Zheng, H Zhou, F Meng, J Zhou, M Huang
arXiv preprint arXiv:2309.03882 4, 2023
45 · 2023
Context-aware natural language generation for spoken dialogue systems
H Zhou, M Huang, X Zhu
Proceedings of COLING 2016, the 26th International Conference on …, 2016
43 · 2016
Prompt-driven LLM safeguarding via directed representation optimization
C Zheng, F Yin, H Zhou, F Meng, J Zhou, KW Chang, M Huang, N Peng
arXiv preprint arXiv:2401.18018, 2024
41 · 2024
CTRLEval: An unsupervised reference-free metric for evaluating controlled text generation
P Ke, H Zhou, Y Lin, P Li, J Zhou, X Zhu, M Huang
arXiv preprint arXiv:2204.00862, 2022
29 · 2022
Domain-constrained advertising keyword generation
H Zhou, M Huang, Y Mao, C Zhu, P Shu, X Zhu
The World Wide Web Conference, 2448-2459, 2019
20 · 2019
RECALL: A benchmark for LLMs robustness against external counterfactual knowledge
Y Liu, L Huang, S Li, S Chen, H Zhou, F Meng, J Zhou, X Sun
arXiv preprint arXiv:2311.08147, 2023
19 · 2023
EARL: Informative knowledge-grounded conversation generation with entity-agnostic representation learning
H Zhou, M Huang, Y Liu, W Chen, X Zhu
Proceedings of the 2021 conference on empirical methods in natural language …, 2021
15 · 2021
ROSE: Robust selective fine-tuning for pre-trained language models
L Jiang, H Zhou, Y Lin, P Li, J Zhou, R Jiang
arXiv preprint arXiv:2210.09658, 2022
8 · 2022
Diffusion theory as a scalpel: Detecting and purifying poisonous dimensions in pre-trained language models caused by backdoor or bias
Z Zhang, D Chen, H Zhou, F Meng, J Zhou, X Sun
arXiv preprint arXiv:2305.04547, 2023
7 · 2023
Articles 1–20