Bairu Hou
Verified email at ucsb.edu - Homepage
Title
Cited by
Year
Openattack: An open-source textual adversarial attack toolkit
G Zeng, F Qi, Q Zhou, T Zhang, Z Ma, B Hou, Y Zang, Z Liu, M Sun
arXiv preprint arXiv:2009.09191, 2020
138 · 2020
A survey on data selection for language models
A Albalak, Y Elazar, SM Xie, S Longpre, N Lambert, X Wang, ...
arXiv preprint arXiv:2402.16827, 2024
86 · 2024
PromptBoosting: Black-Box Text Classification with Ten Forward Passes
B Hou, J O'Connor, J Andreas, S Chang, Y Zhang
ICML 2023, 2022
50 · 2022
Defending large language models against jailbreak attacks via semantic smoothing
J Ji, B Hou, A Robey, GJ Pappas, H Hassani, Y Zhang, E Wong, S Chang
arXiv preprint arXiv:2402.16192, 2024
41 · 2024
Decomposing uncertainty for large language models through input clarification ensembling
B Hou, Y Liu, K Qian, J Andreas, S Chang, Y Zhang
arXiv preprint arXiv:2311.08718, 2023
33 · 2023
Try to substitute: An unsupervised chinese word sense disambiguation method based on hownet
B Hou, F Qi, Y Zang, X Zhang, Z Liu, M Sun
Proceedings of the 28th international conference on computational …, 2020
28 · 2020
Improving diffusion models for scene text editing with dual encoders
J Ji, G Zhang, Z Wang, B Hou, Z Zhang, B Price, S Chang
arXiv preprint arXiv:2304.05568, 2023
25 · 2023
Textgrad: Advancing robustness evaluation in nlp by gradient-driven optimization
B Hou, J Jia, Y Zhang, G Zhang, Y Zhang, S Liu, S Chang
arXiv preprint arXiv:2212.09254, 2022
21 · 2022
Certified robustness for large language models with self-denoising
Z Zhang, G Zhang, B Hou, W Fan, Q Li, S Liu, Y Zhang, S Chang
arXiv preprint arXiv:2307.07171, 2023
20 · 2023
Learning to attack: Towards textual adversarial attacking in real-world situations
Y Zang, B Hou, F Qi, Z Liu, X Meng, M Sun
arXiv preprint arXiv:2009.09192, 2020
14 · 2020
A survey on data selection for language models, 2024
A Albalak, Y Elazar, SM Xie, S Longpre, N Lambert, X Wang, ...
URL https://arxiv.org/abs/2402.16827, 2022
7 · 2022
Advancing the robustness of large language models through self-denoised smoothing
J Ji, B Hou, Z Zhang, G Zhang, W Fan, Q Li, Y Zhang, G Liu, S Liu, ...
arXiv preprint arXiv:2404.12274, 2024
5 · 2024
A probabilistic framework for llm hallucination detection via belief tree propagation
B Hou, Y Zhang, J Andreas, S Chang
arXiv preprint arXiv:2406.06950, 2024
4 · 2024
Instruction-Following Pruning for Large Language Models
B Hou, Q Chen, J Wang, G Yin, C Wang, N Du, R Pang, S Chang, T Lei
arXiv preprint arXiv:2501.02086, 2025
2025
ConDS: Context Distribution Shift for Robust In-Context Learning
S Yu, S Ahn, S Liang, B Hou, J Ji, S Chang, J Zhou