Aligning distillation for cold-start item recommendation

F Huang, Z Wang, X Huang, Y Qian, Z Li… - Proceedings of the 46th …, 2023 - dl.acm.org
Recommending cold items in recommendation systems is a longstanding challenge due to
the inherent differences between warm items, which are recommended based on user …

User response prediction in online advertising

Z Gharibshah, X Zhu - ACM Computing Surveys (CSUR), 2021 - dl.acm.org
Online advertising, as a vast market, has gained significant attention in various platforms
ranging from search engines, third-party websites, social media, and mobile apps. The …

Information retrieval meets large language models: a strategic report from Chinese IR community

Q Ai, T Bai, Z Cao, Y Chang, J Chen, Z Chen, Z Cheng… - AI Open, 2023 - Elsevier
The research field of Information Retrieval (IR) has evolved significantly, expanding beyond
traditional search to meet diverse user information needs. Recently, Large Language …

A general knowledge distillation framework for counterfactual recommendation via uniform data

D Liu, P Cheng, Z Dong, X He, W Pan… - Proceedings of the 43rd …, 2020 - dl.acm.org
Recommender systems are feedback loop systems, which often face bias problems such as
popularity bias, previous model bias and position bias. In this paper, we focus on solving the …

Denoising and prompt-tuning for multi-behavior recommendation

C Zhang, R Chen, X Zhao, Q Han, L Li - Proceedings of the ACM web …, 2023 - dl.acm.org
In practical recommendation scenarios, users often interact with items under multi-typed
behaviors (e.g., click, add-to-cart, and purchase). Traditional collaborative filtering techniques …

Cross-task knowledge distillation in multi-task recommendation

C Yang, J Pan, X Gao, T Jiang, D Liu… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Multi-task learning (MTL) has been widely used in recommender systems, wherein
predicting each type of user feedback on items (e.g., click, purchase) is treated as an individual …