Multi-task deep recommender systems: A survey

Y Wang, HT Lam, Y Wong, Z Liu, X Zhao… - arXiv preprint arXiv …, 2023 - arxiv.org
Multi-task learning (MTL) aims to learn related tasks in a unified model, achieving mutual
improvement among tasks by exploiting their shared knowledge. It is an important topic in …

MDFEND: Multi-domain fake news detection

Q Nan, J Cao, Y Zhu, Y Wang, J Li - Proceedings of the 30th ACM …, 2021 - dl.acm.org
Fake news spreads widely on social media across various domains, which leads to real-world
threats in many areas such as politics, disasters, and finance. Most existing approaches focus …

Memory-guided multi-view multi-domain fake news detection

Y Zhu, Q Sheng, J Cao, Q Nan, K Shu… - … on Knowledge and …, 2022 - ieeexplore.ieee.org
The wide spread of fake news is increasingly threatening both individuals and society. Great
efforts have been made for automatic fake news detection in a single domain (e.g., politics) …

Advances and challenges of multi-task learning method in recommender system: A survey

M Zhang, R Yin, Z Yang, Y Wang, K Li - arXiv preprint arXiv:2305.13843, 2023 - arxiv.org
Multi-task learning has been widely applied in computer vision, natural language
processing, and other fields, and has achieved good performance. In recent years, a lot of …

Modeling the sequential dependence among audience multi-step conversions with multi-task learning in targeted display advertising

D Xi, Z Chen, P Yan, Y Zhang, Y Zhu… - Proceedings of the 27th …, 2021 - dl.acm.org
In most real-world large-scale online applications (e.g., e-commerce or finance), customer
acquisition is usually a multi-step conversion process of audiences. For example, an …

Soft-label for multi-domain fake news detection

D Wang, W Zhang, W Wu, X Guo - IEEE Access, 2023 - ieeexplore.ieee.org
The spread of fake news across several fields has had serious negative impacts on the
public and society. Existing studies have shown that the use of multi-domain labels can …

Janus: A unified distributed training framework for sparse mixture-of-experts models

J Liu, JH Wang, Y Jiang - Proceedings of the ACM SIGCOMM 2023 …, 2023 - dl.acm.org
Scaling models to larger sizes to improve performance has become a trend in deep learning, and
the sparsely activated Mixture-of-Experts (MoE) is a promising architecture for scaling models …

Unicorn: A unified multi-tasking model for supporting matching tasks in data integration

J Tu, J Fan, N Tang, P Wang, G Li, X Du, X Jia… - Proceedings of the ACM …, 2023 - dl.acm.org
Data matching, which decides whether two data elements (e.g., string, tuple, column, or
knowledge graph entity) are the "same" (a.k.a. a match), is a key concept in data integration …

Multi-modal mixture of experts representation learning for sequential recommendation

S Bian, X Pan, WX Zhao, J Wang, C Wang… - Proceedings of the 32nd …, 2023 - dl.acm.org
Within online platforms, it is critical to capture dynamic user preferences from
sequential interaction behaviors in order to make accurate recommendations over time. Recently …

STEM: Unleashing the power of embeddings for multi-task recommendation

L Su, J Pan, X Wang, X Xiao, S Quan, X Chen… - Proceedings of the …, 2024 - ojs.aaai.org
Multi-task learning (MTL) has gained significant popularity in recommender systems as it
enables the simultaneous optimization of multiple objectives. A key challenge in MTL is …