Efficient and effective tree-based and neural learning to rank

S Bruch, C Lucchese, FM Nardini - Foundations and Trends® …, 2023 - nowpublishers.com
As information retrieval researchers, we not only develop algorithmic solutions to hard
problems, but we also insist on a proper, multifaceted evaluation of ideas. The literature on …

Soft Hybrid Knowledge Distillation against deep neural networks

J Zhang, Z Tao, S Zhang, Z Qiao, K Guo - Neurocomputing, 2024 - Elsevier
Traditional knowledge distillation approaches are typically designed for specific tasks, as
they primarily distill deep features from intermediate layers of a neural network, generally …
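For orientation, the feature-based distillation this snippet alludes to can be sketched as matching a student's intermediate activations to a teacher's through a learned projection. The layer widths, modules, and MSE loss below are illustrative assumptions, not the paper's method.

    import torch
    import torch.nn as nn

    # Illustrative layer widths (assumptions, not from the paper).
    T_DIM, S_DIM = 512, 128

    # Project student features into the teacher's feature space so the
    # two intermediate representations are directly comparable.
    proj = nn.Linear(S_DIM, T_DIM)
    mse = nn.MSELoss()

    def feature_distill_loss(student_feat, teacher_feat):
        # Teacher activations serve as fixed regression targets.
        return mse(proj(student_feat), teacher_feat.detach())

    student_feat = torch.randn(32, S_DIM)  # from a student intermediate layer
    teacher_feat = torch.randn(32, T_DIM)  # from the matching teacher layer
    loss = feature_distill_loss(student_feat, teacher_feat)

In practice this term is added to the student's task loss; a hybrid scheme like the one in the title would presumably combine it with other distillation signals.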

A teacher-free graph knowledge distillation framework with dual self-distillation

L Wu, H Lin, Z Gao, G Zhao, SZ Li - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Recent years have witnessed great success in handling graph-related tasks with Graph
Neural Networks (GNNs). Despite their great academic success, Multi-Layer Perceptrons …

Post-hoc selection of Pareto-optimal solutions in search and recommendation

V Paparella, VW Anelli, FM Nardini, R Perego… - Proceedings of the …, 2023 - dl.acm.org
Information Retrieval (IR) and Recommender Systems (RSs) tasks are moving from
computing a ranking of final results based on a single metric to multi-objective problems …
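As a reference point for the post-hoc step named in the title, a generic Pareto filter over candidate solutions scored on several metrics fits in a few lines. This sketch assumes every metric is to be maximized; it is not the authors' specific selection strategy.

    def pareto_front(points):
        """Keep only non-dominated points; higher is better on every metric."""
        def dominates(a, b):
            return (all(x >= y for x, y in zip(a, b))
                    and any(x > y for x, y in zip(a, b)))
        return [p for p in points if not any(dominates(q, p) for q in points)]

    # Hypothetical (nDCG, diversity) scores for four ranking configurations.
    candidates = [(0.42, 0.61), (0.45, 0.55), (0.40, 0.70), (0.41, 0.57)]
    print(pareto_front(candidates))
    # [(0.42, 0.61), (0.45, 0.55), (0.40, 0.70)] -- the last point is dominated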

Intra-channel nonlinearity mitigation in optical fiber transmission systems using perturbation-based neural network

J Ding, T Liu, T Xu, W Hu, S Popov… - Journal of Lightwave …, 2022 - opg.optica.org
In this work, a perturbation-based neural network (P-NN) scheme with an embedded
bidirectional long short-term memory (biLSTM) layer is investigated to compensate for the …
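The snippet names the architecture but not its exact shape; a minimal sketch of a compensator with an embedded biLSTM layer follows, where the feature count, window length, hidden size, and two-channel (I/Q) output head are all assumptions for illustration rather than the paper's P-NN.

    import torch
    import torch.nn as nn

    class BiLSTMCompensator(nn.Module):
        """Toy nonlinearity compensator with an embedded biLSTM layer.

        Input: per-symbol perturbation features over a window of
        neighboring symbols. Output: a 2-channel (I/Q) correction
        for the center symbol. All sizes are illustrative.
        """
        def __init__(self, n_features=16, hidden=64):
            super().__init__()
            self.bilstm = nn.LSTM(n_features, hidden, batch_first=True,
                                  bidirectional=True)
            self.head = nn.Linear(2 * hidden, 2)

        def forward(self, x):                    # x: (batch, window, features)
            out, _ = self.bilstm(x)
            center = out[:, out.size(1) // 2]    # biLSTM state at the center
            return self.head(center)

    model = BiLSTMCompensator()
    correction = model(torch.randn(8, 21, 16))   # -> shape (8, 2)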

Learning to distill graph neural networks

C Yang, Y Guo, Y Xu, C Shi, J Liu, C Wang… - Proceedings of the …, 2023 - dl.acm.org
Graph Neural Networks (GNNs) can effectively capture both the topology and attribute
information of a graph, and have been extensively studied in many domains. Recently, there …

Multi-objective Learning to Rank by Model Distillation

J Tang, H Gao, L He, S Katariya - Proceedings of the 30th ACM SIGKDD …, 2024 - dl.acm.org
In online marketplaces, the objective of search ranking is not only purchase or conversion
(the primary objective) but also the purchase outcomes (secondary objectives), e.g., order …

A Self-Distilled Learning to Rank Model for Ad Hoc Retrieval

S Keshvari, F Saeedi, H Sadoghi Yazdi… - ACM Transactions on …, 2024 - dl.acm.org
Learning to rank models are broadly applied in ad hoc retrieval for scoring and sorting
documents based on their relevance to textual queries. The generalizability of the trained …

Neural network compression using binarization and few full-precision weights

FM Nardini, C Rulli, S Trani, R Venturini - arXiv preprint arXiv:2306.08960, 2023 - arxiv.org
Quantization and pruning are two effective methods for compressing Deep Neural Networks.
In this paper, we propose Automatic Prune Binarization (APB), a novel …
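Going only by the title's pairing of binarization with a few full-precision weights, one way to picture the idea is to binarize a weight tensor while keeping its largest-magnitude entries intact. The magnitude-based keep rule and the scaling factor below are assumptions, not the APB algorithm.

    import torch

    def binarize_with_outliers(w, keep_frac=0.01):
        """Binarize a weight tensor, keeping the top keep_frac of
        weights by magnitude in full precision (illustrative rule,
        not APB itself)."""
        k = max(1, int(keep_frac * w.numel()))
        thresh = w.abs().flatten().topk(k).values.min()
        keep = w.abs() >= thresh              # full-precision outliers
        alpha = w[~keep].abs().mean()         # scale for the binary part
        return torch.where(keep, w, alpha * torch.sign(w))

    w = torch.randn(256, 256)
    w_q = binarize_with_outliers(w)           # mostly {-alpha, +alpha}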

ReNeuIR at SIGIR 2024: The Third Workshop on Reaching Efficiency in Neural Information Retrieval

M Fröbe, J Mackenzie, B Mitra, FM Nardini… - Proceedings of the 47th …, 2024 - dl.acm.org
The Information Retrieval (IR) community has a rich history of empirically measuring novel
retrieval methods in terms of effectiveness and efficiency. However, as the search ecosystem …