ColBERTv2: Effective and efficient retrieval via lightweight late interaction

K Santhanam, O Khattab, J Saad-Falcon… - arXiv preprint arXiv …, 2021 - arxiv.org
Neural information retrieval (IR) has greatly advanced search and other knowledge-
intensive language tasks. While many neural IR methods encode queries and documents …

FedNL: Making Newton-type methods applicable to federated learning

M Safaryan, R Islamov, X Qian, P Richtárik - arXiv preprint arXiv …, 2021 - arxiv.org
Inspired by recent work of Islamov et al. (2021), we propose a family of Federated Newton
Learn (FedNL) methods, which we believe is a marked step in the direction of making …

ProgFed: effective, communication, and computation efficient federated learning by progressive training

HP Wang, S Stich, Y He, M Fritz - … Conference on Machine …, 2022 - proceedings.mlr.press
Federated learning is a powerful distributed learning scheme that allows numerous edge
devices to collaboratively train a model without sharing their data. However, training is …

Linearly converging error compensated SGD

E Gorbunov, D Kovalev… - Advances in Neural …, 2020 - proceedings.neurips.cc
In this paper, we propose a unified analysis of variants of distributed SGD with arbitrary
compressions and delayed updates. Our framework is general enough to cover different …

EF21-P and friends: Improved theoretical communication complexity for distributed optimization with bidirectional compression

K Gruntkowska, A Tyurin… - … Conference on Machine …, 2023 - proceedings.mlr.press
In this work we focus our attention on distributed optimization problems in the context where
the communication time between the server and the workers is non-negligible. We obtain …

Federated learning via synthetic data

J Goetz, A Tewari - arXiv preprint arXiv:2008.04489, 2020 - arxiv.org
Federated learning allows for the training of a model using data on multiple clients without
the clients transmitting that raw data. However, the standard method is to transmit model …

DoCoFL: Downlink compression for cross-device federated learning

R Dorfman, S Vargaftik… - … on Machine Learning, 2023 - proceedings.mlr.press
Many compression techniques have been proposed to reduce the communication overhead
of Federated Learning training procedures. However, these are typically designed for …

Analysis of error feedback in federated non-convex optimization with biased compression: Fast convergence and partial participation

X Li, P Li - International Conference on Machine Learning, 2023 - proceedings.mlr.press
In practical federated learning (FL) systems, the communication cost between the clients and
the central server can often be a bottleneck. In this paper, we focus on biased gradient …

A compressed gradient tracking method for decentralized optimization with linear convergence

Y Liao, Z Li, K Huang, S Pu - IEEE Transactions on Automatic …, 2022 - ieeexplore.ieee.org
Communication compression techniques are of growing interest for solving the
decentralized optimization problem under limited communication, where the global objective …

Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees

C Philippenko, A Dieuleveut - arXiv preprint arXiv:2006.14591, 2020 - arxiv.org
We introduce a framework, Artemis, to tackle the problem of learning in a distributed or
federated setting with communication constraints and device partial participation. Several …