Communication-efficient on-device machine learning: Federated distillation and augmentation under non-IID private data. E Jeong, S Oh, H Kim, J Park, M Bennis, SL Kim. arXiv preprint arXiv:1811.11479, 2018. (Cited by 789)
Mix2FLD: Downlink federated learning after uplink federated distillation with two-way mixup. S Oh, J Park, E Jeong, H Kim, M Bennis, SL Kim. IEEE Communications Letters 24(10), 2211-2215, 2020. (Cited by 69)
Distilling on-device intelligence at the network edge. J Park, S Wang, A Elgabli, S Oh, E Jeong, H Cha, H Kim, SL Kim, et al. arXiv preprint arXiv:1908.05895, 2019. (Cited by 37)
Multi-hop federated private data augmentation with sample compression. E Jeong, S Oh, J Park, H Kim, M Bennis, SL Kim. arXiv preprint arXiv:1907.06426, 2019. (Cited by 23)
Personalized decentralized federated learning with knowledge distillation. E Jeong, M Kountouris. ICC 2023 - IEEE International Conference on Communications, 1982-1987, 2023. (Cited by 19)
Asynchronous decentralized learning over unreliable wireless networks. E Jeong, M Zecchin, M Kountouris. ICC 2022 - IEEE International Conference on Communications, 607-612, 2022. (Cited by 18)
Hiding in the crowd: Federated data augmentation for on-device learning. E Jeong, S Oh, J Park, H Kim, M Bennis, SL Kim. IEEE Intelligent Systems 36(5), 80-87, 2020. (Cited by 18)
Draco: Decentralized asynchronous federated learning over continuous row-stochastic network matrices. E Jeong, M Kountouris. arXiv preprint arXiv:2406.13533, 2024. (Cited by 1)
Communication-efficient decentralized learning for intelligent networked systems. E Jeong. Sorbonne Université, 2024.