Incentive mechanism for spatial crowdsourcing with unknown social-aware workers: A three-stage Stackelberg game approach

Y Xu, M Xiao, J Wu, S Zhang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
In this paper, we investigate the incentive problem in Spatial Crowdsourcing (SC), where
mobile social-aware workers have unknown qualities and can share their answers to tasks …

Learning with Side Information: Elastic Multi-resource Control for the Open RAN

X Zhang, J Zuo, Z Huang, Z Zhou… - IEEE Journal on …, 2023 - ieeexplore.ieee.org
The open radio access network (O-RAN) architecture provides enhanced opportunities for
integrating machine learning in 5G/6G resource management by decomposing RAN …

Minimizing entropy for crowdsourcing with combinatorial multi-armed bandit

Y Song, H Xi - IEEE INFOCOM 2021-IEEE Conference on …, 2021 - ieeexplore.ieee.org
Nowadays, crowdsourcing has become an increasingly popular paradigm for large-scale
data collection, annotation, and classification. Today's rapid growth of crowdsourcing …

Variance-adaptive algorithm for probabilistic maximum coverage bandits with general feedback

X Liu, J Zuo, H Xie, C Joe-Wong… - IEEE INFOCOM 2023 …, 2023 - ieeexplore.ieee.org
Probabilistic maximum coverage (PMC) is an important problem that can model many
network applications, including mobile crowdsensing, network content delivery, and …

Multi-armed bandits with costly probes

EC Elumar, C Tekin, O Yağan - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Multi-armed bandits is a sequential decision-making problem where an agent must choose
between multiple actions to maximize its cumulative reward over time, while facing …

Artificial replay: a meta-algorithm for harnessing historical data in Bandits

S Banerjee, SR Sinclair, M Tambe, L Xu… - arXiv preprint arXiv …, 2022 - arxiv.org
How best to incorporate historical data to "warm start" bandit algorithms is an open question:
naively initializing reward estimates using all historical samples can suffer from spurious …

Multi-armed bandits with probing

EC Elumar, C Tekin, O Yağan - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
We examine a K-armed multi-armed bandit problem involving probes, where the agent is
permitted to probe one arm for a cost c ≥ 0 to observe its reward before making a pull. We …

Variance-Aware Bandit Framework for Dynamic Probabilistic Maximum Coverage Problem With Triggered or Self-Reliant Arms

X Dai, X Liu, J Zuo, H Xie, C Joe-Wong… - IEEE Transactions on …, 2025 - ieeexplore.ieee.org
The Probabilistic Maximum Coverage (PMC) problem plays a pivotal role in modeling
various network applications, such as mobile crowdsensing, which involves selecting nodes …

Online learning and bandits with queried hints

A Bhaskara, S Gollapudi, S Im, K Kollias… - arXiv preprint arXiv …, 2022 - arxiv.org
We consider the classic online learning and stochastic multi-armed bandit (MAB) problems,
when at each step, the online policy can probe and find out which of a small number ($k$) …

Efficient algorithms for multi-armed bandits with additional feedbacks: Modeling and algorithms

H Xie, H Gu, Z Qi - Information Sciences, 2023 - Elsevier
Multi-armed bandits (MAB) are widely applied to optimize networking applications such as
crowdsensing and mobile edge computing. Additional feedbacks (or partial feedbacks) on …