Incentive mechanism for spatial crowdsourcing with unknown social-aware workers: A three-stage Stackelberg game approach
In this paper, we investigate the incentive problem in Spatial Crowdsourcing (SC), where
mobile social-aware workers have unknown qualities and can share their answers to tasks …
Learning with Side Information: Elastic Multi-resource Control for the Open RAN
The open radio access network (O-RAN) architecture provides enhanced opportunities for
integrating machine learning in 5G/6G resource management by decomposing RAN …
Minimizing entropy for crowdsourcing with combinatorial multi-armed bandit
Nowadays, crowdsourcing has become an increasingly popular paradigm for large-scale
data collection, annotation, and classification. Today's rapid growth of crowdsourcing …
Variance-adaptive algorithm for probabilistic maximum coverage bandits with general feedback
Probabilistic maximum coverage (PMC) is an important problem that can model many
network applications, including mobile crowdsensing, network content delivery, and …
Multi-armed bandits with costly probes
Multi-armed bandits is a sequential decision-making problem where an agent must choose
between multiple actions to maximize its cumulative reward over time, while facing …
Artificial replay: a meta-algorithm for harnessing historical data in Bandits
How best to incorporate historical data to "warm start" bandit algorithms is an open question:
naively initializing reward estimates using all historical samples can suffer from spurious …
Multi-armed bandits with probing
We examine a K-armed multi-armed bandit problem involving probes, where the agent is
permitted to probe one arm for a cost c ≥ 0 to observe its reward before making a pull. We …
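The probing setting described in this entry can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes Bernoulli arms and uses an illustrative rule (probe the UCB-best arm only when its exploration bonus exceeds the probe cost, falling back to the empirically best arm on a bad probe); all names and thresholds are hypothetical.

```python
import math
import random

def ucb_with_probing(means, horizon, probe_cost=0.05, seed=0):
    """Toy UCB variant where the agent may pay `probe_cost` to observe the
    UCB-best arm's reward before committing to a pull (illustrative only).
    `means` are Bernoulli parameters hidden from the agent."""
    rng = random.Random(seed)
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:                       # pull each arm once to initialize
            arm, reward = t - 1, None
        else:
            bonus = [math.sqrt(2 * math.log(t) / counts[j]) for j in range(k)]
            ucb = [sums[j] / counts[j] + bonus[j] for j in range(k)]
            arm, reward = max(range(k), key=ucb.__getitem__), None
            if bonus[arm] > probe_cost:  # probe only when uncertainty outweighs cost
                total -= probe_cost
                reward = float(rng.random() < means[arm])  # observed before the pull
                if reward == 0.0:
                    # bad probe: fall back to the empirically best arm (unobserved)
                    arm = max(range(k), key=lambda j: sums[j] / counts[j])
                    reward = None
        if reward is None:
            reward = float(rng.random() < means[arm])
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total
```

The probe is worthwhile here exactly when the confidence-bound width (the agent's uncertainty) exceeds the cost c, which mirrors the cost–information trade-off these probing papers formalize.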
Variance-Aware Bandit Framework for Dynamic Probabilistic Maximum Coverage Problem With Triggered or Self-Reliant Arms
The Probabilistic Maximum Coverage (PMC) problem plays a pivotal role in modeling
various network applications, such as mobile crowdsensing, which involves selecting nodes …
Online learning and bandits with queried hints
We consider the classic online learning and stochastic multi-armed bandit (MAB) problems,
when at each step, the online policy can probe and find out which of a small number ($k$) …
Efficient algorithms for multi-armed bandits with additional feedbacks: Modeling and algorithms
Multi-armed bandits (MAB) are widely applied to optimize networking applications such as
crowdsensing and mobile edge computing. Additional feedbacks (or partial feedbacks) on …