TY - JOUR
T1 - Adversarial Bandits With Multi-User Delayed Feedback
T2 - Theory and Application
AU - Li, Yandi
AU - Guo, Jianxiong
AU - Li, Yupeng
AU - Wang, Tian
AU - Jia, Weijia
N1 - This work was supported in part by the National Key R&D Program of China under Grant 2022YFE0201400, in part by the National Natural Science Foundation of China (NSFC) under Grant 62202055, in part by the Start-up Fund from Beijing Normal University under Grant 310432104, in part by the Start-up Fund from BNU-HKBU United International College under Grant UICR0700018-22, in part by the Project of Young Innovative Talents of Guangdong Education Department under Grant 2022KQNCX102, and in part by the Interdisciplinary Intelligence SuperComputer Center, Beijing Normal University (Zhuhai).
Publisher Copyright:
© 2024 IEEE.
PY - 2024/10
Y1 - 2024/10
N2 - The multi-armed bandit (MAB) model has attracted significant research attention due to its applicability and effectiveness in various real-world scenarios such as resource allocation in uncertain environments, online advertising, and dynamic pricing. As an important branch, adversarial multi-armed bandit problems with delayed feedback have recently been proposed and studied, where a conceptual adversary strategically selects the reward distribution associated with each arm to challenge the learning algorithm, and the agent experiences delays in receiving the corresponding reward feedback from different users after taking an action on them. However, existing models restrict the feedback to being generated by only one user, which makes them inapplicable to the prevalent scenarios involving multiple users (e.g., ad recommendation for a group of users). In this paper, we consider delayed feedback from multiple users with no restriction on its internal distribution, while the feedback delay is arbitrary and unknown to the player in advance. Moreover, for different users in a round, no latent correlation among the feedback delays is assumed. We therefore formulate an adversarial multi-armed bandit problem with multi-user delayed feedback and design a modified EXP3 algorithm named MUD-EXP3, which makes a decision at each round based on the importance-weighted estimator of the received feedback from different users. On the premise of a known terminal round index T, number of users M, number of arms N, and delay upper bound d_max, we prove a regret of O(sqrt(T M^2 ln(N) (N e + 4 d_max))). Furthermore, for the more common case of unknown T, an adaptive algorithm named AMUD-EXP3 is proposed with a sublinear regret in T. Finally, extensive experiments are conducted to demonstrate the correctness and effectiveness of our algorithms in dynamic environments.
AB - The multi-armed bandit (MAB) model has attracted significant research attention due to its applicability and effectiveness in various real-world scenarios such as resource allocation in uncertain environments, online advertising, and dynamic pricing. As an important branch, adversarial multi-armed bandit problems with delayed feedback have recently been proposed and studied, where a conceptual adversary strategically selects the reward distribution associated with each arm to challenge the learning algorithm, and the agent experiences delays in receiving the corresponding reward feedback from different users after taking an action on them. However, existing models restrict the feedback to being generated by only one user, which makes them inapplicable to the prevalent scenarios involving multiple users (e.g., ad recommendation for a group of users). In this paper, we consider delayed feedback from multiple users with no restriction on its internal distribution, while the feedback delay is arbitrary and unknown to the player in advance. Moreover, for different users in a round, no latent correlation among the feedback delays is assumed. We therefore formulate an adversarial multi-armed bandit problem with multi-user delayed feedback and design a modified EXP3 algorithm named MUD-EXP3, which makes a decision at each round based on the importance-weighted estimator of the received feedback from different users. On the premise of a known terminal round index T, number of users M, number of arms N, and delay upper bound d_max, we prove a regret of O(sqrt(T M^2 ln(N) (N e + 4 d_max))). Furthermore, for the more common case of unknown T, an adaptive algorithm named AMUD-EXP3 is proposed with a sublinear regret in T. Finally, extensive experiments are conducted to demonstrate the correctness and effectiveness of our algorithms in dynamic environments.
KW - Adversarial bandit
KW - multi-user delayed feedback
KW - EXP3
KW - regret analysis
KW - online learning
KW - applications
UR - http://www.scopus.com/inward/record.url?scp=85184824551&partnerID=8YFLogxK
U2 - 10.1109/TMC.2024.3362237
DO - 10.1109/TMC.2024.3362237
M3 - Journal article
AN - SCOPUS:85184824551
SN - 1536-1233
VL - 23
SP - 9383
EP - 9397
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 10
ER -