Title
Quick Question: Interrupting Users for Microtasks with Reinforcement Learning
Authors
Abstract
Human attention is a scarce resource in modern computing. A multitude of microtasks vie for user attention to crowdsource information, perform momentary assessments, personalize services, and execute actions with a single touch. A lot gets done when these tasks take up the invisible free moments of the day. However, an interruption at an inappropriate time degrades productivity and causes annoyance. Prior work has exploited contextual cues and behavioral data to identify interruptibility for microtasks with much success. With Quick Question, we explore the use of reinforcement learning (RL) to schedule microtasks while minimizing user annoyance, and compare its performance with supervised learning. We model the problem as a Markov decision process and use the Advantage Actor-Critic (A2C) algorithm to identify interruptible moments based on context and the history of user interactions. In our 5-week, 30-participant study, we compare the proposed RL algorithm against supervised learning methods. While the mean number of responses is comparable between the two methods, RL is more effective at avoiding notification dismissals and improves user experience over time.
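To make the abstract's framing concrete, the sketch below shows one minimal way to cast interruption scheduling as an MDP with an advantage actor-critic style learner: the state is a context feature vector, the action is binary (send a microtask notification or stay silent), and the reward reflects the user's reaction (answered vs. dismissed). This is an illustrative toy, not the authors' implementation; the linear models, feature set, reward values, and learning rates are all assumptions.

```python
import random
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyActorCritic:
    """Linear actor-critic for a binary 'interrupt or not' decision.

    Hypothetical sketch: a real system would use richer context features
    and a neural policy, as suggested by the paper's use of A2C.
    """

    def __init__(self, n_features, lr_actor=0.05, lr_critic=0.05):
        self.w_actor = [0.0] * n_features   # logits for P(interrupt)
        self.w_critic = [0.0] * n_features  # state-value estimate
        self.lr_actor = lr_actor
        self.lr_critic = lr_critic

    def act(self, state):
        p = sigmoid(sum(w * s for w, s in zip(self.w_actor, state)))
        return (1 if random.random() < p else 0), p

    def update(self, state, action, reward):
        value = sum(w * s for w, s in zip(self.w_critic, state))
        advantage = reward - value  # one-step advantage, no bootstrapping
        p = sigmoid(sum(w * s for w, s in zip(self.w_actor, state)))
        # For a Bernoulli policy, d log pi(a|s) / d logit = a - p.
        grad_logit = (action - p) * advantage
        for i, s in enumerate(state):
            self.w_actor[i] += self.lr_actor * grad_logit * s
            self.w_critic[i] += self.lr_critic * advantage * s

# Toy environment: state = [bias, busy]; interrupting a busy user is
# penalized (dismissal), interrupting a free user is rewarded (response).
random.seed(0)
agent = TinyActorCritic(n_features=2)
for _ in range(2000):
    busy = random.random() < 0.5
    state = [1.0, 1.0 if busy else 0.0]
    action, _ = agent.act(state)
    reward = (-1.0 if busy else 1.0) if action == 1 else 0.0
    agent.update(state, action, reward)

_, p_free = agent.act([1.0, 0.0])
_, p_busy = agent.act([1.0, 1.0])
# The agent should learn to interrupt when the user is free, not busy.
print(p_free > p_busy)
```

The one-step advantage here (reward minus the critic's value estimate) stands in for the bootstrapped advantage a full A2C implementation would use over multi-step interaction histories.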