Paper Title
Predicting and Explaining Mobile UI Tappability with Vision Modeling and Saliency Analysis

Authors

Eldon Schoop, Xin Zhou, Gang Li, Zhourong Chen, Björn Hartmann, Yang Li

Abstract

We use a deep learning-based approach to predict whether a selected element in a mobile UI screenshot will be perceived by users as tappable, based only on pixels rather than the view hierarchies required by previous work. To help designers better understand model predictions and to provide more actionable design feedback than predictions alone, we additionally use ML interpretability techniques to help explain the output of our model. We use XRAI to highlight areas in the input screenshot that most strongly influence the tappability prediction for the selected region, and use k-Nearest Neighbors to present the most similar mobile UIs from the dataset with opposing influences on tappability perception.
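The abstract's saliency step uses XRAI, which segments the screenshot and ranks regions by their attribution to the prediction. As a much simpler stand-in for illustration only, the sketch below uses occlusion: zero out each region of the input and measure how much a toy tappability score drops. The `tappability_score` "model" here is invented for the example and is not the paper's network.

```python
def tappability_score(pixels):
    # Toy stand-in "model": score is just the mean pixel value.
    flat = [v for row in pixels for v in row]
    return sum(flat) / len(flat)

def occlusion_saliency(pixels, patch=2):
    """Score drop when each patch-sized region is zeroed out.

    Higher values mark regions that influence the score more strongly,
    analogous in spirit (but not in method) to an XRAI heatmap.
    """
    base = tappability_score(pixels)
    h, w = len(pixels), len(pixels[0])
    heatmap = []
    for y in range(0, h, patch):
        row_scores = []
        for x in range(0, w, patch):
            occluded = [row[:] for row in pixels]  # copy, then zero a patch
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    occluded[yy][xx] = 0
            row_scores.append(base - tappability_score(occluded))
        heatmap.append(row_scores)
    return heatmap

# A 4x4 toy "screenshot" whose only bright region is the top-left patch.
ui = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(occlusion_saliency(ui, patch=2))  # → [[0.25, 0.0], [0.0, 0.0]]
```

The real XRAI method additionally uses image segmentation and integrated gradients; occlusion is only the most compact way to convey the "which pixels drive the prediction" idea.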
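The k-Nearest Neighbors retrieval step can be sketched as a plain distance search over feature embeddings. Everything below (the toy 4-dimensional "embeddings" and the `knn_retrieve` helper) is a hypothetical illustration under the assumption that each UI is represented as a fixed-length feature vector; the paper's actual features and distance metric may differ.

```python
import math

def knn_retrieve(query, dataset, k=3):
    """Return indices of the k dataset embeddings nearest to `query` (Euclidean)."""
    dists = [(math.dist(query, emb), i) for i, emb in enumerate(dataset)]
    return [i for _, i in sorted(dists)[:k]]

# Toy embeddings for five UIs in a dataset (made up for the sketch).
dataset = [
    (0.9, 0.1, 0.0, 0.2),
    (0.1, 0.8, 0.3, 0.0),
    (0.85, 0.15, 0.05, 0.25),  # close to UI 0
    (0.0, 0.9, 0.2, 0.1),      # close to UI 1
    (0.5, 0.5, 0.5, 0.5),
]
query = (0.88, 0.12, 0.02, 0.22)
print(knn_retrieve(query, dataset, k=2))  # → [0, 2]
```

In the paper's setting, the retrieved neighbors are surfaced to the designer as concrete examples of similar UIs whose elements were perceived oppositely (tappable vs. not), which is what makes the feedback actionable rather than a bare score.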
