Paper Title

SSR-GNNs: Stroke-based Sketch Representation with Graph Neural Networks

Authors

Sheng Cheng, Yi Ren, Yezhou Yang

Abstract

This paper follows cognitive studies to investigate a graph representation for sketches, where the information of strokes, i.e., parts of a sketch, is encoded on vertices and inter-stroke information on edges. The resulting graph representation facilitates the training of a Graph Neural Network for classification tasks, and achieves accuracy and robustness comparable to the state of the art against translation and rotation attacks, as well as stronger attacks on graph vertices and topologies, i.e., modifications and additions of strokes, all without resorting to adversarial training. Prior studies on sketches, e.g., graph transformers, encode control points of strokes on vertices, which are not invariant to spatial transformations. In contrast, we encode vertices and edges using pairwise distances among control points to achieve invariance. Compared with existing generative sketch models for one-shot classification, our method does not rely on run-time statistical inference. Lastly, the proposed representation enables the generation of novel sketches that are structurally similar to, yet separable from, the existing dataset.
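To illustrate the invariance idea described in the abstract, below is a minimal sketch (not the authors' implementation) of encoding a stroke by the pairwise Euclidean distances among its control points. The function name `pairwise_distance_features` and the example stroke are hypothetical; the point is that such distance-based features are unchanged under translation and rotation, unlike raw control-point coordinates.

```python
import numpy as np

def pairwise_distance_features(control_points):
    """Encode a stroke (ordered 2D control points, shape (k, 2)) by the
    pairwise Euclidean distances among its control points. Distances are
    preserved by translation and rotation, so the feature is invariant
    to those spatial transformations."""
    pts = np.asarray(control_points, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]   # (k, k, 2) pairwise differences
    dists = np.linalg.norm(diffs, axis=-1)      # (k, k) distance matrix
    iu = np.triu_indices(len(pts), k=1)         # upper triangle, excluding diagonal
    return dists[iu]                            # flattened feature vector

# Quick check: the same stroke, rotated and translated, yields identical features.
stroke = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0], [3.0, 1.0]])
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
transformed = stroke @ R.T + np.array([5.0, -2.0])
assert np.allclose(pairwise_distance_features(stroke),
                   pairwise_distance_features(transformed))
```

Such per-stroke features would populate graph vertices, with inter-stroke relations on edges, before training a graph neural network; the exact feature construction in the paper may differ.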
