One paper accepted to the international journal "ACM Transactions on Information Systems"!
The following paper has been accepted to the international journal "ACM Transactions on Information Systems".
Shaowen Peng, Kazunari Sugiyama, Tsunenori Mine: "Less is More: Removing Redundancy of Graph Convolutional Networks for Recommendation", ACM Transactions on Information Systems, vol. 42, iss. 3, article no. 85, pp. 1-26, 2024.

@article{transactions2024-peng,
title = {Less is More: Removing Redundancy of Graph Convolutional Networks for Recommendation},
author = {Shaowen Peng and Kazunari Sugiyama and Tsunenori Mine},
doi = {10.1145/3632751},
year = {2024},
date = {2024-01-22},
urldate = {2024-01-22},
journal = {ACM Transactions on Information Systems},
volume = {42},
number = {85},
issue = {3},
pages = {1-26},
abstract = {While Graph Convolutional Networks (GCNs) have shown great potential in recommender systems and collaborative filtering (CF), they suffer from expensive computational complexity and poor scalability. On top of that, recent works mostly combine GCNs with other advanced algorithms which further sacrifice model efficiency and scalability. In this work, we unveil the redundancy of existing GCN-based methods in three aspects: (1) Feature redundancy. By reviewing GCNs from a spectral perspective, we show that most spectral graph features are noisy for recommendation, while stacking graph convolution layers can suppress but cannot completely remove the noisy features, which we mostly summarize from our previous work; (2) Structure redundancy. By providing a deep insight into how user/item representations are generated, we show that what makes them distinctive lies in the spectral graph features, while the core idea of GCNs (i.e., neighborhood aggregation) is not the reason making GCNs effective; and (3) Distribution redundancy. Following observations from (1), we further show that the number of required spectral features is closely related to the spectral distribution, where important information tends to be concentrated in more (fewer) spectral features on a flatter (sharper) distribution. To make important information be concentrated in as few features as possible, we sharpen the spectral distribution by increasing the node similarity without changing the original data, thereby reducing the computational cost. To remove these three kinds of redundancies, we propose a Simplified Graph Denoising Encoder (SGDE) only exploiting the top-K singular vectors without explicitly aggregating neighborhood, which significantly reduces the complexity of GCN-based methods. We further propose a scalable contrastive learning framework to alleviate data sparsity and to boost model robustness and generalization, leading to significant improvement. Extensive experiments on three real-world datasets show that our proposed SGDE not only achieves state-of-the-art but also shows higher scalability and efficiency than our previously proposed GDE as well as traditional and GCN-based CF methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
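For readers unfamiliar with the spectral view described in the abstract, the minimal sketch below illustrates the general idea of building user/item representations from the top-K singular vectors of a normalized interaction matrix, without any explicit neighborhood aggregation. It is only an illustrative approximation of that idea, not the paper's SGDE implementation: the matrix R, the symmetric normalization, the singular-value scaling, and the dot-product scoring are all assumptions made for this example.

# Illustrative sketch of the "top-K singular vectors" idea from the abstract.
# NOT the authors' SGDE code; normalization, scaling, and scoring are assumptions.
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import svds

def top_k_spectral_features(R: csr_matrix, k: int = 64):
    """Return rank-k user/item features from a normalized interaction matrix."""
    # Symmetric normalization D_u^{-1/2} R D_i^{-1/2}, a common choice in GCN-based CF.
    user_deg = np.asarray(R.sum(axis=1)).ravel()
    item_deg = np.asarray(R.sum(axis=0)).ravel()
    d_u = diags(np.power(np.maximum(user_deg, 1e-12), -0.5))
    d_i = diags(np.power(np.maximum(item_deg, 1e-12), -0.5))
    R_norm = d_u @ R @ d_i

    # Top-k singular vectors: no explicit neighborhood aggregation is performed.
    U, s, Vt = svds(R_norm, k=k)
    order = np.argsort(-s)            # svds returns singular values in ascending order
    U, s, Vt = U[:, order], s[order], Vt[order]

    user_emb = U * np.sqrt(s)         # scale spectral features by singular values
    item_emb = Vt.T * np.sqrt(s)
    return user_emb, item_emb

# Toy usage: 4 users x 5 items, implicit feedback.
R = csr_matrix(np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=np.float64))
user_emb, item_emb = top_k_spectral_features(R, k=2)
scores = user_emb @ item_emb.T        # predicted user-item preference scores
print(scores.shape)                   # (4, 5)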