TY - JOUR
T1 - LinkFND: Simple Framework for False Negative Detection in Recommendation Tasks With Graph Contrastive Learning
T2 - IEEE Access
AU - Kim, Sanghun
AU - Jang, Hyeryung
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2023
Y1 - 2023
N2 - Self-supervised learning has proven effective across many fields, most notably through contrastive learning. Recently, graph contrastive learning has achieved state-of-the-art performance on recommendation tasks. These methods create two augmented views of a graph and learn node embeddings so that the representations of the same target node in the two views attract each other, while those of non-target nodes repel each other. However, they overlook the fact that false negatives can arise when negative pairs are pushed apart. Studies in several domains have shown that false negatives in contrastive learning can harm model training, yet the impact of false negatives on link prediction tasks such as recommendation, where classes cannot be clearly defined, remains largely unexplored. In this paper, we propose an approach to define false negatives in link prediction tasks and fully exploit them during learning. Detecting false negatives and removing them from the negative pairs yields consistent improvements over existing graph contrastive learning methods on five benchmark datasets. In addition, comprehensive experimental studies show that removing false negatives is particularly advantageous for low-density datasets. Beyond these gains, our false negative detection and elimination can be naturally integrated into any graph contrastive learning architecture.
AB - Self-supervised learning has proven effective across many fields, most notably through contrastive learning. Recently, graph contrastive learning has achieved state-of-the-art performance on recommendation tasks. These methods create two augmented views of a graph and learn node embeddings so that the representations of the same target node in the two views attract each other, while those of non-target nodes repel each other. However, they overlook the fact that false negatives can arise when negative pairs are pushed apart. Studies in several domains have shown that false negatives in contrastive learning can harm model training, yet the impact of false negatives on link prediction tasks such as recommendation, where classes cannot be clearly defined, remains largely unexplored. In this paper, we propose an approach to define false negatives in link prediction tasks and fully exploit them during learning. Detecting false negatives and removing them from the negative pairs yields consistent improvements over existing graph contrastive learning methods on five benchmark datasets. In addition, comprehensive experimental studies show that removing false negatives is particularly advantageous for low-density datasets. Beyond these gains, our false negative detection and elimination can be naturally integrated into any graph contrastive learning architecture.
KW - False negative
KW - graph contrastive learning
KW - recommendation tasks
KW - self-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85181544444&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2023.3345338
DO - 10.1109/ACCESS.2023.3345338
M3 - Article
AN - SCOPUS:85181544444
SN - 2169-3536
VL - 11
SP - 145308
EP - 145319
JO - IEEE Access
JF - IEEE Access
ER -