Tri-Directional Contrastive Learning

I'm Me, We're Us, and I'm Us: Tri-directional Contrastive Learning on Hypergraphs

This repository contains the source code for the paper I'm Me, We're Us, and I'm Us: Tri-directional Contrastive Learning on Hypergraphs, by Dongjin Lee and Kijung Shin, presented at AAAI 2023. In this paper, we propose TriCL (Tri-directional Contrastive Learning), a general framework for contrastive learning on hypergraphs. Its main idea is tri-directional contrast: given two augmented views of a hypergraph, TriCL aims to maximize the agreement (a) between embeddings of the same node, (b) between embeddings of the same group of nodes, and (c) between each group's embedding and the embeddings of its member nodes. Together with simple but surprisingly effective data augmentation and negative sampling schemes, these three forms of contrast enable TriCL to capture both microscopic and mesoscopic structural information in node embeddings. Extensive experiments with 14 baseline approaches, 10 datasets, and two tasks demonstrate the effectiveness of TriCL; most notably, on node classification it almost consistently outperforms not only unsupervised competitors but also (semi-)supervised ones, often by significant margins.
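To make the tri-directional idea concrete, below is a minimal PyTorch sketch of the three contrast directions under simple assumptions: all three use an InfoNCE-style loss, and the function names, tensor shapes, and temperature `tau` are illustrative rather than the repository's actual API.

```python
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss: each row of `a` should agree with the
    corresponding row of `b`; all other rows in the batch act as negatives."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / tau                      # pairwise cosine similarities
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)

def tri_contrast(z1, z2, e1, e2, incidence, tau=0.5):
    """Tri-directional contrast between two augmented views (illustrative):
    (a) node-level:       the same node across the two views,
    (b) group-level:      the same hyperedge (group) across the two views,
    (c) membership-level: each hyperedge against its member nodes.

    z1, z2: node embeddings from views 1 and 2, shape (n, d)
    e1, e2: hyperedge embeddings from views 1 and 2, shape (m, d)
    incidence: (node_idx, edge_idx) membership pairs, shape (2, nnz)
    """
    node_loss = info_nce(z1, z2, tau)             # (a) "I'm Me"
    group_loss = info_nce(e1, e2, tau)            # (b) "We're Us"
    nodes, edges = incidence
    # (c) "I'm Us": contrast each (node, hyperedge) membership pair,
    # symmetrized across the two views
    member_loss = 0.5 * (info_nce(z1[nodes], e2[edges], tau)
                         + info_nce(e1[edges], z2[nodes], tau))
    return node_loss + group_loss + member_loss
```

Treating every non-corresponding pair in the batch as a negative is the simplest negative-sampling choice and is used here only for illustration; the paper's actual loss and sampling scheme may differ in detail.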