

Self-Supervised Learning: Generative or Contrastive?

In this section, we describe the experimental settings and results of our study on graph contrastive learning (GCL) for recommendation systems. We evaluated on six real-world datasets with diverse characteristics and compared our model against 13 baseline models under different scenarios.
Datasets and Baselines
We worked with six datasets: Yelp, Gowalla, Amazon-Books, Amazon-Electronics, Amazon-CDs, and Tmall. They span diverse domains, from business reviews (Yelp) and location check-ins (Gowalla) to purchases of books, electronics, and CDs (Amazon) and general e-commerce (Tmall). We compared our model against 13 baseline models with diverse technical characteristics:

  • Graph-based CFs (LightGCN, LT-OCF, HMLET, GF-CF, BSPM); see the propagation sketch after this list
  • Graph CL methods for other tasks (SimGRACE, GCA)
  • Hypergraph-based CFs (HCCF, SHT)
  • Graph CL methods for CF (SGL, SimGCL, XSimGCL, LightGCL)
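Several of these baselines, and graph-based CF models in general, build on LightGCN's linear propagation scheme, where user/item embeddings are repeatedly smoothed over the interaction graph and then averaged across layers. As a point of reference, here is a minimal sketch of that propagation in PyTorch; the names lightgcn_propagate, E0, and A_hat are ours for illustration, and reading 𝐾 as the number of propagation layers (the 𝐾 = 2 in the configurations below) is our assumption, not something stated in the paper.

```python
import torch

def lightgcn_propagate(E0: torch.Tensor, A_hat: torch.Tensor, K: int) -> torch.Tensor:
    # E0:    (num_nodes, dim) initial user/item embedding table
    # A_hat: (num_nodes, num_nodes) symmetrically normalized adjacency, as a sparse tensor
    # K:     number of propagation layers (assumed to correspond to the K = 2 below)
    layer_embs = [E0]
    e = E0
    for _ in range(K):
        e = torch.sparse.mm(A_hat, e)  # one hop of neighborhood smoothing
        layer_embs.append(e)
    # LightGCN takes the mean over all layer outputs as the final embedding
    return torch.stack(layer_embs, dim=0).mean(dim=0)
```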
Experimental Environments
We used Ubuntu 18.04 LTS, PyTorch 1.9.0, torchdiffeq 0.2.2, CUDA 11.3, an Intel i9 CPU, and an NVIDIA RTX 3090 GPU for all experiments. We used the following hyperparameter settings as the best configuration for each dataset:

  • Yelp: 𝐾 = 2, 𝑇 = 2, 𝛼 = 0.6, 𝜏 = 0.1, and 𝜆1 = 0.3
  • Gowalla: 𝐾 = 2, 𝑇 = 2, 𝛼 = 0.2, 𝜏 = 0.4, and 𝜆1 = 0.5
  • Amazon-Books: 𝐾 = 2, 𝑇 = 2, 𝛼 = 0.8, 𝜏 = 0.1, and 𝜆1 = 0.2
  • Amazon-Electronics: 𝐾 = 2, 𝑇 = 2, 𝛼 = 0.2, 𝜏 = 1.0, and 𝜆1 = 0.2
  • Amazon-CDs: 𝐾 = 2, 𝑇 = 2, 𝛼 = 0.1, 𝜏 = 0.2, and 𝜆1 = 0.2
  • Tmall: 𝐾 = 2, 𝑇 = 2, 𝛼 = 0.6, 𝜏 = 0.2, and 𝜆1 = 0.5
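In graph CL methods for CF such as SGL and SimGCL, 𝜏 is typically the temperature of the InfoNCE contrastive objective and 𝜆1 the weight of that objective relative to the main recommendation loss; we assume the same roles here (𝛼 and 𝑇 are model-specific and omitted). Below is a minimal PyTorch sketch of such a joint loss; info_nce and joint_loss are illustrative names, with z1 and z2 standing for embeddings of the same nodes under two contrastive views.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float) -> torch.Tensor:
    # z1, z2: (batch, dim) embeddings of the same nodes under two views
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau  # pairwise cosine similarities, scaled by the temperature
    labels = torch.arange(z1.size(0), device=z1.device)  # positive pairs sit on the diagonal
    return F.cross_entropy(logits, labels)

def joint_loss(rec_loss: torch.Tensor, z1: torch.Tensor, z2: torch.Tensor,
               tau: float = 0.1, lambda1: float = 0.3) -> torch.Tensor:
    # Main recommendation loss (e.g. BPR) plus the lambda1-weighted contrastive term
    return rec_loss + lambda1 * info_nce(z1, z2, tau)
```

With Yelp's configuration, for instance, one would call joint_loss(rec_loss, z1, z2, tau=0.1, lambda1=0.3).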

Results
Our experiments showed that our model outperformed all 13 baselines on every dataset, demonstrating the effectiveness of GCL for recommendation tasks. The improvement was consistent across datasets, which indicates that the approach generalizes well. The results also revealed that the choice of hyperparameters, 𝛼, 𝜏, and 𝜆1 in particular, significantly impacted performance, underscoring the importance of careful tuning; a sketch of one way to organize such a search follows.
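To make the tuning procedure concrete, here is a hypothetical grid search over the CL-specific hyperparameters, with candidate values drawn from the per-dataset configurations above; evaluate is a placeholder for training the model and returning a validation metric such as Recall@20, not a function from our codebase.

```python
from itertools import product
import random

def evaluate(alpha: float, tau: float, lambda1: float) -> float:
    # Placeholder: in practice, train the model with this configuration and
    # return a validation metric such as Recall@20. Stubbed with a random
    # value here so the sketch runs end to end.
    return random.random()

best_cfg, best_score = None, float("-inf")
for alpha, tau, lambda1 in product([0.1, 0.2, 0.4, 0.6, 0.8],  # alpha candidates
                                   [0.1, 0.2, 0.4, 1.0],       # tau candidates
                                   [0.2, 0.3, 0.5]):           # lambda1 candidates
    score = evaluate(alpha, tau, lambda1)
    if score > best_score:
        best_cfg, best_score = (alpha, tau, lambda1), score

print(f"best (alpha, tau, lambda1): {best_cfg}, validation score: {best_score:.4f}")
```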
In conclusion, our experiments demonstrated the superiority of our model over the baselines and its ability to adapt to diverse datasets. They show that contrastive learning on user-item interaction graphs is an effective way to improve recommendation systems, and the results provide a strong foundation for further research into GCL and its applications in recommendation.