
Computer Science, Information Retrieval

Enhancing Robustness via Structure Denoising and Embedding Perturbation: A Study on Neural Graph Collaborative Filtering

In recent years, contrastive learning (CL) has gained popularity in deep representation learning thanks to its ability to extract informative, invariant features from vast amounts of unlabeled data. The approach has been applied across many domains, including recommender systems, where it has proven effective at improving recommendation quality. In this article, we explore the worst-case scenario of poisoning attacks on CL-based recommender systems and introduce CLeaR (Contrastive Learning for Recommender Systems), a white-box formulation built on the bi-level optimization framework.
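
Before diving in, it helps to see what the CL objective in such a recommender typically looks like. The snippet below is a minimal sketch of an InfoNCE-style contrastive loss between two augmented views of the same user or item embeddings; the function name, temperature, and tensor shapes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(view_a: torch.Tensor, view_b: torch.Tensor,
                  temperature: float = 0.2) -> torch.Tensor:
    """InfoNCE-style contrastive loss between two embedding views.

    view_a, view_b: (batch, dim) embeddings of the same users or items
    under two augmentations (e.g., two perturbed graph views). Matching
    rows are positives; every other row in the batch is a negative.
    """
    a = F.normalize(view_a, dim=1)
    b = F.normalize(view_b, dim=1)
    logits = a @ b.t() / temperature                    # pairwise similarities
    labels = torch.arange(a.size(0), device=a.device)   # diagonal = positives
    return F.cross_entropy(logits, labels)
```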

CLeaR Overview in the White-Box Setting

To understand the worst-case scenario, we provide a brief overview of CLeaR. The process involves the following steps:

  1. Inner Optimization: The first step is to obtain the representations of both users and items, which serve as the foundational prior knowledge for the subsequent steps. We train the recommendation model by minimizing its loss function until the representations converge.
  2. Bi-Level Optimization: Building on the inner optimization result, the outer level optimizes the rank-promotion objective, i.e., the visibility of the target items. The goal is to maximize the exposure of the target items to an extensive user base while optimizing the model's generalization ability. A simplified sketch of this loop appears after the list.
  3. Poisoning Attacks: In the worst-case scenario, poisoning attacks aim to manipulate the model's predictions by injecting malicious items or manipulating user behavior. CLeaR mitigates these attacks by incorporating robustness measures into the bi-level optimization framework.
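
To make the bi-level structure concrete, here is a minimal, self-contained sketch in PyTorch. It uses a toy matrix-factorization recommender as a stand-in for the CL-based model, and a single differentiable inner update as a cheap surrogate for fully retraining the model at each outer step; all sizes, names, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import torch

# Toy matrix-factorization recommender standing in for the CL-based model.
n_users, n_items, dim = 100, 50, 16
U = torch.randn(n_users, dim, requires_grad=True)         # user embeddings
V = torch.randn(n_items, dim, requires_grad=True)         # item embeddings
clean = (torch.rand(n_users, n_items) < 0.05).float()     # observed interactions
fake = torch.zeros(n_users, n_items, requires_grad=True)  # injected profiles
target_items = torch.tensor([3, 7])                       # items to promote
lr = 0.05

for _ in range(20):  # outer loop: shape the injected interactions
    # Inner level: one *differentiable* gradient step on the fitting loss,
    # a cheap one-step surrogate for fully retraining the recommender.
    fit = ((U @ V.t() - (clean + fake)) ** 2).mean()
    gU, gV = torch.autograd.grad(fit, (U, V), create_graph=True)
    U2, V2 = U - lr * gU, V - lr * gV

    # Outer level: rank promotion, i.e. raise the predicted scores of the
    # target items under the virtually updated model. Because the inner
    # step was kept differentiable, the gradient flows back into `fake`.
    rank_promotion = -(U2 @ V2.t())[:, target_items].mean()
    (g_fake,) = torch.autograd.grad(rank_promotion, (fake,))
    with torch.no_grad():
        fake -= lr * g_fake
        fake.clamp_(0.0, 1.0)   # keep injected interactions in a valid range
        U -= lr * gU            # also apply the inner step for real
        V -= lr * gV
```

A real implementation would run many inner steps and project `fake` back to discrete user-item interactions; the one-step unrolling here only illustrates how the outer gradient reaches the injected data.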

Robustness Measures

To improve the robustness of CLeaR, we introduce several measures:

  1. Item Representations: We inspect item representations to detect and filter out malicious items.
  2. User Representations: We likewise inspect user representations to detect and filter out manipulated user behavior.
  3. Loss Function Modification: We modify the loss function to incorporate robustness constraints, so that poisoning attacks cannot significantly shift the model. A sketch of these measures follows the list.
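
The sketch below shows one simple way such measures could be realized: an outlier filter on embeddings (covering the first two measures) and a regularized loss (covering the third). The z-score rule, threshold, and penalty term are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def filter_outliers(emb: torch.Tensor, z_thresh: float = 3.0) -> torch.Tensor:
    """Flag embeddings far from the centroid as suspicious.

    Applies a simple z-score rule to each row's distance from the mean
    embedding; returns a boolean mask that is True for rows to keep.
    Works for item and user representations alike.
    """
    dist = (emb - emb.mean(dim=0, keepdim=True)).norm(dim=1)
    z = (dist - dist.mean()) / (dist.std() + 1e-8)
    return z < z_thresh

def robust_loss(base_loss: torch.Tensor, user_emb: torch.Tensor,
                item_emb: torch.Tensor, reg: float = 1e-3) -> torch.Tensor:
    """Recommendation loss plus an embedding-norm penalty.

    The penalty discourages the large embedding shifts that poisoning
    tends to induce, one simple way to encode a robustness constraint.
    """
    penalty = user_emb.pow(2).mean() + item_emb.pow(2).mean()
    return base_loss + reg * penalty
```

In use, `filter_outliers` would be applied to both the item and user embedding tables before each training round, and `robust_loss` would replace the plain recommendation loss in the inner optimization.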

Conclusion

In conclusion, CLeaR is a white-box formulation of CL for recommender systems that addresses the worst-case scenario of poisoning attacks. By incorporating robustness measures into the bi-level optimization framework, it mitigates these attacks while preserving the model's generalization ability. With CLeaR, recommendation quality is far less likely to be compromised by malicious attacks, yielding a more reliable and secure recommender system for users.