In recent years, contrastive learning (CL) has gained popularity in deep representation learning because it can learn informative, invariant features from large amounts of unlabeled data. The approach has been applied across domains, including recommender systems, where it has proven effective at improving recommendation quality. In this article, we explore the worst-case scenario of poisoning attacks on CL-based recommender systems and introduce CLeaR (Contrastive Learning for Recommender Systems), a white-box implementation of CL under the bi-level optimization framework.
Overview of CLeaR in the White-Box Setting
To understand the worst-case scenario, we first give a brief overview of how CLeaR operates. The process involves the following steps:
- Inner Optimization: The first step is to obtain representations of both users and items, which serve as the prior knowledge for the subsequent steps. We train the recommendation model by minimizing its loss function until the representations converge.
- Bi-Level Optimization: Given the inner optimization result, we perform bi-level optimization of the rank-promotion objective, which governs the visibility of the target items. The goal is to maximize the exposure of the target items to an extensive user base while preserving the model’s generalization ability; a minimal sketch of this bi-level loop appears after this list.
- Poisoning Attacks: In the worst-case scenario, poisoning attacks aim to manipulate the model’s predictions by injecting malicious items or manipulating user behavior. We propose CLeaR to mitigate these attacks by incorporating robustness measures into the bi-level optimization framework.
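To make the inner and outer steps concrete, below is a minimal PyTorch sketch of the bi-level loop. It is an illustration under simplifying assumptions, not CLeaR’s exact formulation: a matrix-factorization backbone trained with a squared reconstruction term plus an SGL-style InfoNCE contrastive term, the fake users’ interactions relaxed to continuous values (`fake_inter`), and the outer rank-promotion gradient obtained by unrolling a few differentiable inner steps. All dimensions, hyperparameters, and variable names here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_users, n_items, n_fake, dim = 100, 50, 5, 16
target_items = torch.tensor([3, 7])   # items the attacker wants promoted
lr_inner, tau = 0.1, 0.2

# Model parameters: embeddings for genuine + fake users, and for items.
U = torch.randn(n_users + n_fake, dim) * 0.1
V = torch.randn(n_items, dim) * 0.1

# Outer variable: a continuous relaxation of the fake users' interactions.
fake_inter = torch.zeros(n_fake, n_items, requires_grad=True)
outer_opt = torch.optim.Adam([fake_inter], lr=0.05)

# Toy genuine interaction matrix (binary implicit feedback).
real_inter = (torch.rand(n_users, n_items) < 0.05).float()

def inner_loss(U, V):
    scores = U @ V.T
    data = torch.cat([real_inter, torch.sigmoid(fake_inter)], dim=0)
    rec = ((scores - data) ** 2).mean()          # reconstruction term
    # SGL-style contrastive term: InfoNCE between two dropout views of items.
    v1 = F.normalize(F.dropout(V, 0.1), dim=-1)
    v2 = F.normalize(F.dropout(V, 0.1), dim=-1)
    cl = F.cross_entropy(v1 @ v2.T / tau, torch.arange(n_items))
    return rec + 0.1 * cl

for step in range(50):
    # Inner optimization: a few differentiable gradient steps on the model,
    # so the resulting representations stay a function of `fake_inter`.
    Ui, Vi = U.clone().requires_grad_(), V.clone().requires_grad_()
    for _ in range(3):
        gU, gV = torch.autograd.grad(inner_loss(Ui, Vi), (Ui, Vi),
                                     create_graph=True)
        Ui, Vi = Ui - lr_inner * gU, Vi - lr_inner * gV
    # Outer optimization: the rank-promotion objective pushes target-item
    # scores up for genuine users; its gradient flows back to `fake_inter`
    # through the unrolled inner steps (a truncated hypergradient).
    promotion = -(Ui[:n_users] @ Vi.T)[:, target_items].mean()
    outer_opt.zero_grad()
    promotion.backward()
    outer_opt.step()
    U, V = Ui.detach(), Vi.detach()   # commit the inner update
```

The truncated unrolling is one common approximation to the exact bi-level solution; implicit differentiation or simpler alternating heuristics are alternatives with different cost/accuracy trade-offs.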
Robustness Measures
To improve the robustness of CLeaR, we introduce several measures:
- Item Representations: We use item representations to detect and filter out malicious items.
- User Representations: We use user representations to detect and filter out manipulated user behavior.
- Loss Function Modification: We modify the loss function to incorporate robustness constraints so that the model is not significantly affected by poisoning attacks; the sketch after this list shows one possible form of these measures.
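The sketch below illustrates what such measures could look like in code. It is a hypothetical instantiation, not CLeaR’s actual mechanism: `anomaly_mask` flags users or items whose representation sits unusually far (beyond `k` standard deviations) from the population centroid, and `robust_loss` adds a centroid-anchoring penalty as one possible robustness constraint on the loss. Both function names and the threshold rule are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def anomaly_mask(emb: torch.Tensor, k: float = 2.0) -> torch.Tensor:
    """True = keep; False = flag the user/item as suspicious."""
    emb = F.normalize(emb, dim=-1)
    centroid = F.normalize(emb.mean(dim=0), dim=0)
    dist = 1.0 - emb @ centroid                 # cosine distance to centroid
    return dist <= dist.mean() + k * dist.std()

def robust_loss(base_loss, emb, weight=0.1):
    # Penalize representations that drift far from the population centroid,
    # limiting the leverage any single poisoned profile can exert.
    emb = F.normalize(emb, dim=-1)
    centroid = F.normalize(emb.mean(dim=0), dim=0).detach()
    penalty = (1.0 - emb @ centroid).mean()
    return base_loss + weight * penalty

# Usage: drop flagged users before computing the training loss.
user_emb = torch.randn(100, 16)
clean_user_emb = user_emb[anomaly_mask(user_emb)]
```

The same mask can be applied to item representations to filter injected malicious items before they influence training.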
Conclusion
In conclusion, CLeaR is a white-box implementation of CL for recommender systems that addresses the worst-case scenario of poisoning attacks. By incorporating robustness measures into the bi-level optimization framework, it mitigates these attacks while preserving the model’s generalization ability. With CLeaR, recommendation quality is far less likely to be compromised by malicious manipulation, yielding a more reliable and secure recommender system for users.