Camouflaged Poisoning Attack on Graph Neural Networks


In this article, the authors examine how link prediction models for graph neural networks (GNNs) hold up under a camouflaged poisoning attack. They evaluate on four benchmark datasets: Cora, CiteSeer, CS, and Physics. Each dataset is a graph of nodes and edges; in the citation networks (Cora and CiteSeer), nodes represent papers, edges represent citation relations, and node features are word vectors, while CS and Physics are co-authorship networks. Link prediction models are trained on these graphs to predict missing links.
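As a concrete illustration (not taken from the paper, which does not specify its tooling), here is a minimal sketch of loading two of these benchmarks with PyTorch Geometric and holding out edges for link prediction. The `Planetoid`, `Coauthor`, and `RandomLinkSplit` names are real PyTorch Geometric APIs; the split fractions are arbitrary choices for the sketch.

```python
# Minimal sketch, assuming PyTorch Geometric as the data library.
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid, Coauthor

# Cora and CiteSeer are citation networks; CS and Physics come from
# the Coauthor (co-authorship) collection.
cora = Planetoid(root="data/Planetoid", name="Cora")[0]
cs = Coauthor(root="data/Coauthor", name="CS")[0]

# Hold out a portion of edges as link-prediction targets: the model
# trains on the remaining graph and must predict the held-out links.
split = T.RandomLinkSplit(num_val=0.05, num_test=0.10,
                          is_undirected=True,
                          add_negative_train_samples=True)
train_data, val_data, test_data = split(cora)
print(train_data)
```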
To evaluate the performance of the GNNs, the authors use two metrics: the average number of triggered nodes (ATN) and the average number of correctly predicted links (ACPL). ATN measures how many nodes, on average, are affected by the trigger node, while ACPL counts how many links the model still predicts correctly. They also propose a new evaluation metric, the "link prediction gain" (LPG), which measures the difference between the number of links the model predicts and the number of links that actually exist.
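These metrics are specific to the paper, so the following is only a rough sketch of how such counts might be computed over binary link predictions for a set of candidate node pairs. The function name, variable names, and exact formulas are illustrative guesses, not the authors' definitions.

```python
import numpy as np

def link_metrics(pred_links: np.ndarray, true_links: np.ndarray) -> dict:
    """Toy metric sketch over 0-1 link indicators (1 = link exists).

    pred_links / true_links: arrays of shape (num_candidate_pairs,).
    The names and formulas are assumptions for illustration.
    """
    # Links the model predicts that really do exist.
    correctly_predicted = int(np.sum((pred_links == 1) & (true_links == 1)))
    # An LPG-style quantity: predicted link count minus actual link count.
    lpg = int(np.sum(pred_links)) - int(np.sum(true_links))
    return {"correct_links": correctly_predicted, "lpg": lpg}

pred = np.array([1, 0, 1, 1, 0])
true = np.array([1, 0, 0, 1, 1])
print(link_metrics(pred, true))  # {'correct_links': 2, 'lpg': 0}
```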
The authors also discuss how to choose the parameter k, which selects the k feature positions with the smallest frequency of "1" across the features of all nodes. A trigger node carrying a large number of active features would stand out; setting k to about 1% of the total number of feature dimensions keeps the trigger node inconspicuous.
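Here is a minimal sketch of that selection step, assuming binary (bag-of-words) node features stored in a NumPy matrix. The helper name is hypothetical; the logic follows the description above: count how often each feature position is set to 1 and keep the k rarest positions.

```python
import numpy as np

def least_frequent_positions(features: np.ndarray,
                             frac: float = 0.01) -> np.ndarray:
    """Pick the k feature positions where '1' appears least often.

    features: binary matrix of shape (num_nodes, num_dims).
    frac: fraction of dimensions used for k (1%, as suggested above).
    Placing the trigger node's features only on these rarely-used
    positions is what keeps the injected node hard to notice.
    """
    num_dims = features.shape[1]
    k = max(1, int(frac * num_dims))          # k = 1% of feature dimensions
    ones_per_position = features.sum(axis=0)  # frequency of '1' per position
    return np.argsort(ones_per_position)[:k]  # k least-frequent positions

rng = np.random.default_rng(0)
X = (rng.random((500, 1433)) < 0.05).astype(int)  # Cora-sized toy features
print(least_frequent_positions(X))  # indices of the ~14 rarest positions
```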
Overall, this article offers an accessible look at how poisoning attacks on link prediction models for graph neural networks are evaluated, and it underscores the importance of choosing an appropriate value for the parameter k so that trigger nodes carrying many features remain hard to detect.