In this article, we present a new clustering algorithm that tolerates a small amount of adversarial corruption in the samples. The algorithm is designed to recover an accurate refinement of the ground-truth clustering even when the underlying mixture satisfies only information-theoretically optimal separation. We guarantee that the output clusters enjoy a mean-separation guarantee comparable to the one at the distributional level, and that they are 90% close to a refinement of the ground-truth clustering.
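To make the separation notion concrete, here is one standard formalization; it is our own illustrative assumption (a uniform mixture of $k$ spherical Gaussians), not a restatement of this article's exact model. For components $\mathcal{N}(\mu_1, I_d), \dots, \mathcal{N}(\mu_k, I_d)$, accurate clustering is information-theoretically possible essentially when
\[
\min_{i \neq j} \|\mu_i - \mu_j\|_2 \;\ge\; C \sqrt{\log k}
\]
for a sufficiently large constant $C$, and separation of order $\sqrt{\log k}$ is also known to be necessary in this setting. A mean-separation guarantee on the output clusters asserts that their empirical means satisfy a bound of the same form.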
To get an intuition for how the algorithm works, imagine a bag of mixed candy in which each piece represents a sample in the dataset. The different candy types, mixed together in different proportions, play the role of the mixture components. Our algorithm takes this bag and separates the pieces into distinct groups, or clusters, based on their similarities and differences. We ensure that these groups form an accurate refinement of the ground-truth clustering even when an adversary has deliberately swapped out a few of the pieces.
The key to our algorithm's success is its ability to tolerate a small amount of adversarial corruption: an attacker may inspect the input and arbitrarily edit a small fraction of the samples, yet this does not meaningfully affect the accuracy of the clustering. Concretely, the output comes with a mean-separation guarantee similar to the one at the distributional level, which ensures that the clusters remain well separated even after accounting for the corrupted samples.
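The short sketch below illustrates the corruption model just described and a simple way to measure clustering accuracy against the ground truth. It is a toy illustration only, not the algorithm from this article: the distance-from-median filter, the 2-means step, and all parameter choices (dimension, separation, corruption fraction) are our own assumptions.

```python
# Toy illustration of the corruption model (NOT the article's algorithm):
# an adversary inspects the samples and arbitrarily rewrites a small
# fraction of them; we then filter obvious outliers and run plain 2-means.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 1000, 10, 0.05            # samples, dimension, corruption fraction

# Ground truth: a uniform mixture of two well-separated spherical Gaussians.
mu = np.stack([np.zeros(d), 20.0 * np.ones(d) / np.sqrt(d)])
labels = rng.integers(0, 2, size=n)
X = mu[labels] + rng.standard_normal((n, d))

# Adversary: after seeing the data, overwrite an eps-fraction with arbitrary points.
bad = rng.choice(n, size=int(eps * n), replace=False)
X[bad] = 100.0 * rng.standard_normal((len(bad), d))

# Crude robustness step: drop the points farthest from the coordinate-wise median.
dist = np.linalg.norm(X - np.median(X, axis=0), axis=1)
keep = dist <= np.quantile(dist, 1 - 2 * eps)
Y = X[keep]

# Plain 2-means (Lloyd's iterations) with farthest-point initialization.
centers = np.stack([Y[0], Y[np.argmax(np.linalg.norm(Y - Y[0], axis=1))]])
for _ in range(20):
    assign = np.argmin(((Y[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.stack([Y[assign == c].mean(axis=0) for c in range(2)])

# Accuracy on the retained uncorrupted samples, up to relabeling of the clusters.
kept_idx = np.flatnonzero(keep)
clean = ~np.isin(kept_idx, bad)
true, pred = labels[kept_idx][clean], assign[clean]
acc = max(np.mean(pred == true), np.mean(pred == 1 - true))
print(f"accuracy on clean retained samples: {acc:.2%}")
```

On this easy toy instance the filter removes essentially all of the corrupted points, so the reported accuracy reflects how well the surviving samples are grouped; the guarantee discussed in this article is of the same flavor but holds under much weaker separation than the exaggerated one used here.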
In summary, our algorithm produces an accurate refinement of the ground-truth clustering even in the presence of a small amount of adversarial corruption. Because the output clusters satisfy a mean-separation guarantee comparable to the one at the distributional level, they remain well separated and faithfully represented, making the algorithm a useful tool for clustering tasks that must withstand adversarial contamination.