In this article, we explore a novel approach to the challenge of test-time adaptation in machine learning. Traditional methods adapt either the model itself or the input data, but each has limitations: updating the model on unlabeled test data risks error accumulation, while adapting only the inputs leaves the model unchanged. Our proposed method instead leverages a diffusion model trained on the source data to generate pseudo-labels for self-training, strengthening the model's generalization at test time without overfitting. Because the diffusion model captures the source distribution, it yields more accurate pseudo-labels and hence more effective adaptation.
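To make the idea concrete, here is a minimal sketch of one diffusion-driven self-training step. This is not the authors' exact algorithm: the `classifier` and `diffusion` objects and the `diffusion.purify` interface are hypothetical placeholders, and the confidence-thresholding and multi-view averaging are illustrative design choices.

```python
import torch
import torch.nn.functional as F

def adapt_on_batch(classifier, diffusion, x_test, optimizer,
                   conf_threshold=0.9, n_views=4):
    """One self-training step on a batch of (possibly corrupted) test inputs.

    The diffusion model, trained only on clean source data, maps each test
    image back toward the source distribution; averaged predictions on these
    "purified" views serve as pseudo-labels for adapting the classifier.
    """
    classifier.eval()
    with torch.no_grad():
        # Project the corrupted inputs toward the source domain several times
        # and average the classifier's softmax outputs over the purified views.
        probs = torch.stack([
            F.softmax(classifier(diffusion.purify(x_test)), dim=-1)
            for _ in range(n_views)
        ]).mean(dim=0)
        conf, pseudo_labels = probs.max(dim=-1)
        keep = conf > conf_threshold  # only trust confident pseudo-labels

    classifier.train()
    if keep.any():
        # Self-training: fit the classifier to its own confident predictions
        # on the original (unpurified) test inputs.
        logits = classifier(x_test[keep])
        loss = F.cross_entropy(logits, pseudo_labels[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return pseudo_labels, keep
```

Filtering by confidence is one common way to keep self-training from reinforcing its own mistakes; the paper's actual safeguards against overfitting may differ.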
We demonstrate the effectiveness of our approach through experiments on CIFAR-10-C, outperforming the strongest baseline by an average of 1.7% across 15 diverse corruptions and surpassing the strongest input-adaptation baseline by an average of 18%. The method adapts effectively across corruption types, remaining robust in harder settings such as additive noise and mixed corruptions, and for source models trained with label smoothing.
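For reference, the evaluation protocol over the 15 standard CIFAR-10-C corruption types can be sketched as below. The corruption list is the standard benchmark set; `load_cifar10c` and `adapt_and_evaluate` are placeholder helpers standing in for the dataset loader and the adaptation-plus-accuracy routine, not a real API.

```python
# The 15 corruption types of the CIFAR-10-C benchmark
# (Hendrycks & Dietterich, 2019).
CORRUPTIONS = [
    "gaussian_noise", "shot_noise", "impulse_noise",
    "defocus_blur", "glass_blur", "motion_blur", "zoom_blur",
    "snow", "frost", "fog", "brightness",
    "contrast", "elastic_transform", "pixelate", "jpeg_compression",
]

def benchmark(classifier, diffusion, severity=5):
    """Run test-time adaptation on each corruption and report accuracies."""
    accuracies = {}
    for corruption in CORRUPTIONS:
        x, y = load_cifar10c(corruption, severity)    # corrupted test split
        accuracies[corruption] = adapt_and_evaluate(  # TTA, then accuracy
            classifier, diffusion, x, y)
    mean_acc = sum(accuracies.values()) / len(accuracies)
    return accuracies, mean_acc
```

Averaging accuracy over all 15 corruptions at a fixed severity is the convention behind headline numbers like the 1.7% and 18% gains quoted above.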
Our approach is analogous to a radio station adapting its playlist on air: a recommendation model trained on the station's large archive of past programs (the source data) suggests candidate tracks (the pseudo-labels), which the programmers use to adjust the playlist during the broadcast to match listeners' current preferences, without rebuilding the archive from scratch.
In summary, our method uses a diffusion model trained on the source data to generate pseudo-labels for self-training, enabling effective test-time adaptation without overfitting and improving performance across a wide range of corruptions.