In this article, we explore a limitation of generative models, specifically diffusion models: their vulnerability to corruptions. We conducted a series of experiments at different levels of corruption to measure how severely these models' performance degrades. Our findings lead to three key conclusions:
Conclusion I: Diffusion models are sensitive to corruptions, and their performance degrades significantly as the level of corruption increases.
Metaphor: Imagine a delicate flower blooming in a greenhouse. The model's performance is like the flower's beauty: even a small amount of pollution (corruption) can easily damage it.
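To make this sensitivity concrete, the sketch below sweeps a corruption severity over a set of generated samples and records a quality metric at each level. The article does not specify the corruption family or the metric, so additive Gaussian noise and the caller-supplied `metric` are illustrative assumptions, not the study's actual setup:

```python
# Minimal sketch of a corruption-severity sweep (assumed setup, not the
# article's exact protocol): corrupt generated samples at increasing
# severity and record a quality metric at each level.
import numpy as np

def corrupt(images: np.ndarray, severity: float, rng=None) -> np.ndarray:
    """Add Gaussian noise whose standard deviation scales with severity."""
    rng = rng or np.random.default_rng(0)
    noisy = images + rng.normal(scale=severity, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixel values in [0, 1]

def degradation_curve(samples, metric, severities):
    """Evaluate `metric` (e.g. FID against a clean reference set)
    on progressively corrupted copies of `samples`."""
    return {s: metric(corrupt(samples, s)) for s in severities}

# Example: curve = degradation_curve(samples, metric, [0.05, 0.1, 0.2, 0.4])
```

Plotting the resulting curve makes the degradation trend of Conclusion I visible at a glance.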
Conclusion II: We discovered that the severity (λ) of the corruption has a strong effect on how quickly the model degrades, and we identified a specific threshold (δ) beyond which the model's performance drops sharply.
Analogy: Think of λ as the level of water being poured into a bucket, and δ as the height of a hole in its side. While the water stays below the hole (λ < δ), the bucket holds it; once the level rises past the hole (λ > δ), water escapes rapidly, just as the model's performance collapses past the threshold.
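Building on the sweep above, one plausible way to estimate δ (an assumption on our part; the article does not describe its estimation procedure) is to locate where the degradation curve's discrete slope is steepest:

```python
# Speculative threshold estimator: pick the severity just before the
# largest jump in the metric. For FID, higher is worse, so the largest
# jump marks the sharp performance drop.
def estimate_threshold(curve: dict[float, float]) -> float:
    """Return the severity preceding the steepest rise in the metric."""
    severities = sorted(curve)
    slopes = [(curve[b] - curve[a]) / (b - a)
              for a, b in zip(severities, severities[1:])]
    steepest = max(range(len(slopes)), key=slopes.__getitem__)
    return severities[steepest]
```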
Conclusion III: We propose a new metric, FID-C, which accounts for both the corruption and the novelty aspects of a model's outputs. Such a metric is essential for evaluating generative models' effectiveness in the presence of corruptions.
Simile: Imagine a chef tasting a dish and grading it on both its flavor (novelty) and its texture (corruption). Our proposed metric is like a multi-dimensional rubric that assesses the dish's overall quality, covering both desirable and undesirable traits.
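The article does not give FID-C's formula, so the sketch below should be read as speculative: it implements the standard Fréchet Inception Distance over precomputed feature embeddings, then blends a clean-reference FID with a corrupted-reference FID via an assumed weight `alpha` to mimic a metric that scores both aspects at once:

```python
# Standard FID over feature embeddings, plus a *speculative* FID-C blend;
# the weighting scheme and `alpha` are assumptions, not the article's formula.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets
    (rows are Inception-style embeddings of images)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    cov_mean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(cov_mean):  # discard tiny imaginary parts from sqrtm
        cov_mean = cov_mean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * cov_mean))

def fid_c(gen_feats, clean_feats, corrupted_feats, alpha=0.5):
    """Hypothetical blend: fidelity to clean data plus distance from the
    corrupted distribution, weighted by alpha."""
    return (alpha * fid(gen_feats, clean_feats)
            + (1.0 - alpha) * fid(gen_feats, corrupted_feats))
```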
In conclusion, our study sheds light on the vulnerabilities of diffusion models and highlights the need for an evaluation metric tailored to both the corruption and novelty aspects of generative performance. By understanding these limitations, we can improve the robustness of these models and build more reliable generative AI systems.
Computer Science, Machine Learning