In this research paper, the authors investigate the problem of catastrophic forgetting in neural networks, which occurs when a model trained sequentially on a series of tasks loses performance on earlier tasks as it learns new ones. They propose a new method called Continual Learning with Dynamic Classifiers (CL-DC), which helps the model retain previous tasks while learning new ones.
The authors explain that traditional methods for avoiding catastrophic forgetting, such as adding a regularization term to the loss function, can be ineffective because they do not account for how the data distribution changes from task to task. CL-DC instead uses a dynamic buffer of classifiers to store information from previous tasks and adapt it to new ones, allowing the model to learn new tasks while preserving the knowledge gained from earlier ones.
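To make the idea of a buffer of classifiers concrete, here is a minimal sketch, assuming a shared feature extractor with one classifier head kept per task. The class and method names (DynamicClassifierBuffer, add_task) are illustrative placeholders and do not reflect the paper's exact CL-DC algorithm.

```python
# Illustrative sketch only: a shared backbone with a growing buffer of
# per-task classifier heads. Names are hypothetical, not from the paper.
import torch
import torch.nn as nn


class DynamicClassifierBuffer(nn.Module):
    def __init__(self, backbone: nn.Module, feature_dim: int):
        super().__init__()
        self.backbone = backbone          # shared feature extractor
        self.feature_dim = feature_dim
        self.heads = nn.ModuleList()      # one classifier head per task

    def add_task(self, num_classes: int) -> int:
        """Append a fresh classifier head for a new task; old heads are kept."""
        self.heads.append(nn.Linear(self.feature_dim, num_classes))
        return len(self.heads) - 1        # index of the new head

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        features = self.backbone(x)
        return self.heads[task_id](features)


# Usage: a new head is trained on the incoming task while earlier heads
# stay in the buffer and remain available for evaluation.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
model = DynamicClassifierBuffer(backbone, feature_dim=128)
t0 = model.add_task(num_classes=10)
logits = model(torch.randn(4, 1, 28, 28), task_id=t0)
print(logits.shape)  # torch.Size([4, 10])
```

In such a setup, knowledge from earlier tasks is stored in the retained heads rather than only in a regularization term on the shared weights.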
The authors demonstrate the effectiveness of their method by applying it to six state-of-the-art online continual learning methods: ER, DER++, ER-ACE, OCM, GSA, and OnPro. They show that each of these methods, when equipped with CL-DC, significantly outperforms its original version on a variety of benchmark datasets.
The authors also provide an intuitive explanation for why their method works using the concept of "forgetting curves." They show that once a model moves on to a new task, its performance on the earlier task is initially high but decays over time as the old information is overwritten. By using a dynamic buffer of classifiers, CL-DC flattens this decay and preserves the old information, leading to better performance on both old and new tasks.
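As a rough illustration of how a forgetting curve can be measured in practice, the sketch below records accuracy on an earlier task at fixed intervals while the model trains on a new one. The function names, loaders, and interval are hypothetical and not taken from the paper.

```python
# Illustrative sketch: track old-task accuracy while training on a new task.
import torch
import torch.nn as nn


@torch.no_grad()
def accuracy(model: nn.Module, loader) -> float:
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)


def forgetting_curve(model, old_task_loader, new_task_loader,
                     optimizer, eval_every: int = 100):
    """Train on the new task and log old-task accuracy as training proceeds."""
    criterion = nn.CrossEntropyLoss()
    curve, step = [], 0
    for x, y in new_task_loader:
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        step += 1
        if step % eval_every == 0:
            curve.append((step, accuracy(model, old_task_loader)))
    return curve  # typically decays without a continual-learning mechanism
```

Plotting the returned (step, accuracy) pairs gives the forgetting curve the authors use to illustrate how quickly old-task performance erodes.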
In summary, the authors propose a new continual learning method, CL-DC, that helps neural networks remember previous tasks while learning new ones. They demonstrate its effectiveness through experiments on several benchmark datasets and offer an intuitive, forgetting-curve-based explanation for why it works.