In this paper, the authors propose a new approach to the stability-plasticity dilemma in continual reinforcement learning (RL): an agent must preserve previously acquired knowledge while remaining able to adapt to new situations. To address this, the authors decompose the agent's value function estimate into two components: a permanent component that slowly accumulates general knowledge over time, and a transient component that quickly learns local nuances but eventually forgets them.
The permanent component, denoted V_P, is designed to slowly acquire knowledge from the entire distribution of experience the agent is exposed to over time, much as the brain gradually consolidates memories as new experiences arrive. The transient component, denoted V_T, quickly picks up local nuances and forgets them after some time, akin to how we adapt skills to changing situations.
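This description suggests a two-timescale scheme. As an illustrative sketch (our notation, not necessarily the authors' exact update rules), the transient component could follow a fast TD update while the permanent component slowly consolidates the combined estimate, writing $V_{PT}(s) = V_P(s) + V_T(s)$ for the combined value function described below:

$$V_T(s) \leftarrow V_T(s) + \alpha \big[ r + \gamma V_{PT}(s') - V_{PT}(s) \big], \qquad V_P(s) \leftarrow V_P(s) + \bar{\alpha} \big[ V_{PT}(s) - V_P(s) \big],$$

with a fast step size $\alpha$ and a much slower $\bar{\alpha}$, after which $V_T$ is periodically decayed or reset so that local details are eventually forgotten.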
The two components are combined through an additive decomposition, V_PT(s) = V_P(s) + V_T(s), which yields the overall value function the agent acts on. The authors define the action-value function Q_PT analogously, so that both the permanent and transient components contribute to action selection.
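To make the decomposition concrete, here is a minimal tabular sketch in Python. This is our illustration of the idea, not the authors' implementation; the class name, step sizes, and the consolidate-then-reset schedule are assumptions.

```python
import random
from collections import defaultdict

class PermanentTransientQ:
    """Illustrative tabular agent with a permanent and a transient Q-table."""

    def __init__(self, actions, alpha_fast=0.5, alpha_slow=0.05, gamma=0.99):
        self.q_perm = defaultdict(float)   # Q_P: slow, accumulates general knowledge
        self.q_trans = defaultdict(float)  # Q_T: fast, tracks local nuances
        self.actions = actions
        self.alpha_fast = alpha_fast
        self.alpha_slow = alpha_slow
        self.gamma = gamma

    def q(self, s, a):
        # Additive decomposition: Q_PT(s, a) = Q_P(s, a) + Q_T(s, a)
        return self.q_perm[(s, a)] + self.q_trans[(s, a)]

    def act(self, s, eps=0.1):
        # Epsilon-greedy action selection on the combined estimate
        if random.random() < eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q(s, a))

    def update_transient(self, s, a, r, s_next, done):
        # Fast TD update applied to the transient component only;
        # the bootstrap target uses the combined estimate Q_PT.
        target = r if done else r + self.gamma * max(
            self.q(s_next, b) for b in self.actions
        )
        self.q_trans[(s, a)] += self.alpha_fast * (target - self.q(s, a))

    def consolidate(self):
        # Slowly fold what the transient table learned into the permanent
        # table, then reset the transient part so it can track the next task.
        for key in list(self.q_trans):
            self.q_perm[key] += self.alpha_slow * self.q_trans[key]
            self.q_trans[key] = 0.0
```

Acting greedily with respect to Q_PT lets the transient table adapt quickly within a task, while `consolidate()` (called, e.g., between tasks) moves the stable part of that knowledge into the permanent table.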
The proposed approach is simple, online, and model-free, making it applicable to a wide range of RL problems. It can also be combined with other advances in the field, such as new optimizers or experience replay. The authors demonstrate the effectiveness of their approach through experiments on deep reinforcement learning tasks.
In summary, this paper presents a solution to the stability-plasticity dilemma in continual RL by decomposing the value function into permanent and transient components. The decomposition lets the agent retain general knowledge while adapting quickly to local changes, leading to improved continual learning performance across a variety of RL problems.