Bridging the gap between complex scientific research and the curious minds eager to explore it.

Computer Science, Neural and Evolutionary Computing

Improving Neural Network Performance with Non-Causal Terms


In this article, we propose a new learning rule called S-TLLR (STDP-inspired Temporal Local Learning Rule) that reduces the memory cost of training deep neural networks on temporal tasks. Traditional temporal learning rules require a large amount of memory to store and update an eligibility trace for every synapse at every time step, which slows down training and limits the use of such rules in memory-constrained applications. S-TLLR addresses this issue by dropping the recurrent term of the eligibility trace and keeping only the instantaneous term, giving the learning rule a low memory complexity of O(n).
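To make the memory argument concrete, here is a minimal NumPy sketch contrasting a conventional per-synapse eligibility trace, which carries a recurrent term across time steps, with an instantaneous-only variant like the one described above. The names and sizes (x_seq, y_seq, learn_sig, alpha, lr) are hypothetical placeholders, and the update e = alpha * e + outer(y_t, x_t) is a generic eligibility-trace rule used for illustration, not the paper's exact equations.

```python
import numpy as np

# Hypothetical sizes for a single layer: n_pre inputs, n_post outputs, T time steps.
n_pre, n_post, T = 64, 32, 100
rng = np.random.default_rng(0)
x_seq = rng.standard_normal((T, n_pre))       # presynaptic activity per step
y_seq = rng.standard_normal((T, n_post))      # postsynaptic activity per step
learn_sig = rng.standard_normal((T, n_post))  # per-step learning signal (e.g., error feedback)

def train_with_recurrent_trace(W, alpha=0.9, lr=1e-3):
    """Conventional eligibility trace: e <- alpha * e + outer(y_t, x_t).
    A per-synapse trace of shape (n_post, n_pre) must be kept across time steps."""
    e = np.zeros((n_post, n_pre))                     # O(n^2) extra state
    for t in range(T):
        e = alpha * e + np.outer(y_seq[t], x_seq[t])  # recurrent accumulation
        W += lr * learn_sig[t][:, None] * e           # modulated local update
    return W

def train_instantaneous_only(W, lr=1e-3):
    """Recurrent term dropped: the trace at step t is just outer(y_t, x_t),
    so only the current activity vectors (O(n) values) are needed at each step."""
    for t in range(T):
        e_t = np.outer(y_seq[t], x_seq[t])            # depends only on current input/output
        W += lr * learn_sig[t][:, None] * e_t
    return W

W_recurrent = train_with_recurrent_trace(np.zeros((n_post, n_pre)))
W_instant = train_instantaneous_only(np.zeros((n_post, n_pre)))
```

The only difference between the two loops is the persistent trace matrix e; removing it is what brings the per-step learning state down from per-synapse to per-neuron.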
In the proposed method, the eligibility trace is computed purely as a function of the current inputs and outputs, so no per-synapse state has to be carried across time steps. On top of this instantaneous trace we add non-causal terms, which improve the performance of the network without increasing the memory complexity. The non-causal term acts as a regularizer that allows a better exploration of the weight space, leading to improved network performance.
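The sketch below shows one plausible way such a non-causal term could be added to the instantaneous update: a cheap per-neuron trace of past postsynaptic activity is paired with the current presynaptic activity (a post-before-pre pairing) and scaled by a coefficient beta. The exact non-causal term used in S-TLLR may differ; beta, decay, and y_trace are assumptions made for this illustration. Note that the extra state is one value per neuron, so the memory complexity stays at O(n).

```python
import numpy as np

# Illustrative only: an assumed form of a non-causal (post-before-pre) contribution.
n_pre, n_post, T = 64, 32, 100
rng = np.random.default_rng(1)
x_seq = rng.standard_normal((T, n_pre))
y_seq = rng.standard_normal((T, n_post))
learn_sig = rng.standard_normal((T, n_post))

def train_with_non_causal_term(W, beta=0.5, decay=0.8, lr=1e-3):
    """Causal term: current post activity paired with current pre activity.
    Non-causal term (assumed form): a decayed trace of past post activity paired
    with current pre activity, scaled by beta, acting like a regularizer."""
    y_trace = np.zeros(n_post)                    # O(n) state: one value per neuron
    for t in range(T):
        causal = np.outer(y_seq[t], x_seq[t])
        non_causal = np.outer(y_trace, x_seq[t])  # pairs earlier post with current pre
        e_t = causal + beta * non_causal
        W += lr * learn_sig[t][:, None] * e_t
        y_trace = decay * y_trace + y_seq[t]      # cheap per-neuron trace, still O(n)
    return W

W = train_with_non_causal_term(np.zeros((n_post, n_pre)))
```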
We demonstrate the effectiveness of S-TLLR through experiments on several datasets and compare it with existing learning rules from the literature. Our results show that S-TLLR outperforms these methods both in task performance and in memory requirements. We also provide ablation studies showing that the improvements are due to the non-causal terms.
In summary, S-TLLR is a low-memory learning rule for deep neural networks that improves performance without increasing memory complexity. By combining an instantaneous eligibility trace with non-causal regularization terms, it makes learning on temporal data practical for a wider range of applications.