In this article, we explore the idea of using deterministic learning to improve the efficiency of memristor-based inference. Traditional methods of training memristor models pass over a large number of samples, which is computationally expensive and does not always yield accurate results. Deterministic learning, by contrast, trains memristor models in a more targeted manner, resulting in faster and more accurate inference.
Thinning Strategy
To achieve deterministic learning, we incorporate an online thinning strategy: after a model sample is accepted at index n, the algorithm continues to generate new model proposals on each successive mini-batch of data while remaining at the same sample index n. Rather than advancing immediately to the next sample index, we leave the current sample encoded by a collection of memristors that can be reused later during inference.
Downcounter
To ensure that only the most recent samples are used for inference, we maintain a downcounter. It is initialised to a fixed value and decremented by one after each mini-batch. When the downcounter reaches zero, the most recently proposed model is stored definitively at the corresponding sample index, and the algorithm proceeds to generate new models at the next index.
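The thinning-with-downcounter mechanism can be sketched as follows. This is a minimal illustration, not the paper's implementation: `propose_model` and `accept_prob` are hypothetical stand-ins for the sampler's proposal and acceptance steps, which in the real system operate on memristor conductance states.

```python
import random

def online_thinning(mini_batches, thin_interval, propose_model, accept_prob):
    """Sketch of online thinning: keep proposing at the current sample
    index on each mini-batch, and only store a model (and advance)
    when the downcounter reaches zero."""
    stored = []                       # samples retained for inference
    current = propose_model(None)     # initial model state
    downcounter = thin_interval
    for batch in mini_batches:
        proposal = propose_model(current)
        # accept or reject the proposal against the current mini-batch
        if random.random() < accept_prob(current, proposal, batch):
            current = proposal
        downcounter -= 1
        if downcounter == 0:
            # store the most recently proposed model at this sample
            # index, then move on to generating new models
            stored.append(current)
            downcounter = thin_interval
    return stored
```

With a thinning interval of k, one model is committed per k mini-batches, so the stored collection stays small while still reflecting the most recent proposals.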
Inference
Inference computes the average softmax distribution over all stored samples and assigns the classification with the peak mean probability. We use the same set of hyperparameters for both the domain-wall memristor-based and floating-point models during training.
Conclusion
In conclusion, deterministic learning offers a promising approach to improving the efficiency of memristor-based inference. By retaining only the most recent samples and using only the most relevant information, we can significantly reduce computational cost without compromising accuracy. Our proposed method has important implications for real-world applications where speed and accuracy are crucial, such as image recognition and natural language processing.