
Computer Science, Machine Learning

Deep Learning for Constrained Optimization: A Survey


In this paper, we propose a new framework for training neural networks to solve Integer Programming (IP) problems. Our approach uses gradient descent to optimize the model’s performance while encouraging constraint satisfaction. We introduce the concept of a “surrogate gradient,” which allows us to compute the gradient of the objective function with respect to the model parameters even when the constraints are not convex.
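To make the idea concrete, the sketch below shows one way such a surrogate could look in practice: a hard capacity constraint is replaced by a differentiable penalty on its violation, so gradient descent can still update the network. All names here (SurrogateNet, training_step, mu, the knapsack-style constraint) are illustrative assumptions, not the exact formulation used in the paper.

```python
# Sketch of one training step with a differentiable constraint penalty
# (illustrative only; the paper's surrogate gradient may differ).
import torch
import torch.nn as nn

class SurrogateNet(nn.Module):
    """Maps an IP instance encoding to relaxed (fractional) decision variables."""
    def __init__(self, n_items: int, hidden: int = 128):
        super().__init__()
        # Input: item values, item weights, and the capacity, concatenated.
        self.net = nn.Sequential(
            nn.Linear(2 * n_items + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_items), nn.Sigmoid(),  # outputs in [0, 1]
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

def training_step(model, optimizer, features, values, weights, capacity, mu=1.0):
    # Relaxed decisions x in [0, 1]^n; they are rounded to integers only at decode time.
    x = model(features)
    # Objective: maximize total value  ->  minimize its negation.
    objective = -(values * x).sum(dim=-1).mean()
    # Surrogate for the capacity constraint: penalize positive violation,
    # which keeps the loss differentiable even though the true constraint is not.
    violation = torch.relu((weights * x).sum(dim=-1) - capacity).mean()
    loss = objective + mu * violation
    optimizer.zero_grad()
    loss.backward()   # gradients flow through the surrogate penalty
    optimizer.step()
    return loss.item()
```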
To decode the model outputs, we use a greedy algorithm that guarantees the capacity constraint is satisfied. In general, however, correcting infeasible solutions may not be straightforward, and doing so can be computationally expensive. We therefore also focus on a minimal decoding scheme that simply rounds the outputs, in order to investigate performance under minimal post-processing.
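The contrast between the two decoders can be illustrated with a small sketch for a knapsack-style capacity constraint. The function names and the specific greedy order below are our own assumptions; the paper's decoders may differ in detail.

```python
# Two decoding schemes for a knapsack-style capacity constraint (illustrative).
import numpy as np

def round_decode(x_relaxed: np.ndarray) -> np.ndarray:
    """Minimal post-processing: round each fractional output to 0/1."""
    return (x_relaxed >= 0.5).astype(int)

def greedy_decode(x_relaxed, weights, capacity):
    """Greedy repair: add items in order of model confidence while the
    capacity constraint remains satisfied, guaranteeing feasibility."""
    order = np.argsort(-x_relaxed)          # most confident items first
    x, used = np.zeros_like(x_relaxed, dtype=int), 0.0
    for i in order:
        if x_relaxed[i] >= 0.5 and used + weights[i] <= capacity:
            x[i], used = 1, used + weights[i]
    return x

# Example: the rounded solution may violate capacity; the greedy one cannot.
x_hat = np.array([0.9, 0.8, 0.6, 0.2])
w, cap = np.array([5.0, 4.0, 3.0, 1.0]), 8.0
print(round_decode(x_hat), greedy_decode(x_hat, w, cap))
```

In this example, plain rounding selects a set of items whose total weight exceeds the capacity, while the greedy decoder stops adding items once the budget is reached and therefore always returns a feasible solution.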
We evaluate our framework against several baseline neural network models on a set of IP instances. The results show that the proposed framework outperforms the baselines in terms of constraint satisfaction, with significant improvements on some instances.
To select the best model, we propose a loss function that balances prediction optimality and constraint satisfaction. By adjusting the parameter μ, we can prioritize one over the other depending on the specific application. In practice, model selection will depend on the relative importance of these factors.
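A minimal sketch of what such a selection criterion might look like is given below. The exact functional form and the role of μ in the paper may differ, so the weighted sum here should be read as an assumption for illustration only.

```python
# Illustrative model-selection score balancing optimality and feasibility
# (the paper's actual loss may take a different form).
def selection_loss(opt_gap: float, violation_rate: float, mu: float) -> float:
    """opt_gap: mean relative gap to the optimal objective on a validation set;
    violation_rate: fraction of instances whose decoded solution is infeasible;
    mu: weight placed on constraint satisfaction."""
    return opt_gap + mu * violation_rate

# Larger mu favors models that satisfy constraints, even at some cost in optimality.
candidates = {"model_a": (0.02, 0.15), "model_b": (0.05, 0.01)}
for mu in (0.1, 1.0):
    best = min(candidates, key=lambda m: selection_loss(*candidates[m], mu))
    print(f"mu={mu}: select {best}")
```

With a small μ the criterion favors models whose objective values are closest to optimal; as μ grows, models that rarely violate constraints are preferred even if their objective values are slightly worse.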
Overall, our framework provides a practical solution for training neural networks to solve IP problems while satisfying constraints. By leveraging gradient descent and a minimal decoding scheme, we can improve performance while avoiding infeasible solutions. This work has important implications for applications where constraint satisfaction is crucial, such as resource allocation or scheduling.