In this article, we study the convergence of two Krylov subspace methods for solving linear systems: GMRES and the conjugate gradient (CG) method. We analyze their iteration counts under different choices of the parameter τ, which controls how strongly the residual is deflated. We find that the iteration counts are nearly identical in the weighted and unweighted cases, with the unweighted method requiring the fewest iterations.
We also observe that deflation accelerates the non-deflated algorithm most in the phase where its convergence is slowest, and that the deflated algorithms converge faster as more vectors are added to the deflation space. In particular, adding 10 or 100 vectors to the deflation space reduces the iteration count by roughly 50 to 250 iterations.
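To make this effect concrete, the following is a minimal sketch of a deflated CG iteration in Python (NumPy), in the style of the standard projection-based deflated CG. The test matrix, its spectrum, the tolerance, and the use of exact eigenvectors as deflation vectors are illustrative assumptions, not the setup used in this paper.

```python
import numpy as np


def deflated_cg(A, b, W=None, tol=1e-8, maxiter=10_000):
    """Conjugate gradient with an optional deflation space spanned by the columns of W.

    Returns (solution, iteration count). With W=None this reduces to plain CG.
    """
    n = b.size
    if W is None or W.shape[1] == 0:
        x, r = np.zeros(n), b.copy()
        project = lambda v: v                          # no deflation
    else:
        AW = A @ W
        E = W.T @ AW                                   # coarse (Galerkin) operator W^T A W
        solve_E = lambda v: np.linalg.solve(E, v)
        x = W @ solve_E(W.T @ b)                       # initial guess with W^T r0 = 0
        r = b - A @ x
        project = lambda v: v - W @ solve_E(AW.T @ v)  # P v = v - W E^{-1} W^T A v

    p = project(r)
    rs_old = r @ r
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            return x, k
        p = project(r) + (rs_new / rs_old) * p         # deflated search direction
        rs_old = rs_new
    return x, maxiter


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    # SPD test matrix with ten very small eigenvalues (an assumed, illustrative spectrum).
    eigenvalues = np.concatenate([np.geomspace(1e-4, 1e-2, 10),
                                  np.linspace(1.0, 10.0, n - 10)])
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag(eigenvalues) @ Q.T
    b = rng.standard_normal(n)

    for k in (0, 10, 100):
        W = Q[:, :k] if k > 0 else None                # deflate the k smallest eigenvectors
        _, iterations = deflated_cg(A, b, W)
        print(f"deflation vectors: {k:3d}   iterations: {iterations}")
```

Here W spans exact eigenvectors only because the test matrix is built from them; in practice the deflation space would be assembled from approximate (e.g. Ritz) vectors.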
To better understand these results, we plot the eigenvalue distribution in Figure 3, which shows that the magnitudes of the largest eigenvalues are scattered irregularly. This makes it difficult to select a value of τ for computing the deflation space with the formula in Definition 2. Nevertheless, we provide bounds on θ in terms of approximate condition numbers, which can guide the selection of τ.
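As a rough illustration of how such spectral information might be obtained in practice, the sketch below estimates the extreme eigenvalues of a sparse SPD matrix with a few Lanczos iterations (SciPy's eigsh) and uses the resulting approximate condition number as a reference scale for τ. The threshold rule used here (deflate eigenpairs whose eigenvalue lies below τ times the largest one) is an assumption made for illustration and is not the formula of Definition 2.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative sparse SPD matrix: a diagonal with ten small outlying eigenvalues.
diag = np.concatenate([np.geomspace(1e-4, 1e-2, 10), np.linspace(1.0, 10.0, 490)])
A = sp.diags(diag).tocsc()

# Approximate extreme eigenvalues via Lanczos (shift-invert for the smallest one).
lam_max = spla.eigsh(A, k=1, which="LA", return_eigenvectors=False)[0]
lam_min = spla.eigsh(A, k=1, sigma=0.0, which="LM", return_eigenvectors=False)[0]
print(f"approximate condition number: {lam_max / lam_min:.2e}")

# Assumed rule of thumb: deflate eigenpairs whose eigenvalue is below tau * lam_max.
tau = 1e-2
vals, vecs = spla.eigsh(A, k=20, sigma=0.0, which="LM")  # 20 smallest eigenpairs
W = vecs[:, vals < tau * lam_max]                        # candidate deflation space
print(f"tau = {tau:g} selects {W.shape[1]} deflation vectors")
```

Shift-invert gives the smallest eigenpairs at the cost of a sparse factorization; for very large systems one would instead reuse approximations produced by the Krylov iteration itself.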
In summary, our analysis shows that the choice of τ has a significant impact on the convergence of GMRES and CG. A careful choice of τ substantially reduces the number of iterations required for convergence, making both methods more efficient for solving linear systems.