Seminar Talk Series - 5

12 January 2024

Topic:

S-Step and Flexible Enlarged Krylov Subspace Conjugate Gradient Methods

Presenter:

Dr. Sophie Moufawad – Assistant Professor of Mathematics at the American University of Beirut

Date and Time:

Friday, Jan 12, 2024 at 9:00 AM

Location of event:

Online

Zoom:

https://zoom.us/j/96417417775?pwd=bEd3WmFsNlZyWGh2WXpBTVd6MUhDZz09

Biography:

Sophie M. Moufawad is an Assistant Professor of Mathematics at the American University of Beirut. She received her B.S. in Mathematics and M.S. in Computational Science from the American University of Beirut (AUB) in 2009 and 2011, respectively, and a Ph.D. in Applied Mathematics from Sorbonne Universités, France, in 2014. She did postdoctoral research at IFP Energies Nouvelles until June 2016, then joined the Mathematics Department at AUB, where she regularly teaches courses on ordinary differential equations, partial differential equations, numerical computing, and numerical linear algebra. She is an academic advisor in the Mathematics Department and has served on several departmental, faculty, and university committees. Her research interests are in Numerical Analysis, Applied Linear Algebra, Parallel Programming, High Performance Computing, and Numerical Solutions of PDEs (direct and inverse problems).

Abstract:

In [1], a new approach for reducing communication in Krylov subspace methods was introduced. It consists of enlarging the Krylov subspace by a maximum of t vectors per iteration, based on a domain decomposition of the graph of A. The approximate solutions of the system Ax = b are then sought in the enlarged Krylov subspace, which is a superset of the Krylov subspace. Several enlarged conjugate gradient versions that converge faster than CG in terms of iterations were introduced, such as MSDO-CG and SRE-CG. To further speed up the parallel runtime, the s-step enlarged CG versions [2] are presented, whereby s iterations of the enlarged CG methods are merged into one iteration by performing denser operations that require less communication when parallelized. The s-step enlarged CG methods, like the enlarged CG methods, converge faster than classical CG in terms of iterations, but require much more memory per iteration. Thus, we explore different options for reducing the memory requirements of these enlarged CG methods without much affecting their convergence. This leads to the flexible enlarged CG versions [3], where at some iteration the maximum number of introduced basis vectors is halved. Convergence results are presented.
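To make the enlargement idea concrete, the following is a minimal Python sketch, not taken from [1]: the initial residual is split into t vectors, one per subdomain, and each is repeatedly multiplied by A to build the enlarged basis. A contiguous index partition stands in for a real graph-based domain decomposition, and all names and the toy system are illustrative assumptions.

```python
import numpy as np

def split_residual(r, domains):
    """Split r into t vectors, each supported on one subdomain.

    `domains` is a list of index sets; here it is a simple contiguous
    partition standing in for a graph-based domain decomposition of A.
    """
    parts = []
    for idx in domains:
        v = np.zeros_like(r)
        v[idx] = r[idx]
        parts.append(v)
    return parts

# Toy SPD system: 1D Laplacian, n = 8, split into t = 2 subdomains.
n, t, k = 8, 2, 3
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
r0 = b.copy()  # residual for the initial guess x0 = 0

domains = np.array_split(np.arange(n), t)
split_r0 = split_residual(r0, domains)

# Enlarged Krylov basis after k steps:
# up to t new vectors per step, span{T_i(r0), A T_i(r0), ..., A^(k-1) T_i(r0)}.
basis = []
for v in split_r0:
    w = v.copy()
    for _ in range(k):
        basis.append(w.copy())
        w = A @ w

# Dimension of the enlarged subspace (at most t*k, versus k for classical CG).
print(np.linalg.matrix_rank(np.column_stack(basis)))
```

With t = 1 the splitting is the identity and the construction reduces to the usual Krylov basis, which is why the enlarged Krylov subspace is a superset of the standard one.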

References

[1] L. Grigori, S. Moufawad, and F. Nataf. Enlarged Krylov Subspace Conjugate Gradient Methods for Reducing Communication. SIAM Journal on Matrix Analysis and Applications, 37:2, pp. 744-773, 2016.

[2] S. Moufawad. S-Step Enlarged Krylov Subspace Conjugate Gradient Methods. SIAM Journal on Scientific Computing, 42:1, pp. A187-A219, 2020.

[3] S. Moufawad. Flexible Enlarged Krylov Subspace Conjugate Gradient Methods. arXiv preprint, arXiv:2305.19013, 2023.