This article describes research on translating sequential code into parallel code written in C++ and OpenACC. The work was supported by computing resources at HPCE, IIT Madras, and by grants from KLA and India’s National Supercomputing Mission, which the authors acknowledge in the article.
The article then references three works relevant to the topic of translating sequential code to parallel code:
[1] K. Alsubhi et al. (2019) present a tool for translating sequential code to parallel code written in C++ and OpenACC. The authors demonstrate the effectiveness of their tool through experimental results on various benchmarks.
[2] Nibedita Behera et al. (2023) propose StarPlat, a domain-specific language (DSL) for graph analytics. The authors describe its design and implementation, its language features, and its performance evaluation.
[3] Ulrik Brandes (2001) presents a faster algorithm for computing betweenness centrality in complex networks. The author analyzes the time complexity of the algorithm and demonstrates its effectiveness through experimental results.
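For context, the betweenness centrality of a vertex sums, over all pairs of other vertices, the fraction of shortest paths between that pair which pass through it; Brandes' algorithm computes this by counting shortest paths from every source and then accumulating dependencies in reverse order. The following plain C++ sketch illustrates that idea for unweighted, undirected graphs; the function name and the adjacency-list representation are assumptions made for this example, not taken from the article or from [3].

    #include <queue>
    #include <stack>
    #include <vector>

    // Illustrative sketch of Brandes' algorithm for unweighted, undirected graphs
    // given as an adjacency list. Names and representation are assumptions.
    std::vector<double> betweenness(const std::vector<std::vector<int>>& adj) {
        const int n = static_cast<int>(adj.size());
        std::vector<double> bc(n, 0.0);

        for (int s = 0; s < n; ++s) {
            std::vector<std::vector<int>> pred(n);   // predecessors on shortest paths from s
            std::vector<double> sigma(n, 0.0);       // number of shortest paths from s
            std::vector<int> dist(n, -1);            // BFS distance from s
            std::stack<int> order;                   // vertices pushed in non-decreasing distance from s
            sigma[s] = 1.0;
            dist[s] = 0;

            // Forward phase: breadth-first search counts shortest paths.
            std::queue<int> q;
            q.push(s);
            while (!q.empty()) {
                int v = q.front(); q.pop();
                order.push(v);
                for (int w : adj[v]) {
                    if (dist[w] < 0) {               // w discovered for the first time
                        dist[w] = dist[v] + 1;
                        q.push(w);
                    }
                    if (dist[w] == dist[v] + 1) {    // edge (v, w) lies on a shortest path
                        sigma[w] += sigma[v];
                        pred[w].push_back(v);
                    }
                }
            }

            // Backward phase: accumulate dependencies in reverse BFS order.
            std::vector<double> delta(n, 0.0);
            while (!order.empty()) {
                int w = order.top(); order.pop();
                for (int v : pred[w])
                    delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w]);
                if (w != s)
                    bc[w] += delta[w];               // for undirected graphs, halve bc[] at the end
            }
        }
        return bc;
    }

For unweighted graphs this runs in O(VE) time, the bound established in [3]; the per-source computations are independent of one another, which is what makes the algorithm a natural target for the kind of parallelization the article is concerned with.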
The article concludes by highlighting the importance of translating sequential code to parallel code to exploit the capabilities of modern hardware such as graphics processing units (GPUs). The authors note that their work builds on previous research in this area and shows how parallelization can improve the performance of originally sequential code.
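To make that point concrete, the sketch below shows the kind of transformation involved: a sequential C++ loop annotated with an OpenACC directive so that an OpenACC-capable compiler (for example, nvc++ with -acc) can run its iterations in parallel on an accelerator such as a GPU. The function, its parameters, and the directive clauses are illustrative assumptions for this summary, not code taken from the article or from the tool in [1].

    // Hypothetical example: a sequential loop made parallel with an OpenACC directive.
    void saxpy(int n, float a, const float* x, float* y) {
        // Without the pragma, this is an ordinary sequential loop on the CPU.
        // With it, the compiler parallelizes the iterations and, per the data
        // clauses, copies x to the device and copies y both in and out.
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

Because the directive leaves the sequential loop structure intact, source-to-source translators of the kind described in the article and in [1] can emit such annotations rather than rewriting the program in a separate parallel language.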
In summary, the article surveys recent research on translating sequential code to parallel code written in C++ and OpenACC, carried out with computing resources at HPCE, IIT Madras, and support from KLA and India’s National Supercomputing Mission. The works it references underline the importance of parallelizing sequential code to take advantage of modern computing hardware.