Experiences of the GPU Thread Configuration and Shared Memory


  •   DaeHwan Kim

Abstract

Nowadays, GPU processors are widely used for general-purpose parallel computing applications. In GPU programming, the thread and block configuration is one of the most important decisions to be made, as it increases parallelism and hides instruction latency. In many cases, however, it is difficult to obtain enough parallelism to hide all the latencies, which are often caused by global memory accesses. To reduce the number of such accesses, shared memory is used instead; located on chip, it is much faster than global memory. The performance of the proposed thread configuration is evaluated on the GTX 960 GPU. The experimental results show that the best configuration improves performance by 7.3 times compared with the worst configuration in the experiment. Experiences with shared memory performance, compared to that of global memory, are also discussed.
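
To make the two ideas in the abstract concrete, the following is a minimal CUDA sketch, not taken from the paper itself: a tiled matrix multiplication whose <<<grid, block>>> launch parameters set the thread/block configuration and whose __shared__ tiles reduce global memory reads. The kernel name, tile width, and matrix size are illustrative assumptions, not values reported in the paper.

```cuda
// Minimal sketch: thread/block configuration plus shared-memory tiling.
// All sizes and names here are assumptions chosen for illustration.
#include <cuda_runtime.h>
#include <cstdio>

#define TILE 16  // assumed tile width; block shape is TILE x TILE (256 threads)

// Square matrix multiply C = A * B using shared-memory tiles.
// Each block cooperatively loads one TILE x TILE tile of A and B into shared
// memory, so each global element is read n/TILE times instead of n times.
__global__ void matMulShared(const float* A, const float* B, float* C, int n)
{
    __shared__ float tileA[TILE][TILE];
    __shared__ float tileB[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < n / TILE; ++t) {
        // Load the current tiles from global to shared memory.
        tileA[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        tileB[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();

        // Partial dot product read entirely from fast on-chip shared memory.
        for (int k = 0; k < TILE; ++k)
            acc += tileA[threadIdx.y][k] * tileB[k][threadIdx.x];
        __syncthreads();
    }
    C[row * n + col] = acc;
}

int main()
{
    const int n = 1024;  // assumed matrix size, a multiple of TILE
    size_t bytes = n * n * sizeof(float);
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    // Thread/block configuration: 16x16 threads per block, enough blocks to
    // cover the whole matrix. Varying this shape is the kind of tuning
    // decision the abstract refers to.
    dim3 block(TILE, TILE);
    dim3 grid(n / TILE, n / TILE);
    matMulShared<<<grid, block>>>(A, B, C, n);
    cudaDeviceSynchronize();

    printf("C[0] = %f (expected %f)\n", C[0], 2.0f * n);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

Comparing different block shapes (for example, 8x8 versus 32x32 threads) and comparing this shared-memory kernel against a plain global-memory version is the kind of measurement the abstract describes, though the configurations actually studied are those in the paper itself.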


Keywords: GPU, Performance, Thread, Shared Memory




How to Cite
Kim, D. 2018. Experiences of the GPU Thread Configuration and Shared Memory. European Journal of Engineering and Technology Research. 3, 7 (Jul. 2018), 12–15. DOI: https://doi.org/10.24018/ejeng.2018.3.7.788.