
Seminar report on NVIDIA CUDA Technology. Submitted to the faculty of GOVIND BALLABH PANT ENGINEERING COLLEGE, PAURI-GARHWAL, in partial fulfillment of the requirements for the degree of Bachelor of Engineering in Computer Science and Engineering.

Submitted By: AKASH CHHETRI, Roll No. 5 (2008-2012 Batch), Department of Computer Science and Engineering, GOVIND BALLABH PANT ENGINEERING COLLEGE

TABLE OF CONTENTS

Introduction
Background
Supported GPUs
Version features and specifications
Advantages
Limitations
Language bindings
Current CUDA architectures
Current and future usages of CUDA
References

Introduction

CUDA, or Compute Unified Device Architecture, is a parallel computing architecture developed by Nvidia. CUDA is the computing engine in Nvidia graphics processing units (GPUs) that is accessible to software developers through variants of industry-standard programming languages. Programmers use 'C for CUDA' (C with Nvidia extensions and certain restrictions), compiled through a PathScale Open64 C compiler, to code algorithms for execution on the GPU. The CUDA architecture shares a range of computational interfaces with two competitors: the Khronos Group's OpenCL and Microsoft's DirectCompute. Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, Haskell, MATLAB, and IDL, and native support exists in Mathematica. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs. Using CUDA, the latest Nvidia GPUs become accessible for computation like CPUs. Unlike CPUs, however, GPUs have a parallel throughput architecture that emphasizes executing many concurrent threads slowly, rather than executing a single thread very quickly. This approach of solving general-purpose problems on GPUs is known as GPGPU. In the computer game industry, in addition to graphics rendering, GPUs are used in game physics calculations (physical effects such as debris, smoke, fire, and fluids); examples include PhysX and Bullet. CUDA has also been used to accelerate non-graphical applications in computational biology, cryptography and other fields by an order of magnitude or more. An example of this is the BOINC distributed computing client. CUDA provides both a low-level API and a higher-level API. The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. Mac OS X support was later added in version 2.0, which supersedes the beta released on February 14, 2008.
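To make the 'C for CUDA' model concrete, the following is a minimal sketch (not taken from this report; vecAdd and all parameter names are illustrative) of a kernel in which each GPU thread handles one array element:

    // Minimal 'C for CUDA' kernel: each thread adds one pair of elements.
    // __global__ marks a function that runs on the GPU but is launched from
    // CPU code; vecAdd and all names here are illustrative.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)                                      // guard the final partial block
            c[i] = a[i] + b[i];
    }

Each of the many concurrent threads computes its own index from the built-in blockIdx, blockDim and threadIdx variables; this one-thread-per-element style is exactly the throughput-oriented execution model described above.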

CUDA works with all Nvidia GPUs from the G8x series onwards, including GeForce, Quadro and the Tesla line. CUDA is compatible with most standard operating systems. Nvidia states that programs developed for the G8x series will also work without modification on all future Nvidia video cards, due to binary compatibility.

Example of CUDA processing flow:
1. Copy data from main memory to GPU memory.
2. The CPU instructs the GPU to start processing.
3. The GPU executes the work in parallel in each core.
4. Copy the result from GPU memory back to main memory.

The CUDA software stack consists of: a hardware driver; an application programming interface (API) and its runtime; and two higher-level mathematical libraries of common usage, CUFFT and CUBLAS, both described in separate documents. The hardware has been designed to support lightweight driver and runtime layers, resulting in high performance. A host-side sketch of the four-step flow follows.
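As a hedged illustration of the four steps above (a sketch that reuses the illustrative vecAdd kernel from the earlier example; error checking omitted), the host side uses the standard CUDA runtime calls cudaMalloc, cudaMemcpy and cudaFree:

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Kernel from the earlier sketch (illustrative).
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        float *d_a, *d_b, *d_c;
        cudaMalloc((void **)&d_a, bytes);
        cudaMalloc((void **)&d_b, bytes);
        cudaMalloc((void **)&d_c, bytes);

        // Step 1: copy data from main memory to GPU memory.
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        // Steps 2-3: the CPU instructs the GPU, which executes in parallel in each core.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

        // Step 4: copy the result from GPU memory back to main memory.
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", h_c[0]);  // expect 3.0
        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }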

A thread has access to the device's DRAM and on-chip memory through a set of memory spaces of various scopes.
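For illustration, the following sketch (not from the report; all names are hypothetical, and it assumes 256-thread blocks) shows how those memory spaces are selected in C for CUDA with declaration qualifiers:

    // Hypothetical sketch of the CUDA memory spaces and their qualifiers.
    __constant__ float coeff[256];   // constant memory: read-only in kernels, cached
    __device__   float table[256];   // global memory: device DRAM

    __global__ void scaleRow(float *out)  // 'out' also points into global memory
    {
        __shared__ float tile[256];  // on-chip shared memory, one copy per thread block
        int i = threadIdx.x;         // scalar automatics live in registers
        tile[i] = table[i] * coeff[i];
        __syncthreads();             // make the shared-memory writes visible block-wide
        out[i] = tile[i];
    }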

CUDA provides general DRAM memory addressing, giving more programming flexibility: both scatter and gather memory operations. From a programming perspective, this translates into the ability to read and write data at any location in DRAM, just like on a CPU. A minimal sketch of both operations follows.
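In the sketch below (the kernel names and the idx index array are illustrative assumptions), gather reads from and scatter writes to arbitrary DRAM locations:

    // Gather: each thread reads from an arbitrary DRAM location.
    __global__ void gather(const float *src, const int *idx, float *dst, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) dst[i] = src[idx[i]];
    }

    // Scatter: each thread writes to an arbitrary DRAM location.
    __global__ void scatter(const float *src, const int *idx, float *dst, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) dst[idx[i]] = src[i];
    }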

Background

The GPU, as a specialized processor, addresses the demands of real-time, high-resolution 3D graphics and other compute-intensive tasks. As of 2011, GPUs have evolved into highly parallel multi-core systems allowing very efficient manipulation of large blocks of data. This design is more effective than general-purpose CPUs for algorithms where large blocks of data are processed in parallel, such as:

push-relabel maximum flow algorithm
fast sort algorithms of large lists
two-dimensional fast wavelet transform

For instance, the parallel nature of molecular dynamics simulations is suitable for CUDA implementation.

Supported GPUs

Compute capability table (version of CUDA supported) by GPU and card; this information is also available directly from Nvidia.

Compute capability 1.0
GPUs: G80
Cards: GeForce 8800 GTX/Ultra, Tesla C/D/S870, Quadro FX 4/5600, FX 360M

Compute capability 1.1
GPUs: G86, G84, G98, G96, G96b, G94, G94b, G92, G92b
Cards: GeForce 8400GS/GT, 8600GT/GTS, 8800GT/GTS, 9400GT, 9600GT, 9600 GSO, 9800GT, 9800GTX/GX2, GTS 250, GT 120/30/40, FX 4/570, 3/580, 17/18/3700, 4700x2, 1xxM, 32/370M, 3/5/770M, 16/17/27/28/36/37/3800M, NVS 420/50

Compute capability 1.2
GPUs: GT218, GT216, GT215
Cards: GeForce 210, GT 220/40, FX 380 LP, 1800M, 370/380M, NVS 2/3100M

Compute capability 1.3
GPUs: GT200, GT200b
Cards: GeForce GTX 260, GTX 275, GTX 280, GTX 285, GTX 295, Tesla C/M1060, S1070, Quadro CX, FX 3/4/5800

Compute capability 2.0
GPUs: GF100, GF110
Cards: GeForce (GF100) GTX 465, GTX 470, GTX 480, Tesla C2050, C2070, S/M2050/70, Quadro Plex 7000, GeForce (GF110) GTX 570, GTX 580, GTX 590

Compute capability 2.1
GPUs: GF104, GF114, GF116, GF108, GF106
Cards: GeForce GT 420, GT 430, GT 440, GTS 450, GTX 460, GTX 550 Ti, GTX 560, GTX 560 Ti, 500M, Quadro 600, 2000, 4000, 5000, 6000

A table of devices officially supporting CUDA (note that many applications require at least 256 MB of dedicated VRAM, and some recommend at least 96 CUDA cores).

Nvidia GeForce:
GeForce GTX 590, GTX 580, GTX 570, GTX 560 Ti, GTX 560, GTX 550 Ti, GTX 480, GTX 470, GTX 465, GTX 460, GTX 460 SE, GTS 450, GT 440, GT 430, GT 420, GTX 295, GTX 285, GTX 280, GTX 275, GTX 260, GTS 250, GTS 240, GT 240, GT 220, 210/G210, GT 140, 9800 GX2, 9800 GTX+, 9800 GTX, 9800 GT, 9600 GSO, 9600 GT, 9500 GT, 9400 GT, 9400 mGPU, 9300 mGPU, 9100 mGPU, 8800 Ultra, 8800 GTX, 8800 GTS, 8800 GT, 8800 GS, 8600 GTS, 8600 GT, 8600 mGT, 8500 GT, 8400 GS, 8300 mGPU, 8200 mGPU, 8100 mGPU

Nvidia GeForce Mobile:
GeForce GTX 580M, GTX 570M, GTX 560M, GT 555M, GT 550M, GT 540M, GT 525M, GT 520M, GTX 480M, GTX 470M, GTX 460M, GT 445M, GT 435M, GT 425M, GT 420M, GT 415M, GTX 285M, GTX 280M, GTX 260M, GTS 360M, GTS 350M, GTS 260M, GTS 250M, GT 335M, GT 330M, GT 325M, GT 320M, 310M, GT 240M, GT 230M, GT 220M, G210M, GTS 160M, GTS 150M, GT 130M, GT 120M, G110M, G105M, G103M, G102M, G100, 9800M GTX, 9800M GTS, 9800M GT, 9800M GS, 9700M GTS, 9700M GT, 9650M GT, 9650M GS, 9600M GT, 9600M GS, 9500M GS, 9500M G, 9400M G, 9300M GS, 9300M G, 9200M GS, 9100M G, 8800M GTX, 8800M GTS, 8700M GT, 8600M GT, 8600M GS, 8400M GT, 8400M GS, 8400M G

Nvidia Quadro:
Quadro 6000, 5000, 4000, 2000, 600, FX 5800, FX 5600, FX 4800, FX 4700 X2, FX 4600, FX 3800, FX 3700, FX 1800, FX 1700, FX 580, FX 570, FX 380, FX 370, NVS 450, NVS 420, NVS 295, NVS 290, Plex 1000 Model IV, Plex 1000 Model S4

Nvidia Quadro Mobile:
Quadro 5010M, 5000M, 4000M, 3000M, 2000M, 1000M, FX 3800M, FX 3700M, FX 3600M, FX 2800M, FX 2700M, FX 1800M, FX 1700M, FX 1600M, FX 880M, FX 770M, FX 570M, FX 380M, FX 370M, FX 360M, NVS 320M, NVS 160M, NVS 150M, NVS 140M, NVS 135M, NVS 130M

Nvidia Tesla:
Tesla C2050/2070, M2050/M2070, S2050, S1070, M1060, C1060, C870, D870, S870

Version features and specifications

Feature support (unlisted features are supported for all compute capabilities):

Supported from compute capability 1.1 (not on 1.0):
Integer atomic functions operating on 32-bit words in global memory
atomicExch() operating on 32-bit floating-point values in global memory

Supported from compute capability 1.2:
Integer atomic functions operating on 32-bit words in shared memory
atomicExch() operating on 32-bit floating-point values in shared memory
Integer atomic functions operating on 64-bit words in global memory
Warp vote functions

Supported from compute capability 1.3:
Double-precision floating-point operations

Supported only on compute capability 2.x:
Atomic functions operating on 64-bit integer values in shared memory
Floating-point atomic addition operating on 32-bit words in global and shared memory
__ballot()
__threadfence_system()
__syncthreads_count(), __syncthreads_and(), __syncthreads_or()
Surface functions
3D grid of thread blocks
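As a hedged example of one of these features (a sketch with illustrative names, requiring compute capability 1.1 or higher for integer atomics in global memory), a histogram kernel can use atomicAdd so that concurrent updates to the same bin do not race:

    // Histogram sketch using an integer atomic function in global memory.
    // 'hist' and its parameters are illustrative, not from the report.
    __global__ void hist(const unsigned char *data, int n, unsigned int *bins)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            atomicAdd(&bins[data[i]], 1u);  // hardware serializes per-bin updates
    }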

Technical specifications by compute capability (values given per version where they differ; a single value applies to all versions):

Maximum dimensionality of grid of thread blocks: 2 (1.x); 3 (2.x)
Maximum x-, y-, or z-dimension of a grid of thread blocks: 65535
Maximum dimensionality of thread block: 3
Maximum x- or y-dimension of a block: 512 (1.x); 1024 (2.x)
Maximum z-dimension of a block: 64
Maximum number of threads per block: 512 (1.x); 1024 (2.x)
Warp size: 32
Maximum number of resident blocks per multiprocessor: 8
Maximum number of resident warps per multiprocessor: 24 (1.0-1.1); 32 (1.2-1.3); 48 (2.x)
Maximum number of resident threads per multiprocessor: 768 (1.0-1.1); 1024 (1.2-1.3); 1536 (2.x)
Number of 32-bit registers per multiprocessor: 8 K (1.0-1.1); 16 K (1.2-1.3); 32 K (2.x)
Maximum amount of shared memory per multiprocessor: 16 KB (1.x); 48 KB (2.x)
Number of shared memory banks: 16 (1.x); 32 (2.x)
Amount of local memory per thread: 16 KB (1.x); 512 KB (2.x)
Constant memory size: 64 KB
Cache working set per multiprocessor for constant memory: 8 KB
Cache working set per multiprocessor for texture memory: device dependent, between 6 KB and 8 KB
Maximum width for a 1D texture reference bound to a CUDA array: 8192 (1.x); 32768 (2.x)
Maximum width for a 1D texture reference bound to linear memory: 2^27
Maximum width and number of layers for a 1D layered texture reference: 8192 x 512 (1.x); 16384 x 2048 (2.x)
Maximum width and height for a 2D texture reference bound to linear memory or a CUDA array: 65536 x 32768 (1.x); 65536 x 65535 (2.x)
Maximum width, height, and number of layers for a 2D layered texture reference: 8192 x 8192 x 512 (1.x); 16384 x 16384 x 2048 (2.x)
Maximum width, height and depth for a 3D texture reference bound to linear memory or a CUDA array: 2048 x 2048 x 2048
Maximum number of textures that can be bound to a kernel: 128
Maximum width for a 1D surface reference bound to a CUDA array: not supported (1.x); 8192 (2.x)
Maximum width and height for a 2D surface reference bound to a CUDA array: not supported (1.x); 8192 x 8192 (2.x)
Maximum number of surfaces that can be bound to a kernel: not supported (1.x); 8 (2.x)
Maximum number of instructions per kernel: 2 million
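As a small illustration of how these limits constrain a launch configuration (a sketch with hypothetical names, not from the report), a 16x16 block keeps 256 threads per block, within the 512-thread limit of compute capability 1.x, while each grid dimension stays within 65535:

    // Hypothetical 2D launch configuration within the documented limits.
    __global__ void doubleImage(float *img, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            img[y * width + x] *= 2.0f;
    }

    void launch(float *d_img, int width, int height)
    {
        dim3 block(16, 16);  // 256 threads per block (max 512 on 1.x, 1024 on 2.x)
        dim3 grid((width  + block.x - 1) / block.x,   // each dimension <= 65535
                  (height + block.y - 1) / block.y);
        doubleImage<<<grid, block>>>(d_img, width, height);
    }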

Architecture specifications by compute capability (values given as 1.0-1.3 / 2.0 / 2.1):

Number of cores for integer and floating-point arithmetic operations: 8[15] / 32 / 48
Number of special function units for single-precision floating-point transcendental functions: 2 / 4 / 8
Number of texture filtering units for every texture address unit or render output unit (ROP): ? / 4 / 8
Number of warp schedulers: 1 / 2 / 2
Number of instructions issued at once by a scheduler: 1 / 1 / 2[16]

Advantages

CUDA has several advantages over traditional general-purpose computation on GPUs (GPGPU) using graphics APIs:

Scattered reads: code can read from arbitrary addresses in memory.
Shared memory: CUDA exposes a fast shared memory region (up to 48 KB per multiprocessor) that can be shared among threads. This can be used as a user-managed cache, enabling higher bandwidth than is possible using texture lookups (see the sketch after this list).[12]
Faster downloads and readbacks to and from the GPU.
Full support for integer and bitwise operations, including integer texture lookups.
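As a hedged sketch of the user-managed-cache idea (illustrative names; it assumes 256-thread blocks, a power of two), a per-block reduction reads each input element from DRAM once and then works entirely in shared memory:

    // Per-block sum reduction using shared memory as a user-managed cache.
    __global__ void blockReduce(const float *in, float *blockSums, int n)
    {
        __shared__ float cache[256];          // fast on-chip storage
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + tid;

        cache[tid] = (i < n) ? in[i] : 0.0f;  // one DRAM read per thread
        __syncthreads();

        // Tree reduction entirely in shared memory.
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride)
                cache[tid] += cache[tid + stride];
            __syncthreads();
        }
        if (tid == 0)
            blockSums[blockIdx.x] = cache[0]; // one DRAM write per block
    }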

Limitations

Texture rendering is not supported (CUDA 3.2 and later address this by introducing "surface writes" to CUDA arrays, the underlying opaque data structure).
Copying between host and device memory may incur a performance hit due to system bus bandwidth and latency; this can be partly alleviated with asynchronous memory transfers, handled by the GPU's DMA engine (see the sketch after this list).
Threads should run in groups of at least 32 for best performance, with the total number of threads numbering in the thousands. Branches in the program code do not impact performance significantly, provided that each group of 32 threads takes the same execution path; the SIMD execution model becomes a significant limitation for any inherently divergent task (e.g. traversing a space-partitioning data structure during ray tracing).
Unlike OpenCL, CUDA-enabled GPUs are only available from Nvidia.[13]
Valid C/C++ may sometimes be flagged and prevented from compiling due to optimization techniques the compiler is required to employ to use limited resources.
CUDA (with compute capability 1.x) uses a recursion-free, function-pointer-free subset of the C language, plus some simple extensions. However, a single process must run spread across multiple disjoint memory spaces, unlike other C language runtime environments.
CUDA (with compute capability 2.x) allows a subset of C++ class functionality; for example, member functions may not be virtual (this restriction will be removed in some future release). [See CUDA C Programming Guide 3.1, Appendix D.6]
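A minimal sketch of such an asynchronous transfer (illustrative names; it assumes the vecAdd kernel from the earlier sketches, device buffers already filled, and h_out allocated as page-locked memory with cudaMallocHost, which is required for truly asynchronous copies):

    #include <cuda_runtime.h>

    __global__ void vecAdd(const float *a, const float *b, float *c, int n);  // defined earlier

    void asyncAdd(const float *d_a, const float *d_b, float *d_c,
                  float *h_out, size_t bytes, int n)
    {
        cudaStream_t stream;
        cudaStreamCreate(&stream);

        int threads = 256;
        int blocks = (n + threads - 1) / threads;

        // Work queued on one stream runs in order on the GPU but
        // asynchronously with respect to the CPU.
        vecAdd<<<blocks, threads, 0, stream>>>(d_a, d_b, d_c, n);
        cudaMemcpyAsync(h_out, d_c, bytes, cudaMemcpyDeviceToHost, stream);

        cudaStreamSynchronize(stream);  // CPU blocks here until the queue drains
        cudaStreamDestroy(stream);
    }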

Language bindings

Fortran - FORTRAN CUDA, PGI CUDA Fortran Compiler
Lua - KappaCUDA
IDL - GPULib
Mathematica - CUDALink
MATLAB - Parallel Computing Toolbox, Distributed Computing Server,[18] and 3rd-party packages like Jacket
.NET - CUDA.NET, CUDAfy.NET (.NET kernel and host code, CURAND, CUBLAS, CUFFT)
Perl - KappaCUDA, CUDA::Minimal
Python - PyCUDA, KappaCUDA
Ruby - KappaCUDA
Java - jCUDA, JCuda, JCublas, JCufft
Haskell - Data.Array.Accelerate

Current CUDA architectures

The current-generation CUDA architecture (codename "Fermi"), standard on Nvidia's GeForce 400 Series (GF100) GPUs released 2010-03-27,[19] is designed from the ground up to natively support more programming languages, such as C++. It has eight times the peak double-precision floating-point performance of Nvidia's previous-generation Tesla GPUs. It also introduced several new features,[20] including:

up to 1024 CUDA cores and 3.0 billion transistors on the GTX 590
Nvidia Parallel DataCache technology
Nvidia GigaThread engine
ECC memory support
Native support for Visual Studio

Current and future usages of CUDA architecture

Accelerated rendering of 3D graphics
Accelerated interconversion of video file formats
Accelerated encryption, decryption and compression
Distributed calculations, such as predicting the native conformation of proteins
Medical analysis simulations, for example virtual reality based on CT and MRI scan images
Physical simulations, in particular in fluid dynamics
Real-time cloth simulation (OptiTex.com)
The Search for Extra-Terrestrial Intelligence (SETI@home) program[21][22]

References

1. http://en.wikipedia.org/wiki/CUDA
2. NVIDIA CUDA Programming Guide, Version 1.0.
3. Shane McGlaun, "NVIDIA Clears Water Muddied by Larrabee" (blog), DailyTech, August 5, 2008.
4. "First OpenCL demo on a GPU", YouTube.
5. "DirectCompute Ocean Demo Running on NVIDIA CUDA-enabled GPU", YouTube.
6. Giorgos Vasiliadis, Spiros Antonatos, Michalis Polychronakis, Evangelos P. Markatos and Sotiris Ioannidis (September 2008, Boston, MA, USA). "Gnort: High Performance Network Intrusion Detection Using Graphics Processors" (PDF). Proceedings of the 11th International Symposium on Recent Advances in Intrusion Detection (RAID).
7. Schatz, M.C., Trapnell, C., Delcher, A.L., Varshney, A. (2007). "High-throughput sequence alignment using Graphics Processing Units". BMC Bioinformatics 8:474. doi:10.1186/1471-2105-8-474. PMC 2222658. PMID 18070356.
8. Manavski, Svetlin A.; Giorgio Valle (2008). "CUDA compatible GPU cards as efficient hardware accelerators for Smith-Waterman sequence alignment". BMC Bioinformatics 9(Suppl 2):S10. doi:10.1186/1471-2105-9-S2-S10. PMC 2323659. PMID 18387198.
9. Pyrit, Google Code. http://code.google.com/p/pyrit/
10. "Use your NVIDIA GPU for scientific computing", BOINC official site (December 18, 2008).