In computing, CUDA (Compute Unified Device Architecture) is a proprietary parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for accelerated general-purpose processing, an approach called general-purpose computing on GPUs. CUDA was created by Nvidia in 2006. When it was first introduced, the name was an acronym for Compute Unified Device Architecture, but Nvidia later dropped the common use of the acronym and now rarely expands it.
CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements for the execution of compute kernels. In addition to drivers and runtime kernels, the CUDA platform includes compilers, libraries and developer tools to help programmers accelerate their applications.
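A minimal sketch of what a compute kernel looks like in CUDA C++, using the widely documented vector-addition example: the `__global__` function runs on the GPU, one thread per array element, and the host launches it with CUDA's `<<<blocks, threads>>>` syntax. The function and variable names here are illustrative, not from the source; the program assumes an Nvidia GPU and compilation with `nvcc`.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Compute kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 256;
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = i; hb[i] = 2.0f * i; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dc, n * sizeof(float));
    cudaMemcpy(da, ha, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one block of 256 threads, one thread per element.
    vecAdd<<<1, 256>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("hc[10] = %g\n", hc[10]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

The explicit `cudaMalloc`/`cudaMemcpy` calls reflect the separation between host (CPU) and device (GPU) memory that the CUDA software layer exposes; the compiler, runtime, and these APIs are part of the platform components described above.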
CUDA is designed to work with programming languages such as C, C++, Fortran, Python and Julia. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like Direct3D and OpenGL, which require advanced skills in graphics programming. CUDA-powered GPUs also support programming frameworks such as OpenMP, OpenACC and OpenCL.