Welcome to the CUME wiki!
What is CUME?
CUME stands for CUda Made Easy and aims to simplify the writing of CUDA
C/C++ code.
What do you mean by simplify?
The major drawbacks of the CUDA API are that:
- you must check the return value of every call to a CUDA function
in order to determine whether an error occurred before, during or after the
call to the kernel
- there is no automatic way to get the "global thread index" (gtid) of each
thread: the formula for the gtid depends on the grid and block configuration
- allocating, freeing and copying arrays is a boring and unproductive task
- the distinction between host and device memory is troublesome when you have
to call cudaMemcpy(adr1, adr2, size, cudaMemcpyDeviceToHost or
cudaMemcpyHostToDevice)
- some configuration parameters are hardware dependent: for example, when
defining the block of a kernel call you must not exceed the maximum number of
threads allowed by the device or you will get an error. It is therefore
necessary to check that the size of the block respects the constraints of the
hardware
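To see these drawbacks concretely, here is what a minimal program looks like with the plain CUDA API, once every call is checked by hand (names like CHECK and add_one are ours, invented for the example):

```cuda
#include <cstdio>
#include <cstdlib>

// Check the return value of every CUDA API call by hand (drawback 1).
#define CHECK(call) do {                                          \
    cudaError_t err = (call);                                     \
    if (err != cudaSuccess) {                                     \
        fprintf(stderr, "CUDA error %s at %s:%d\n",               \
                cudaGetErrorString(err), __FILE__, __LINE__);     \
        exit(EXIT_FAILURE);                                       \
    }                                                             \
} while (0)

__global__ void add_one(int *v, int n) {
    // gtid formula for a 1D grid of 1D blocks (drawback 2); a 2D
    // grid or 2D blocks would require a different expression
    int gtid = blockIdx.x * blockDim.x + threadIdx.x;
    if (gtid < n) v[gtid] += 1;
}

int main() {
    const int n = 256;
    int host[n] = {0};
    int *dev;
    // Allocation and copies must name the direction explicitly
    // (drawbacks 3 and 4).
    CHECK(cudaMalloc(&dev, n * sizeof(int)));
    CHECK(cudaMemcpy(dev, host, n * sizeof(int), cudaMemcpyHostToDevice));
    // The block size of 128 must not exceed the device limit (drawback 5).
    add_one<<<(n + 127) / 128, 128>>>(dev, n);
    CHECK(cudaGetLastError());
    CHECK(cudaMemcpy(host, dev, n * sizeof(int), cudaMemcpyDeviceToHost));
    CHECK(cudaFree(dev));
    return 0;
}
```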
To address all these drawbacks we propose:
- a cume_check macro instruction that checks every return value of the
CUDA API, along with other macro instructions like cume_new_var,
cume_new_array and cume_free that simplify calls
to the CUDA API (see file cume_base.h)
- a Kernel class that helps set up or determine the size of the
grid and block parameters. This class can also be used with the macro
instruction kernel_call (see file cume_kernel.h) in
order to automatically get the right formula for the global thread
index. Another macro instruction, cume_kernel_no_resource, does not automatically determine the gtid.
- Array and Matrix classes that handle 1D and 2D array data in host and device memory (see the files cume_array.h and cume_matrix.h)
- cume_push and cume_pull methods: push transfers data from host to device
memory and pull transfers data from device to host. The methods push and pull are also members of the Array class
- a Devices class to manage the GPUs
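With these facilities, the same kind of program becomes much shorter. The following sketch is only indicative: the class and method names (Array, push, pull) come from this page, but the exact signatures, namespace and umbrella header are assumptions; see cume_array.h and cume_kernel.h for the real interface.

```cuda
#include "cume_array.h"   // Array class (assumed header layout)

// Hypothetical sketch only: signatures below are guesses, not the
// documented CUME API.
int main() {
    Array<int> a(256);    // assumed: allocates host and device storage,
                          // with every CUDA call checked internally
    // ... fill a on the host ...
    a.push();             // transfer host -> device (as described above)
    // kernel_call(...) would choose grid/block sizes within the
    // hardware limits and provide the gtid formula to the kernel
    a.pull();             // transfer device -> host
    return 0;             // destructor frees host and device memory
}
```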
Download
To download the latest trunk, use:
svn checkout svn://svn.code.sf.net/p/cume/svn/trunk cume
Comparison CUDA / CUME
See this page.