From: Pekka J. (T. <pek...@tu...> - 2023-01-17 17:00:55
Hi Noah,

I'm not aware of anyone implementing this on pocl-cuda. It might not be a lot of work to do so. Here's the cl_mem's internal pointer array:

https://github.com/pocl/pocl/blob/master/lib/CL/pocl_cl.h#L1409

A clean implementation would be to define an OpenCL extension which allows one to query the device pointer. Strictly speaking, the device pointer might or might not be there yet, since lazy/just-in-time buffer allocation is allowed in OpenCL, but it should work for pocl-cuda, I think.

BR, Pekka

________________________________________
From: Noah Reddell <noa...@gm...>
Sent: 11 January 2023 23:58
To: Portable Computing Language development discussion
Subject: [pocl-devel] device pointer for NVIDIA + MPI systems

Hi All,

I'm trying to find a working method, or a short-term hack of one, to adapt my OpenCL application to run well on contemporary HPC systems with NVIDIA GPUs (e.g. Perlmutter at NERSC). For this I need to use GPU-side device pointers in the Message Passing Interface (MPI) arguments for communication between nodes on the supercomputer.

As far as I can tell, OpenCL provides no mechanism to obtain the device-side memory pointer for either a cl_mem buffer object or an SVM allocation. I would need this pointer to use with MPI calls. In CUDA-based prototype codes, this type of MPI use to/from GPU memory is straightforward because cudaMalloc() returns a device pointer that is accepted in MPI_xyz() calls.

Has anyone tried something like this with POCL using the CUDA backend? Or is there familiarity with the implementation that would suggest how to get at this device pointer from a cl_mem object?

Cheers,
-Noah
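
For reference, the CUDA-side pattern Noah describes is a minimal sketch like the one below: it assumes an MPI build with CUDA awareness (as provided on systems such as Perlmutter), so the pointer returned by cudaMalloc() can be handed directly to MPI_Send/MPI_Recv without an explicit host staging copy.

/* Minimal sketch: GPU-to-GPU exchange with a CUDA-aware MPI build.
 * The cudaMalloc() pointer is passed straight to the MPI calls. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int n = 1 << 20;
  double *d_buf;                          /* device pointer */
  cudaMalloc((void **)&d_buf, n * sizeof(double));

  /* Exchange directly between the GPU buffers of ranks 0 and 1. */
  if (rank == 0)
    MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
  else if (rank == 1)
    MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

  cudaFree(d_buf);
  MPI_Finalize();
  return 0;
}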
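
As a rough illustration of the extension Pekka suggests, a host-side query could look like the sketch below. CL_MEM_DEVICE_PTR_POCL and its token value are invented here purely for illustration; no such query exists in stock OpenCL or in pocl as discussed in this thread, and the returned pointer may legitimately be NULL if the buffer has not yet been allocated on the device.

/* Hypothetical sketch of the proposed extension: query the CUDA device
 * pointer backing a cl_mem.  CL_MEM_DEVICE_PTR_POCL is an assumed,
 * not-yet-existing query token; the idea is that pocl-cuda would return
 * the entry of the cl_mem's internal device_ptrs[] array for the device. */
#include <CL/cl.h>
#include <stdio.h>

#define CL_MEM_DEVICE_PTR_POCL 0x4500  /* placeholder value, illustration only */

void *get_device_ptr(cl_mem buf)
{
  void *dev_ptr = NULL;
  cl_int err = clGetMemObjectInfo(buf, CL_MEM_DEVICE_PTR_POCL,
                                  sizeof(dev_ptr), &dev_ptr, NULL);
  if (err != CL_SUCCESS)
    fprintf(stderr, "device pointer query not supported (err %d)\n", err);
  return dev_ptr;  /* may be NULL under lazy/just-in-time allocation */
}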