Hey everybody!
I have come across a paper about sparse matrix-matrix multiplication on GPU released this year:
I have only read the introduction so far, but it seems that the paper presents 4 different algorithms, or actually 2 if you ignore the CPU parts:
- Full GPU
- An efficient Sparse*Dense matrix multiplication on GPU (see the sketch after this list for what such a kernel can look like).
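
Not from the paper itself, just a minimal sketch of what a naive CSR-based Sparse*Dense (SpMM) kernel looks like in CUDA, for anyone who wants a concrete picture; the kernel name, data layout, and thread mapping are my own assumptions, not the paper's:

```cuda
// Naive Sparse*Dense sketch (assumed layout, not the paper's algorithm):
// A is m x k in CSR (rowPtr, colIdx, vals), B is a dense k x n row-major
// matrix, C is the dense m x n row-major result. One thread computes one
// element C[row][col] by walking the nonzeros of A's row.
__global__ void spmm_csr_naive(const int *rowPtr, const int *colIdx,
                               const float *vals, const float *B,
                               float *C, int m, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < m && col < n) {
        float acc = 0.0f;
        // Iterate over the nonzeros of row `row` of A.
        for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k)
            acc += vals[k] * B[colIdx[k] * n + col];
        C[row * n + col] = acc;
    }
}
```

This is just the data flow; efficient implementations (e.g. in libraries like cuSPARSE) tile B through shared memory and reorganize the work for coalesced memory access, which is presumably where the paper's contribution lies.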
Just wanted to share, in case anyone is interested.