From: Dmitriy L. <dl...@gm...> - 2016-08-17 19:53:14
PS: also, since it is being built as part of our build, we also get control over supported HW options and platforms. I.e., we may generate not only per-platform support, but also further specialize for things like an AVX2-optimized OpenMP version (which anecdotally runs about 2x faster for me on moderate matrix sizes, meaning for bigger sizes the gain is much more significant).

On Wed, Aug 17, 2016 at 12:44 PM, Dmitriy Lyubimov <dl...@gm...> wrote:
>
> On Wed, Aug 17, 2016 at 2:50 AM, Karl Rupp <ru...@iu...> wrote:
>
>> Hi Andy and Dmitriy,
>>
>> We could (and probably should?) add such a convenience header file at the
>> expense of increased compilation times (and reduced encapsulation of
>> source code against compiler issues).
>
> +1 on single header! :)
>
>> Ultimately, this all boils down to fighting limitations of the current
>> header-only source code distribution model.
>
> FWIW, if our opinion matters: header-only is actually one of the things we
> like very much. It means we don't have to redistribute any executables;
> everything is already included in our jars, and everything that we use and
> need (and only that) is generated for us by javacpp. This is one of the
> most valuable features of ViennaCL, in my opinion. It is very hard to get
> customers to install yet another libX.so on their clusters.
>
> Header-only, template-based code gives us:
>
> (1) we include everything we need in the jar (no extra infrastructure
> requirement)
> (2) we include only what we actually support/use (lightweight, slim
> application-size requirement)
>
> These are very valuable for Flink/Spark-type applications, which is what
> we are.
>
> I know that you have plans to generate an .so lib with an apparently
> non-object API, but for Apache Mahout the OAA API with the header-only
> requirement is super optimal.
(At least I have a high hope you won't _force_ us to redistribute .so(s) in
future releases :) )

-Dmitriy

>> Best regards,
>> Karli

>>> On Tue, Aug 16, 2016 at 11:16 AM, Dmitriy Lyubimov <dl...@gm...> wrote:
>>>
>>> Karl,
>>>
>>> I can independently confirm the problem with the prod_impl instantiation
>>> over an expression of compressed_matrix times matrix_base into a matrix
>>> type.
>>>
>>> I understand there are test examples, but something goes wrong with the
>>> straightforward code.
>>>
>>> We are compiling for OpenCL and OpenMP at the same time.
>>>
>>> On Mon, Aug 8, 2016 at 11:03 AM, Andrew Palumbo <ap...@ou...> wrote:
>>>
>>> Hi Karli,
>>>
>>> I've mocked up in C++ the method that I'm trying to use from Java. Aside
>>> from adding some values, it looks very similar to the code that you have
>>> below.
>>>
>>> I'm getting the same compiler error that I was getting through
>>> javacpp/JNI:
>>>
>>> sparseDenseMmul.cpp:85:103: required from here
>>> /usr/include/viennacl/matrix.hpp:2247:36: error: no matching function
>>> for call to
>>> ‘prod_impl(const viennacl::compressed_matrix<double>&,
>>>            const viennacl::matrix_base<double, long unsigned int, long int>&,
>>>            viennacl::matrix_base<double, long unsigned int, long int>&)’
>>>     viennacl::linalg::prod_impl(proxy.lhs(), proxy.rhs(), lhs);
>>> In file included from /usr/include/viennacl/matrix.hpp:28:0,
>>>   from /usr/include/viennacl/linalg/sparse_matrix_operations.hpp:28,
>>>   from /usr/include/viennacl/compressed_matrix.hpp:31,
>>>   from sparseDenseMmul.cpp:7:
>>> /usr/include/viennacl/linalg/matrix_operations.hpp:438:10: note: candidate:
>>> template<class NumericT> void viennacl::linalg::prod_impl(const
>>> viennacl::matrix_base<T>&, const viennacl::vector_base<T>&,
>>> viennacl::vector_base<T>&)
>>>   void prod_impl(const matrix_base<NumericT> & mat,
>>>
>>> The code is below, and I've
>>> attached both the "sparseDenseMmul.cpp" file and the full compilation
>>> error output (very long, probably not useful).
>>>
>>> Thanks very much,
>>>
>>> Andy
>>>
>>> Attached as "sparseDenseMmul.cpp":
>>>
>>> #include <iostream>
>>> // not using OpenMP for this mockup
>>> // #define VIENNACL_WITH_OPENMP 1
>>> // ViennaCL includes
>>> #include "viennacl/forwards.h"
>>> #include "viennacl/compressed_matrix.hpp"
>>> #include "viennacl/linalg/prod.hpp"
>>> #include "viennacl/backend/memory.hpp"
>>> #include "viennacl/matrix.hpp"
>>> #include "viennacl/detail/matrix_def.hpp"
>>> #include "viennacl/tools/random.hpp"
>>> #include "viennacl/context.hpp"
>>> #include "viennacl/linalg/host_based/sparse_matrix_operations.hpp"
>>>
>>> // C_dense_matrix = A_compressed_matrix %*% B_dense_matrix
>>> // compile line w/o OpenMP:
>>> //   g++ sparseDenseMmul.cpp -I/usr/include/viennacl/ -o sparseDenseMmul
>>>
>>> int main()
>>> {
>>>   // Trying to recreate the javacpp wrapper functionality as closely as
>>>   // possible, so not using typedefs, unsigned ints, etc., and defining
>>>   // the templates as doubles directly.
>>>   // Creating buffers as int/double arrays and then setting pointers to
>>>   // them. (Not 100% sure that this is how javacpp passes pointers, but
>>>   // it should be close.)
>>>
>>>   // typedef double ScalarType;
>>>
>>>   // In actuality, we cast `int`s from jni/javacpp.
>>>   unsigned int m = 10;
>>>   unsigned int n = 10;
>>>   unsigned long s = 5;
>>>
>>>   unsigned int NNz_A = 12;
>>>
>>>   // Allocate buffers and set pointers (similarly to javacpp).
>>>   // Using ints (not unsigned ints) here from jni/javacpp.
>>>   int A_row_jumpers[m + 1] = {0, 0, 1, 2, 4, 5, 6, 7, 9, 11, 12};
>>>   int *A_row_ptr = A_row_jumpers;
>>>
>>>   // Using ints (not unsigned ints) here from jni/javacpp.
>>>   int A_col_idxs[NNz_A] = {4, 0, 2, 3, 2, 4, 0, 4, 3, 0, 3, 0};
>>>   int *A_col_ptr = A_col_idxs;
>>>
>>>   double A_values[NNz_A] = {0.4065367203992265, 0.04957158909682802,
>>>                             0.3708618354358446, 0.5205586068847993,
>>>                             0.6963900565931678, 0.8330915529787706,
>>>                             0.32839112750638844, 0.4265801782090245,
>>>                             0.7856168903297948, 0.14733066454561583,
>>>                             0.9501663495824946, 0.9710498974366047};
>>>   double *A_values_ptr = A_values;
>>>
>>>   // Using double values in Mahout, setting the template directly for
>>>   // our compressed_matrix, A.
>>>   viennacl::compressed_matrix<double> A_compressed_matrix(m, s);
>>>
>>>   // Set the pointers for A.
>>>   A_compressed_matrix.set(A_row_ptr, A_col_ptr, A_values_ptr, m, s, NNz_A);
>>>
>>>   // B is dense, so we only need s x n values.
>>>   double B_values[s * n] = {0};
>>>
>>>   // Add some random data to B:
>>>   viennacl::tools::uniform_random_numbers<double> randomNumber;
>>>   for (int i = 0; i < s * n; i++) {
>>>     B_values[i] = randomNumber();
>>>   }
>>>
>>>   double *B_values_ptr = B_values;
>>>
>>>   // For our row_major dense matrix B we can set the double values in the
>>>   // constructor; this is currently the constructor that we're using
>>>   // through scala/javacpp.
>>>   const viennacl::matrix<double, viennacl::row_major>
>>>       B_dense_matrix(B_values_ptr, viennacl::MAIN_MEMORY, s, n);
>>>
>>>   // Perform the multiplication inside of a matrix constructor.
>>>   viennacl::matrix<double>
>>>       C_dense_matrix(viennacl::linalg::prod(A_compressed_matrix,
>>>                                             B_dense_matrix));
>>>
>>>   // Print out the matrix.
>>>   std::cout << "ViennaCL: " << C_dense_matrix << std::endl;
>>>
>>>   // Just exit with success for now if there are no runtime errors.
>>>   return EXIT_SUCCESS;
>>> }
>>>
>>> ------------------------------------------------------------------------
>>> *From:* Karl Rupp <ru...@iu...>
>>> *Sent:* Sunday, August 7, 2016 2:20:26 PM
>>> *To:* Andrew Palumbo; vie...@li...
>>> *Subject:* Re: [ViennaCL-devel] compressed_matrix %*% matrix_Base
>>>
>>> Hi Andy,
>>>
>>> the relevant tests for sparse-matrix times dense-matrix products are in
>>> tests/spmdm.cpp. In particular, I recreated a test case based on your
>>> description and couldn't find any issues:
>>>
>>>   viennacl::compressed_matrix<NumericT> compressed_A;
>>>   viennacl::matrix<NumericT, FactorLayoutT> B1(std_A.size(), cols_rhs);
>>>   viennacl::matrix_base<NumericT> B1_ref(B1);
>>>   viennacl::matrix_base<NumericT>
>>>       C2(viennacl::linalg::prod(compressed_A, B1_ref));
>>>
>>> compiles cleanly. Could you please provide a code snippet demonstrating
>>> the problem you are encountering?
>>>
>>> Thanks and best regards,
>>> Karli
>>>
>>> On 08/05/2016 09:04 PM, Andrew Palumbo wrote:
>>> > Hi Karl,
>>> >
>>> > I've been trying to implement tests for:
>>> >
>>> >   matrix_base<double> C = compressed_matrix<double> A %*%
>>> >   matrix_base<double, row_major> B
>>> >
>>> > I can't find in the code or the documentation any constructor for
>>> >
>>> >   matrix_base<T>(matrix_expression<const viennacl::compressed_matrix<T>,
>>> >                  const viennacl::matrix_base<T>, viennacl::op_prod>)
>>> >
>>> > i.e. a mixed expression of compressed_matrix and matrix_base,
>>> > and I get a compilation error when I try to instantiate a:
>>> >
>>> >   matrix_base<double>(matrix_expression<const
>>> >   viennacl::compressed_matrix<double>, const
>>> >   viennacl::matrix_base<double>, viennacl::op_prod>)
>>> >
>>> > Is there a transformation that I need to do from this
>>> >
>>> >   matrix_expression<compressed_matrix<double>, matrix_base<double>, op_prod>
>>> >
>>> > to something else so that I may be able to initialize a matrix_base (or
>>> > possibly even a compressed_matrix) from it?
>>> >
>>> > The compilation error that I get is below.
>>> >
>>> > Thanks,
>>> >
>>> > Andy
>>>
>>> _______________________________________________
>>> ViennaCL-devel mailing list
>>> Vie...@li...
>>> https://lists.sourceforge.net/lists/listinfo/viennacl-devel