## shark-project-user — Mailing list for users of the Shark machine learning library

You can subscribe to this list at https://lists.sourceforge.net/lists/listinfo/shark-project-user.


Showing 355 results (page 1 of 15).
 Re: [Shark-project-user] SVD crashes with non-square matrix From: oswin krause - 2014-08-28 21:30:56 Attachments: Message as HTML ```Hi Jns, thanks again for the report. This is strange as this *should* have been tested. I will look into it. regarding the moore penrose pseudo inverse: We have it - or better: the solver of the corresponding systems of equation as one rarely needs the Moore Penrose Inverse by itself: LinAlg/solveSystem.h function: blas::generalSolveSystemInPlace(A,B); //solves the system AX=B using the Moore Penrose Inverse and stores the result in B, for XA=B use SolveXAB as the argument Best, Oswin On 28.08.2014 23:15, Jens Burger wrote: > Hi, > > I am trying to implement the Moore-Penrose pseudo-inverse using SVD. > For a start I basically copied the example code from the SVD test file > and ran it. As long as the matrix is square it works fine. > > However, as soon as I define a non-square matrix I run into problems. > - If I compile in debug mode, the program throws the following: > /terminate called after throwing an instance of 'shark::Exception'/ > / what(): size mismatch: m().size2() == e().size2()/ > > - In release mode size checks are not done and I have two different > behaviours. > - If rows > columns the program runs fine > - If rows < columns I get the following: > / *** Error in `main': free(): invalid pointer: 0x0000000000af5720 ***/ > > > Then I also have two comments on the singular value vector *w*. > - I've noticed that the last value is often larger than the second > last one. Hence it is not properly sorted in a descending order. This > is not really a problem as I can sort it myself. I am just not sure if > this was intended. > - Most literature on SVD I found treats *w* as a diagonal matrix > *S* with the same dimension as the input matrix. For my purposes I > need the matrix format and therefore returning *S* instead of *w* > would make save me a few lines of code. But this is really just a > preference, nothing serious. 
> > Please find the code I used below. > Cheers, > Jens > > const int Dim1= 5; > const int Dim2= 3; > > int main(int argc, char** argv[]) > { > shark::RealMatrix A(Dim1,Dim2); > shark::RealMatrix U(Dim1,Dim1); > shark::RealMatrix V(Dim2,Dim2); > shark::RealVector w(Dim2); > > for (size_t row= 0; row< Dim1; row++) > for (size_t col= 0; col< Dim2; col++) > A(row, col) = (double)rand()/(double)RAND_MAX; > > U.clear(); V.clear(); w.clear(); > > shark::blas::svd(A, U, V, w); > } > > > ------------------------------------------------------------------------------ > Slashdot TV. > Video for Nerds. Stuff that matters. > http://tv.slashdot.org/ > > > _______________________________________________ > Shark-project-user mailing list > Shark-project-user@... > https://lists.sourceforge.net/lists/listinfo/shark-project-user ```
 [Shark-project-user] SVD crashes with non-square matrix From: Jens Burger - 2014-08-28 21:15:20 Attachments: Message as HTML ```Hi, I am trying to implement the Moore-Penrose pseudo-inverse using SVD. For a start I basically copied the example code from the SVD test file and ran it. As long as the matrix is square it works fine. However, as soon as I define a non-square matrix I run into problems. - If I compile in debug mode, the program throws the following: *terminate called after throwing an instance of 'shark::Exception'* * what(): size mismatch: m().size2() == e().size2()* - In release mode size checks are not done and I have two different behaviours. - If rows > columns the program runs fine - If rows < columns I get the following: * *** Error in `main': free(): invalid pointer: 0x0000000000af5720 **** Then I also have two comments on the singular value vector *w*. - I've noticed that the last value is often larger than the second last one. Hence it is not properly sorted in a descending order. This is not really a problem as I can sort it myself. I am just not sure if this was intended. - Most literature on SVD I found treats *w* as a diagonal matrix *S* with the same dimension as the input matrix. For my purposes I need the matrix format and therefore returning *S* instead of *w* would make save me a few lines of code. But this is really just a preference, nothing serious. Please find the code I used below. Cheers, Jens const int Dim1 = 5;const int Dim2 = 3; int main(int argc, char** argv[]){ shark::RealMatrix A(Dim1,Dim2); shark::RealMatrix U(Dim1,Dim1); shark::RealMatrix V(Dim2,Dim2); shark::RealVector w(Dim2); for (size_t row = 0; row < Dim1; row++) for (size_t col = 0; col < Dim2; col++) A(row, col) = (double)rand()/(double)RAND_MAX; U.clear(); V.clear(); w.clear(); shark::blas::svd(A, U, V, w); } ```
 Re: [Shark-project-user] shark and boost headers From: Jens Burger - 2014-08-18 17:43:59 Attachments: Message as HTML ```Hi Oswin, if I only use the Shark include (#include ) the program compiles just fine. However, I only tried it with a small toy project, as the target project is rather large and full of boost headers already. So ideally, I would not need to change all my ublas containers and boost and Shark can be included side by side. Would that be possible? Please let me know if you need me to try anything else. Thanks for your help. Cheers, Jens 2014-08-15 15:46 GMT-07:00 oswin krause : > Hi Jens, > > could you try to omit the ublas header, i.e. #include > etc? > > The reason is that shark once was based on ublas and got more and more > independent over time (see shark/LinAlg/BLAS). I guess there might be maybe > some confusion regarding include guards there (e.g. we might still have > some #define BOOST_UBLAS_...) > > if that fixes your errors, i will take a look at the include guards :) > > Thanks for the report! > > Regards, > Oswin > > > On 15.08.2014 20:39, Jens Burger wrote: > > Hi All, > > I installed Shark3.0 and have boost 1.55.0 running. Installation worked > all fine and the test programs compiled and executed without problem. > > However, when I try to include a Shark header in an existing project > that contains some boost include files > #include , > #include > I get some compilation errors of the sort (not a complete list): > .../boost/numeric/ublas/traits.hpp:60:17: error: ‘type_deduction_detail’ > does not name a type > .../boost/numeric/ublas/traits.hpp:65:30: error: ‘base_type’ has not > been declared > .../boost/numeric/ublas/traits.hpp:72:45: error: wrong number of template > arguments (1, should be 2) > > I double checked that with boost 1.48.0 and it is the same. > > Is there anything I am not seeing here? 
> > Thanks a lot > Jens > > > ------------------------------------------------------------------------------ > > > > _______________________________________________ > Shark-project-user mailing listShark-project-user@...nethttps://lists.sourceforge.net/lists/listinfo/shark-project-user > > > ```
 Re: [Shark-project-user] shark and boost headers From: oswin krause - 2014-08-15 20:47:03 Attachments: Message as HTML ```Hi Jens, could you try to omit the ublas header, i.e. #include etc? The reason is that shark once was based on ublas and got more and more independent over time (see shark/LinAlg/BLAS). I guess there might be maybe some confusion regarding include guards there (e.g. we might still have some #define BOOST_UBLAS_...) if that fixes your errors, i will take a look at the include guards :) Thanks for the report! Regards, Oswin On 15.08.2014 20:39, Jens Burger wrote: > Hi All, > > I installed Shark3.0 and have boost 1.55.0 running. Installation > worked all fine and the test programs compiled and executed without > problem. > > However, when I try to include a Shark header in an existing project > that contains some boost include files > #include , > #include > I get some compilation errors of the sort (not a complete list): > .../boost/numeric/ublas/traits.hpp:60:17: error: > 'type_deduction_detail' does not name a type > .../boost/numeric/ublas/traits.hpp:65:30: error: 'base_type' has not > been declared > .../boost/numeric/ublas/traits.hpp:72:45: error: wrong number of > template arguments (1, should be 2) > > I double checked that with boost 1.48.0 and it is the same. > > Is there anything I am not seeing here? > > Thanks a lot > Jens > > > ------------------------------------------------------------------------------ > > > _______________________________________________ > Shark-project-user mailing list > Shark-project-user@... > https://lists.sourceforge.net/lists/listinfo/shark-project-user ```
 [Shark-project-user] shark and boost headers From: Jens Burger - 2014-08-15 18:39:40 Attachments: Message as HTML ```Hi All, I installed Shark3.0 and have boost 1.55.0 running. Installation worked all fine and the test programs compiled and executed without problem. However, when I try to include a Shark header in an existing project that contains some boost include files #include , #include I get some compilation errors of the sort (not a complete list): .../boost/numeric/ublas/traits.hpp:60:17: error: ‘type_deduction_detail’ does not name a type .../boost/numeric/ublas/traits.hpp:65:30: error: ‘base_type’ has not been declared .../boost/numeric/ublas/traits.hpp:72:45: error: wrong number of template arguments (1, should be 2) I double checked that with boost 1.48.0 and it is the same. Is there anything I am not seeing here? Thanks a lot Jens ```
 Re: [Shark-project-user] Shark-project-user Digest, Vol 30, Issue 2 From: zoot suit - 2014-06-25 06:33:00 Attachments: Message as HTML ```Thanks for the clarification, Oswin. Regards, Ed > the problem is, that there is currently no loss implemented for > Sequences. The Model exists, but it is not yet ready for training. I can > try to get that done today or tomorrow, but I have got a few things on > my table. > > Regards, > Oswin > > > > > > > >How can I train a RNNet using Shark? Is there a class derived from > > > >shark::AbstractTrainer which I should use? ```
 Re: [Shark-project-user] Problem with building examples in shark 3.0 From: Alexander Hanel - 2014-06-24 17:33:44 ```
Hi,

thank you very much, you solved my problem.

Many greetings

Alex

Sent: Monday, 23 June 2014, 10:53
From: "oswin krause" <oswin.krause@...>
To: shark-project-user@...
Subject: Re: [Shark-project-user] Problem with building examples in shark 3.0
Hi,

this is strange.

could you open shark/src/CMakeLists.txt and remove \${LIB_TYPE} in line 24, i.e. change ADD_LIBRARY( shark \${LIB_TYPE} \${SRCS} ) to ADD_LIBRARY( shark \${SRCS} ).

I am not sure why exactly CMake does not like it. It is also the first time I have seen this warning, but I can see why it happens :).

After the change above, all the other errors should vanish. I think I will remove the option on Windows.

On 22.06.2014 14:56, Alexander Hanel wrote:
Hi,
I'm trying to get shark 3.0 (revision 3272) to work, but I fail to build the examples. My system is Win7 32-bit with Visual Studio 2010; I am using CMake 3.0.0 and Boost 1.55.0 32-bit from the installer.

In detail, I follow the YouTube video on the Shark website to build Shark with VS2010. When I run CMake I have to set OPT_DYNAMIC_LIBRARY to true, because otherwise the generate step fails with the following error:

CMake Error at src/CMakeLists.txt:24 (ADD_LIBRARY):
Cannot find source file:
DYNAMIC
Tried extensions .c .C .c++ .cc .cpp .cxx .m .M .mm .h .hh .h++ .hm .hpp
.hxx .in .txx

Besides this error, I get the following warning when pressing the "configure" button in CMake:

CMake Warning (dev) at Test/CMakeLists.txt:247 (ADD_CUSTOM_COMMAND):
Policy CMP0040 is not set: The target in the TARGET signature of
add_custom_command() must exist. Run "cmake --help-policy CMP0040" for
policy details. Use the cmake_policy command to set the policy and
suppress this warning.
The target name "Data_HDF5" is unknown in this context.
This warning is for project developers. Use -Wno-dev to suppress it.

This message occurs independently of the value of OPT_DYNAMIC_LIBRARY.

If this parameter is set to true, the generate step succeeds. After this step I switch to VS2010 (in debug mode), open shark.sln, set the project "shark" as the startup project, right-click it and select "build new". This step works without errors.
When I now select quickstartTutorial as the startup project, right-click it and rebuild only this project, it fails with the following error message:

quickstartTutorial.obj : error LNK2019: Verweis auf nicht aufgelöstes externes Symbol (="unresolved external symbol") ""public: virtual void __thiscall shark::LDA::train(class
shark::LinearClassifier<class shark::blas::vector<double> > &,class shark::LabeledData<class shark::blas::vector<double>,unsigned int> const &)" (?
train@...@shark@@UAEXAAV?\$LinearClassifier@...?\$vector@...@blas@...@@@2@...?\$LabeledData@...?\$vector@...@blas@...@@I@...@@Z)" in Funktion "__catch\$_main\$0".

quickstartTutorial.obj : error LNK2019: Verweis auf nicht aufgelöstes externes Symbol ""void __cdecl shark::importCSV(class shark::LabeledData<class
shark::blas::vector<double>,unsigned int> &,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,enum
shark::LabelPosition,char,char,unsigned int)" (?importCSV@...@@YAXAAV?\$LabeledData@...?\$vector@...@blas@...@@I@...@V?\$basic_string@...?\$char_traits@...@std@@V?
\$allocator@...@2@@std@@W4LabelPosition@...@DDI@...)" in Funktion "_main".

C:\[...]\SharkTest\examples\bin\Debug\quickstartTutorial.exe : fatal error LNK1120: 2 nicht aufgelöste externe Verweise.

At this point, I have no idea how to fix this problem. Can you help me, please?

Alex

------------------------------------------------------------------------------ HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions Find What Matters Most in Your Big Data with HPCC Systems Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. Leverages Graph Analysis for Fast Processing & Easy Data Exploration http://p.sf.net/sfu/hpccsystems;

_______________________________________________ Shark-project-user mailing list Shark-project-user@... https://lists.sourceforge.net/lists/listinfo/shark-project-user;

```
 [Shark-project-user] Recent Changes in Shark From: oswin krause - 2014-06-23 08:17:22 ```Hi, i just want to notify about recent changes in the interfaces etc. 1. DataObjectiveFunction is removed and accordingly there is no setData method anymore. In short: this was considered not to be a very useful method/base class and thus got removed to make the whole thing easier to grasp. Instead the data for Error Functions etc are now set via the *first* argument in the constructor. I tried to unify this across all objective functions. The only exception right now are the error functions of the RBMs, but this will be changed shortly. This also means that the optimization trainer had to be changed. It is now always using the ErrorFunction for now and only loss/optimizer/stopping criterion can be changed. 2. FFNet got reworked. It does not allow arbitrary structures any more but only three different types: struct FFNetStructures{ enum ConnectionType{ Normal, //< Layerwise connectivity without shortcuts InputOutputShortcut, //< Normal with additional shortcuts from input to output neuron Full //< Every layer is fully connected to all neurons in the lower layer }; }; usage is: std::size_t input,hidden,output; //choose values as you like bool bias = true; //whether bias neurons are to be used network.setStructure(input,hidden,output,FFNetStructures::Normal, bias); and of course it is still allowed to have an arbitrary amount of hidden layers. This change allowed us to make the network even faster while reducing code complexity. 3. Basic support for weighted datasets. LDA and CSVMTrainer now support weighted datasets which can be found in Data/WeightedDataset.h There are also new base classes: WeightedAbstract(Unsupervised)Trainer which can be found in ObjectiveFunctions/Trainers/ In short: we now have two datasets which allow every data point to have a (positive) weight signalising its importance. 
That means that if one point has weight three and all others have weight one, this is equivalent to an unweighted dataset where this point is included three times. An example application is bootstrapping. Another very easy application is giving all data points from one class a different weight - which allows for balancing of unbalanced datasets. For more information, please check out WeightedDataset.h as there is not yet a documentation for it. 4. ErrorFunction does not support cost functions any more. This is more a cosmetic change as this was never used. Regards, Oswin Krause ```
 Re: [Shark-project-user] converting data to RealVector From: aneesh chauhan - 2014-06-23 08:06:23 Attachments: Message as HTML ```Hi Oswin, Thanks for the quick response. Yes, I am creating a dataset from the data, and now I am doing it the way you suggest. One of the things I noticed was that, using your proposal, the processing time increased quite a bit, in comparison to how I was initially creating the dataset. I was doing the following: Data dataset = createDataFromRange(points); .... (1) Your suggestion was: Data dataset(input_data.size(), RealVector(1)); .... (2) Solution was to put provide a "reasonable" batch size in (2) when creating the dataset. Again, thanks a lot for the responding quickly. Best regards, Bakhol On Wed, Jun 18, 2014 at 5:21 PM, oswin krause < oswin.krause@...> wrote: > Hi Bakhol, > > I am assuming, you want to create a Dataset from it, right? One way to do > this should be: > > Data dataset(input_data.size(), RealVector(1)); > for(std::size_t i = 0; i != input_data.size(); ++i){ > dataset.element(i)(0) = input_data[i]; > } > > this skips the intermediate step of having a vector or RealVectors. > > Regards, > Oswin > > > On 17.06.2014 18:24, aneesh chauhan wrote: > > Dear Shark-ers, > > I am new to the library and am facing a (hopefully trivial) problem. > I want to use some of the functionality of this library within the tool I > am developing. > More specifically, I am using the KMeans implementation available with > Shark. 
> > The problem is the following: > The data I receive is a vector of doubles and I want to convert it to > RealVector, so I do a "point-by-point" conversion, as so: > > std::vector > convertToRealVector ( std::vector const& input_data) > { > > std::vector points; > std::vector ::const_iterator itr; > > for (itr = input_data.begin(); itr != input_data.end(); ++itr ) > { > double p = *itr; > > RealVector point(1); > init(point) << p; > > points.push_back(point); > } > > return points; > } > > > It seems obvious in my mind that there must be a better way to do such > data conversion. But I have been unable to find a way. Could someone please > direct me into the correct direction? > > Thanks in advance, > Bakhol > > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. > Leverages Graph Analysis for Fast Processing & Easy Data Explorationhttp://p.sf.net/sfu/hpccsystems > > > > _______________________________________________ > Shark-project-user mailing listShark-project-user@...nethttps://lists.sourceforge.net/lists/listinfo/shark-project-user > > > > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. > Leverages Graph Analysis for Fast Processing & Easy Data Exploration > http://p.sf.net/sfu/hpccsystems > _______________________________________________ > Shark-project-user mailing list > Shark-project-user@... > https://lists.sourceforge.net/lists/listinfo/shark-project-user > > ```
 Re: [Shark-project-user] Training RNN From: oswin krause - 2014-06-23 07:07:11 Attachments: Message as HTML ```Hi, the problem is, that there is currently no loss implemented for Sequences. The Model exists, but it is not yet ready for training. I can try to get that done today or tomorrow, but I have got a few things on my table. Regards, Oswin On 19.06.2014 08:51, zoot suit wrote: > Hi Christian, thanks for the reply! > > My data is a time series of control signals and audio measurements. > I tried to start by using squared loss. The following fails > compilation with the message > "error C2660: > 'shark::ErrorFunction::ErrorFunction' : function > does not take 2 arguments" > > RecurrentStructure networkStructure; > unsigned numInput=2; > unsigned numHidden=2; > unsigned numOutput=1; > networkStructure.setStructure(numInput,numHidden,numOutput); > RNNet network(&networkStructure); > SquaredLoss<> loss; > ErrorFunction<> errorFunction(&network,&loss); > > I guess there is a template parameter mismatch because RNNet is > AbstractModel. However, the following also fails > compilation with the message > "squaredloss.h(70) : error C2039: 'size1' : is not a member of > 'std::vector<_Ty>'" > > SquaredLoss loss; > > I need a toy example of training a RNNet which will compile and run. > > Regards, Ed > > > Could you be more specific? > > What does your data look like? > > What error function would you like to use? > > > > Currently, the Shark RNN is not fully complete. > > > > Cheers, > > Christian > > > > > > >I want to use labelled data sequences to train a RNN, then use the > > >trained RNN to predict labels for a test sequence. > > > > > >How can I train a RNNet using Shark? Is there a class derived from > > >shark::AbstractTrainer which I should use? 
> > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. > Leverages Graph Analysis for Fast Processing & Easy Data Exploration > http://p.sf.net/sfu/hpccsystems > > > _______________________________________________ > Shark-project-user mailing list > Shark-project-user@... > https://lists.sourceforge.net/lists/listinfo/shark-project-user ```
 Re: [Shark-project-user] Problem with building examples in shark 3.0 From: oswin krause - 2014-06-23 06:54:09 Attachments: Message as HTML ```Hi, this is strange. could you open shark/src/CMakeLists.txt and remove \${LIB_TYPE} in line 24, i.e. ADD_LIBRARY( shark \${LIB_TYPE} \${SRCS} ) ->ADD_LIBRARY( shark \${SRCS} ) I am not sure why exactly cmake does not like it. It is also the first time i see this warning, but i can see, why it happens :). after the change above all the other errors should vanish. I think i will remove the option for windows. On 22.06.2014 14:56, Alexander Hanel wrote: > Hi, > i'm trying to get shark 3.0 (revision 3272) to work, but i fail to > build the examples. my system is win7 32bit with Visual Studio 2010, > i'am using cmake 3.0.0 and boost 1.55.0 32bit as installer. > In detail, I follow the youtube video on the shark website to build > shark with VS2010. When i run cmake i have to set OPT_DYNAMIC_LIBRARY > to true, because otherwise the generating failes with the following error: > CMake Error at src/CMakeLists.txt:24 (ADD_LIBRARY): > Cannot find source file: > DYNAMIC > Tried extensions .c .C .c++ .cc .cpp .cxx .m .M .mm .h .hh .h++ .hm .hpp > .hxx .in .txx > Besides this error i get the following warning when pressing the > "configure" button in cmake: > CMake Warning (dev) at Test/CMakeLists.txt:247 (ADD_CUSTOM_COMMAND): > Policy CMP0040 is not set: The target in the TARGET signature of > add_custom_command() must exist. Run "cmake --help-policy CMP0040" for > policy details. Use the cmake_policy command to set the policy and > suppress this warning. > The target name "Data_HDF5" is unknown in this context. > This warning is for project developers. Use -Wno-dev to suppress it. > This message occurrs independently from the value of OPT_DYNAMIC_LIBRARY. > If this parameter is set to true, the generate-step succeeds. 
After > this step I switch to VS2010 (in debug mode), open shark.sln, select > the project "shark" as start project, do a right click on this word > and select "build new". This step works without errors. > When I now select quickstartTutorial as start project, do a right > click and build only this project new, it fails with the following > error message: > quickstartTutorial.obj : error LNK2019: Verweis auf nicht aufgelöstes > externes Symbol (="unresolved external symbol") ""public: virtual void > __thiscall shark::LDA::train(class > shark::LinearClassifier > &,class > shark::LabeledData,unsigned int> > const &)" (? > train@...@shark@@UAEXAAV?\$LinearClassifier@...?\$vector@...@blas@...@@@2@...?\$LabeledData@...?\$vector@...@blas@...@@I@...@@Z)" > in Funktion "__catch\$_main\$0". > quickstartTutorial.obj : error LNK2019: Verweis auf nicht aufgelöstes > externes Symbol ""void __cdecl shark::importCSV(class > shark::LabeledData shark::blas::vector,unsigned int> &,class > std::basic_string,class > std::allocator >,enum > shark::LabelPosition,char,char,unsigned int)" > (?importCSV@...@@YAXAAV?\$LabeledData@...?\$vector@...@blas@...@@I@...@V?\$basic_string@...?\$char_traits@...@std@@V? > \$allocator@...@2@@std@@W4LabelPosition@...@DDI@...)" in Funktion "_main". > C:\[...]\SharkTest\examples\bin\Debug\quickstartTutorial.exe : fatal > error LNK1120: 2 nicht aufgelöste externe Verweise. > > At this point, i have no ideas how to fix this problem. Can you help > me, please? > Thank you in advance > Alex > > > > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. 
> Leverages Graph Analysis for Fast Processing & Easy Data Exploration > http://p.sf.net/sfu/hpccsystems > > > _______________________________________________ > Shark-project-user mailing list > Shark-project-user@... > https://lists.sourceforge.net/lists/listinfo/shark-project-user ```
 [Shark-project-user] Problem with building examples in shark 3.0 From: Alexander Hanel - 2014-06-22 12:56:55 ```
Hi,
I'm trying to get shark 3.0 (revision 3272) to work, but I fail to build the examples. My system is Win7 32-bit with Visual Studio 2010; I am using CMake 3.0.0 and Boost 1.55.0 32-bit from the installer.

In detail, I follow the YouTube video on the Shark website to build Shark with VS2010. When I run CMake I have to set OPT_DYNAMIC_LIBRARY to true, because otherwise the generate step fails with the following error:

CMake Error at src/CMakeLists.txt:24 (ADD_LIBRARY):
Cannot find source file:
DYNAMIC
Tried extensions .c .C .c++ .cc .cpp .cxx .m .M .mm .h .hh .h++ .hm .hpp
.hxx .in .txx

Besides this error, I get the following warning when pressing the "configure" button in CMake:

CMake Warning (dev) at Test/CMakeLists.txt:247 (ADD_CUSTOM_COMMAND):
Policy CMP0040 is not set: The target in the TARGET signature of
add_custom_command() must exist. Run "cmake --help-policy CMP0040" for
policy details. Use the cmake_policy command to set the policy and
suppress this warning.
The target name "Data_HDF5" is unknown in this context.
This warning is for project developers. Use -Wno-dev to suppress it.

This message occurs independently of the value of OPT_DYNAMIC_LIBRARY.

If this parameter is set to true, the generate step succeeds. After this step I switch to VS2010 (in debug mode), open shark.sln, set the project "shark" as the startup project, right-click it and select "build new". This step works without errors.
When I now select quickstartTutorial as the startup project, right-click it and rebuild only this project, it fails with the following error message:

quickstartTutorial.obj : error LNK2019: Verweis auf nicht aufgelöstes externes Symbol (="unresolved external symbol") ""public: virtual void __thiscall shark::LDA::train(class
shark::LinearClassifier<class shark::blas::vector<double> > &,class shark::LabeledData<class shark::blas::vector<double>,unsigned int> const &)" (?
train@...@shark@@UAEXAAV?\$LinearClassifier@...?\$vector@...@blas@...@@@2@...?\$LabeledData@...?\$vector@...@blas@...@@I@...@@Z)" in Funktion "__catch\$_main\$0".

quickstartTutorial.obj : error LNK2019: Verweis auf nicht aufgelöstes externes Symbol ""void __cdecl shark::importCSV(class shark::LabeledData<class
shark::blas::vector<double>,unsigned int> &,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,enum
shark::LabelPosition,char,char,unsigned int)" (?importCSV@...@@YAXAAV?\$LabeledData@...?\$vector@...@blas@...@@I@...@V?\$basic_string@...?\$char_traits@...@std@@V?
\$allocator@...@2@@std@@W4LabelPosition@...@DDI@...)" in Funktion "_main".

C:\[...]\SharkTest\examples\bin\Debug\quickstartTutorial.exe : fatal error LNK1120: 2 nicht aufgelöste externe Verweise.

At this point, I have no idea how to fix this problem. Can you help me, please?

Alex

```
 Re: [Shark-project-user] Training RNN From: zoot suit - 2014-06-19 06:51:38 Attachments: Message as HTML ```Hi Christian, thanks for the reply! My data is a time series of control signals and audio measurements. I tried to start by using squared loss. The following fails compilation with the message "error C2660: 'shark::ErrorFunction::ErrorFunction' : function does not take 2 arguments" RecurrentStructure networkStructure; unsigned numInput=2; unsigned numHidden=2; unsigned numOutput=1; networkStructure.setStructure(numInput,numHidden,numOutput); RNNet network(&networkStructure); SquaredLoss<> loss; ErrorFunction<> errorFunction(&network,&loss); I guess there is a template parameter mismatch because RNNet is AbstractModel. However, the following also fails compilation with the message "squaredloss.h(70) : error C2039: 'size1' : is not a member of 'std::vector<_Ty>'" SquaredLoss loss; I need a toy example of training a RNNet which will compile and run. Regards, Ed > Could you be more specific? > What does your data look like? > What error function would you like to use? > > Currently, the Shark RNN is not fully complete. > > Cheers, > Christian > > > >I want to use labelled data sequences to train a RNN, then use the > >trained RNN to predict labels for a test sequence. > > > >How can I train a RNNet using Shark? Is there a class derived from > >shark::AbstractTrainer which I should use? ```
 Re: [Shark-project-user] converting data to RealVector From: oswin krause - 2014-06-18 13:22:15 Attachments: Message as HTML ```Hi Bakhol, I am assuming, you want to create a Dataset from it, right? One way to do this should be: Data dataset(input_data.size(), RealVector(1)); for(std::size_t i = 0; i != input_data.size(); ++i){ dataset.element(i)(0) = input_data[i]; } this skips the intermediate step of having a vector or RealVectors. Regards, Oswin On 17.06.2014 18:24, aneesh chauhan wrote: > Dear Shark-ers, > > I am new to the library and am facing a (hopefully trivial) problem. > I want to use some of the functionality of this library within the > tool I am developing. > More specifically, I am using the KMeans implementation available with > Shark. > > The problem is the following: > The data I receive is a vector of doubles and I want to convert it > to RealVector, so I do a "point-by-point" conversion, as so: > > std::vector > convertToRealVector ( std::vector const& input_data) > { > > std::vector points; > std::vector ::const_iterator itr; > > for (itr = input_data.begin(); itr != input_data.end(); ++itr ) > { > double p = *itr; > > RealVector point(1); > init(point) << p; > > points.push_back(point); > } > > return points; > } > > > It seems obvious in my mind that there must be a better way to do such > data conversion. But I have been unable to find a way. Could someone > please direct me into the correct direction? > > Thanks in advance, > Bakhol > > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. > Leverages Graph Analysis for Fast Processing & Easy Data Exploration > http://p.sf.net/sfu/hpccsystems > > > _______________________________________________ > Shark-project-user mailing list > Shark-project-user@... 
```
 [Shark-project-user] converting data to RealVector From: aneesh chauhan - 2014-06-17 16:24:50 Attachments: Message as HTML ```Dear Shark-ers, I am new to the library and am facing a (hopefully trivial) problem. I want to use some of the functionality of this library within the tool I am developing. More specifically, I am using the KMeans implementation available with Shark. The problem is the following: the data I receive is a vector of doubles and I want to convert it to RealVector, so I do a "point-by-point" conversion, like so:

std::vector<RealVector> convertToRealVector(std::vector<double> const& input_data)
{
    std::vector<RealVector> points;
    std::vector<double>::const_iterator itr;
    for (itr = input_data.begin(); itr != input_data.end(); ++itr)
    {
        double p = *itr;
        RealVector point(1);
        init(point) << p;
        points.push_back(point);
    }
    return points;
}

It seems obvious to me that there must be a better way to do such a data conversion, but I have been unable to find one. Could someone please point me in the right direction? Thanks in advance, Bakhol ```
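[Editor's note] The point-by-point conversion asked about above can be sketched without Shark at all. Here `RealVector` is a hypothetical stdlib stand-in for `shark::RealVector`, so this only illustrates the wrapping pattern (each scalar becomes a one-dimensional vector), not Shark's actual `Data` API:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stdlib stand-in for shark::RealVector -- illustrates the
// wrapping pattern only, not Shark's actual vector type.
using RealVector = std::vector<double>;

// Wrap each scalar of the input in a one-dimensional RealVector.
std::vector<RealVector> convertToRealVectors(std::vector<double> const& input) {
    std::vector<RealVector> points;
    points.reserve(input.size());
    for (double p : input)
        points.push_back(RealVector(1, p));  // a length-1 vector holding p
    return points;
}
```

As the reply in this thread notes, Shark can skip the intermediate vector entirely by filling a dataset element by element.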
 [Shark-project-user] Training RNN From: zoot suit - 2014-06-16 06:30:32 Attachments: Message as HTML ```Hi, I want to use labelled data sequences to train an RNN, then use the trained RNN to predict labels for a test sequence. How can I train an RNNet using Shark? Is there a class derived from shark::AbstractTrainer which I should use?

RecurrentStructure networkStructure;
unsigned numInput=2;
unsigned numHidden=2;
unsigned numOutput=1;
networkStructure.setStructure(numInput,numHidden,numOutput);
RNNet network(&networkStructure);
//Now what...?

Thanks, Ed ```
 Re: [Shark-project-user] MO-CMA-ES creating only invalid offsprings From: François-Michel De Rainville - 2014-05-06 00:11:35 ```I think there is still a little bug in Shark: in the MOCMA::step function there is a double evaluation, one in the for loop and the other right after. Regards, François-Michel On 2014-05-04, at 09:19, oswin krause wrote: > Sounds great! > > Are the results consistent with the MOCMA in shark? I think having two reference implementations in the wild is a good thing, but then they should match as closely as possible (modulo random number generator, seeds and the like). > > Regards, > Oswin > > On 02.05.2014 22:13, François-Michel De Rainville wrote: >> I wanted to thank all for the support with Shark and let you know that I successfully reimplemented the MO-CMA-ES with DEAP with the help of your comprehensive implementation. >> >> Thanks again, >> Regards, >> François-Michel ```
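[Editor's note] The double-evaluation report above concerns Shark's PenalizingEvaluator. As a hedged, stdlib-only sketch of the constraint handling described in the MO-CMA-ES literature (the function `penalizedEvaluate` and the box-constraint setup are hypothetical illustrations, not Shark's API): an infeasible point is clipped to its closest feasible counterpart, evaluated there, and charged a penalty proportional to the squared clipping distance, while a feasible point should be evaluated exactly once.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

// Hypothetical sketch of penalized evaluation under box constraints
// [lo, hi]^n -- penalizedEvaluate is NOT Shark's PenalizingEvaluator API.
// Infeasible points are clipped to the closest feasible point, evaluated
// there, and charged alpha times the squared clipping distance.
// Feasible points are evaluated exactly once (note the early return).
double penalizedEvaluate(std::vector<double> const& x,
                         std::function<double(std::vector<double> const&)> const& f,
                         double lo, double hi, double alpha = 1e-6) {
    std::vector<double> feasible(x);
    double dist2 = 0.0;
    for (double& xi : feasible) {
        double clipped = std::min(std::max(xi, lo), hi);
        dist2 += (xi - clipped) * (xi - clipped);
        xi = clipped;
    }
    if (dist2 == 0.0)
        return f(x);                      // feasible: one evaluation, no penalty
    return f(feasible) + alpha * dist2;   // infeasible: penalized fitness
}
```

The early return is the point of the bug report: without it (or an `else` branch), a feasible individual gets evaluated twice.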
 [Shark-project-user] Another report about building Shark in Visual Studio 2013 (svn revision 3244) From: Director De Juego - 2014-05-04 21:32:23 Attachments: Message as HTML ```by DJuego, your humble servant, :-) Environment: Windows 8.1 64b Visual Studio 2013 SP1 ------------------------ boost-1.55.0 (svn revision 87799) <- The current published release 1.55.0 needs patches to build with Visual Studio 2013. --------------------------- > cmake -G 'NMake Makefiles' -D CMAKE_BUILD_TYPE=Debug -D CMAKE_INSTALL_PREFIX=\$DIRECTORIO_INSTALACION_LIBRERIA -D Boost_INCLUDE_DIR=\$DIRECTORIO_INCLUDE_BOOST -D Boost_PROGRAM_OPTIONS_LIBRARY_DEBUG=\$DIRECTORIO_LIB_BOOST/boost_program_options-vc120-mt-gd-1_55.lib -D Boost_FILESYSTEM_LIBRARY_DEBUG=\$DIRECTORIO_LIB_BOOST/boost_filesystem-vc120-mt-gd-1_55.lib -D Boost_SYSTEM_LIBRARY_DEBUG=\$DIRECTORIO_LIB_BOOST/libboost_system-vc120-mt-gd-1_55.lib -D OPT_DYNAMIC_LIBRARY:BOOL=OFF -D OPT_MAKE_TESTS:BOOL=ON -D OPT_COMPILE_EXAMPLES:BOOL=OFF -D OPT_COMPILE_DOCUMENTATION:BOOL=OFF -D DOXYGEN_EXECUTABLE=\$ARCHIVO_DOXYGEN -D PYTHON_EXECUTABLE=\$ARCHIVO_PYTHON_3 ../..
>nmake Instalacion Shark -- The CXX compiler identification is MSVC 18.0.21005.1 -- Check for working CXX compiler: c:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/bin/cl.exe -- Check for working CXX compiler: c:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/bin/cl.exe -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- The C compiler identification is MSVC 18.0.21005.1 -- Check for working C compiler: c:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/bin/cl.exe -- Check for working C compiler: c:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/bin/cl.exe -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- C++ compiler: c:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/bin/cl.exe -- C compiler: c:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/bin/cl.exe -- Boost version: 1.55.0 -- Found the following Boost libraries: -- system -- date_time -- filesystem -- program_options -- signals -- serialization -- thread -- unit_test_framework -- Using boost from w:/Archivos_de_Programa/MSVC2013/x32/boost-trunk/lib -- HDF5 not found, skip -- Win32 -- Configuring done -- Generating done -- Build files have been written to: P:/Plataformas/x32-x64/TRABAJO_MSVC2013_x32/Shark/builds/debug_lib Scanning dependencies of target Version [ 0%] Building CXX object src/CMakeFiles/Version.dir/Core/Version.cpp.obj Version.cpp Linking CXX executable bin\Version.exe [ 0%] Built target Version Scanning dependencies of target shark [ 0%] Building CXX object src/CMakeFiles/shark.dir/Models/Softmax.cpp.obj Softmax.cpp p:\plataformas\x32-x64\trabajo_msvc2013_x32\shark\include\shark\core\utility\Impl/boost_iterator_facade_fixed.hpp(292) : error C2143: syntax error : missing ';' before '<' p:\plataformas\x32-x64\trabajo_msvc2013_x32\shark\include\shark\core\utility\Impl/boost_iterator_facade_fixed.hpp(362) : see reference to class template instantiation 'boost::iterator_facade_fixed' being compiled 
p:\plataformas\x32-x64\trabajo_msvc2013_x32\shark\include\shark\core\utility\Impl/boost_iterator_facade_fixed.hpp(292) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int p:\plataformas\x32-x64\trabajo_msvc2013_x32\shark\include\shark\core\utility\Impl/boost_iterator_facade_fixed.hpp(294) : error C2334: unexpected token(s) preceding '{'; skipping apparent function body
[The release_lib configuration (build files written to P:/Plataformas/x32-x64/TRABAJO_MSVC2013_x32/Shark/builds/release_lib) produces identical CMake output and fails with the same three errors in boost_iterator_facade_fixed.hpp at lines (292), (362) and (294).] ```
 Re: [Shark-project-user] MO-CMA-ES creating only invalid offsprings From: François-Michel De Rainville - 2014-05-04 15:40:22 Attachments: Message as HTML ```At first look it is pretty close. I was looking at some measures to compare the two algorithms in addition to the final hypervolume. I thought of the number of successful mutations; any other thoughts? On May 4, 2014 9:19 AM, "oswin krause" wrote: > Sounds great! > > Are the results consistent with the MOCMA in shark? I think having two > reference implementations in the wild is a good thing, but then they should > match as closely as possible (modulo random number generator, seeds and the > like). > > Regards, > Oswin > > On 02.05.2014 22:13, François-Michel De Rainville wrote: > >> I wanted to thank all for the support with Shark and let you know that I >> successfully reimplemented the MO-CMA-ES with DEAP with the help of your >> comprehensive implementation. >> >> Thanks again, >> Regards, >> François-Michel >> > ```
 Re: [Shark-project-user] MO-CMA-ES creating only invalid offsprings From: oswin krause - 2014-05-04 13:20:05 ```Sounds great! Are the results consistent with the MOCMA in shark? I think having two reference implementations in the wild is a good thing, but then they should match as closely as possible (modulo random number generator, seeds and the like). Regards, Oswin On 02.05.2014 22:13, François-Michel De Rainville wrote: > I wanted to thank all for the support with Shark and let you know that > I successfully reimplemented the MO-CMA-ES with DEAP with the help of > your comprehensive implementation. > > Thanks again, > Regards, > François-Michel ```
 Re: [Shark-project-user] MO-CMA-ES creating only invalid offsprings From: François-Michel De Rainville - 2014-05-02 20:14:25 Attachments: Message as HTML ```I wanted to thank all for the support with Shark and let you know that I successfully reimplemented the MO-CMA-ES with DEAP with the help of your comprehensive implementation. Thanks again, Regards, François-Michel 2014-04-28 12:26 GMT-04:00 Christian Igel : > Dear François-Michel, > > Thank you for your understanding. > Please keep us informed about your Shark user experience. > > Best regards, > Christian > > On Apr 28, 2014, at 15:17 , François-Michel De Rainville wrote: > > > I agree, there is not much development teams out there that respond that > quickly to bug reports! > > > > You shouldn't be embarassed, I totally understand that Shark is going a > major refactoring and keeping things working is often very difficult. I > might have seemed impatient, but it is not the case, as you probably have > noticed english is not my primary language, I may have used too strong > words for what I meant. I'll continue testing and reporting stuff if I find > something else. Isn't it what open source is made for? :) > > > > Thanks again, > > Shark 3.0 looks very promising! > > Cheers, > > François-Michel > > > > > > 2014-04-28 4:18 GMT-04:00 Christian Igel : > > Dear François-Michel, > > > > Thank you for your feedback. Oswin was so kind to quickly apply a fix. > > > > > I'm still trying to figure out the difference between your > implementation and mine, and now (version 3234) it seems that the > MOCMASimple tutorial gives all NaN individuals in 30D (tested for ZDT1 and > ZDT2) after a couple of generations. I tracked the bug up to an all 0 > evolution path, but I don't have much time to investigate any further. > > > > > > > Thank you, your input has been very valuable. > > Please try again and contact us if you get results that are strange > and/or do not match the published results. 
> > > > > Would you know by any chance the version number used for the paper > experiment? > > > > > > > Unfortunately, that's difficult. Which paper are you referring to? > > I programmed the MO-CMA-ES used in the Evolutionary Computation paper. I > added the code to Shark 2 and did not touch it afterwards (except for the > 1+1 covariance matrix update). > > Now, we rewrote Shark and the Shark 2 code is not in the current trunk. > I would not rule out that I/one could dig out the Shark code from 2006/2007 > and compile it - but this sounds really scary. > > Anyway, the current version of the MO-CMA-ES algorithm is the one > published by Thomas Voss et al. at GECCO. Thomas, who left academia, > developed his own code based on mine parallel to the work on Shark 3. The > Shark 3 MO-CMA-ES code is based on Thomas' code, kindly cleaned and adopted > by Oswin. > > > > To make a long story short, I am embarrassed that you cannot reproduce > the results from the papers with the current version out of the box. The > best and most sustainable solution is to repair and improve the Shark 3 > version until it works at least as good as it did before. However, we are > currently not working with the MO-CMA-ES, thus, we need feedback about bugs > and strange behavior. > > > > Kind regards, > > Christian > > > > > > > > > > > > > > > > > Best regards, > > > François-Michel > > > > > > > > > Le 2014-04-16 à 14:30, oswin krause a écrit : > > > > > >> Hi, > > >> > > >> arrgh. I looked at this code at least 10 times today, every time > saying: "of course its min"...it has to be max :( > > >> > > >> my local version is working now, i found two other (albeit much > smaller) bugs and added a few unit tests. 
The MOCMA computes the > hypervolume on ZDT1 with 2 objectives and mu=10 up to a difference 0.002 > compared to the results here: > > >> > > >> > http://www.tik.ee.ethz.ch/sop/download/supplementary/testproblems/zdt1/ > > >> > > >> I will upload the fixes tomorrow, i first want to make the unit tests > a bit better before i tell you that this version is now reliable :) > > >> > > >> > > >> On 16.04.2014 20:10, François-Michel De Rainville wrote: > > >>> I may be completely lost, but it looks like the > HypervolumeIndicator::leastContributor function does its std::min_element > (on i) on the hypervolume of the point sets A\A[i] instead of the > min_element of the differences of hypervolumes between A and A\A[i]. > > >>> > > >>> Best regards > > >>> > > >>> > > >>> 2014-04-16 9:13 GMT-04:00 François-Michel De Rainville < > f.derainville@...>: > > >>> While trying to find the source of the bug I've found something > strange in PenalizingEvaluator. When an individual is valid it is evaluated > twice shouldn't there be an "else" clause to the if( feasible )? > > >>> > > >>> > > >>> 2014-04-16 3:19 GMT-04:00 oswin krause < > oswin.krause@...>: > > >>> > > >>> Hi François-Michel, > > >>> > > >>> Thanks for another report. I think i found out what might cause this > issue (a NaN goes a long way...). Could you for now try to declare the > MOCMA as: > > >>> > > >>> shark::MOCMA mocma; > > >>> mocma.notionOfSuccess() = shark::MOCMA::IndividualBased; > > >>> > > >>> This should give me time to fix things. > > >>> could you tell me whether this change leads to the correct results > in your computations? thanks! > > >>> > > >>> Regards, > > >>> Oswin > > >>> > > >>> > > >>> On 15.04.2014 15:26, François-Michel De Rainville wrote: > > >>>> With version 3207 I have a segmentation fault when setting the > DTLZ2 problem to 3 objectives and 30 variables (I haven't touched anything > else). GDB backtrace is : > > >>>> > > >>>> Program received signal SIGSEGV, Segmentation fault. 
> > >>>> 0x000000000046bb2e in > shark::HypervolumeCalculator::stream::view_reference > > >, > std::allocator::view_reference > > > > >, shark::FitnessExtractor, shark::blas::vector > > (this=0x7fffffffda10, regionLow=..., regionUp=..., points=..., > > >>>> extractor=..., split=0, cover=-nan(0x8000000000000)) > > >>>> at > /gel/usr/fmder1/dev/Shark/include/shark/Algorithms/DirectSearch/HypervolumeCalculator.h:409 > > >>>> 409 if( extractor( points[i] )[piles[i]] < trellis[piles[i]] ) { > > >>>> > > >>>> > > >>>> Regards, > > >>>> François-Michel > > >>>> > > >>>> > > >>>> 2014-04-15 4:17 GMT-04:00 oswin krause < > oswin.krause@...>: > > >>>> Hi, > > >>>> > > >>>> i found one bug already that I fixed. This is a bit unfortunate as > it is actually a bug in LinAlg which I will fix later (no, nothing > important as long as you don't multiply double values with integer > vectors...). We now have a test case for the penalizedEvaluator component. > I will upload it later today. I think this accounts fully for the bug given > that the penalty value is 1.e-6. and 3917*1e-6 <1 and thus the penalty will > be 0. > > >>>> > > >>>> Thanks again for the report. I will now take a closer look at the > single parts :) hopefully i did not mess up more during the rewrite(it was > supposed to remove bugs, not adding them). > > >>>> > > >>>> > > >>>> On 14.04.2014 15:03, oswin krause wrote: > > >>>>> Hi, > > >>>>> > > >>>>> thanks for your report. We have recently rewritten the whole > package and it might be that we created a new set of errors during its > implementation. We are currently in the process of testing it. Could you > try an older svn version? especially revision 3128 should be okay. > > >>>>> > > >>>>> Thanks again for the report and sorry for the inconvenience. > > >>>>> > > >>>>> On 14.04.2014 14:54, François-Michel De Rainville wrote: > > >>>>>> Sorry for double posting, but I think something went wrong with > the last post. 
I received a confirmation number from the mailing list but > when I click on it, it tells me it is invalid. > > >>>>>> > > >>>>>> ---- > > >>>>>> > > >>>>>> Dear Shark developers, > > >>>>>> > > >>>>>> I was trying to implement MO-CMA-ES for DEAP[1] and I encountered > a very strange behaviour. Based on the implementation of the 2010 GECCO > paper by T. Voss, N. Hansen and C. Igel and the test problems presented > (zdt1 in 30D for example), the strategy is unable to generate a single > valid individual during the complete evolution process. Moreover, the > average distance between the individual and its closest feasible version > grows above 1000 after 250 generations. > > >>>>>> > > >>>>>> (Note that I use the hypervolume computation by Simon Wessing in > Python) > > >>>>>> > > >>>>>> Then I decided to try the implementation available in Shark and > fumbled on the exact same problem. In the MOCMA.h file I added a simple > counter*. I also changed the function in the original MOCMASimple.tpp to > zdt1 in 30 dimensions (zdt1.setNumberOfVariables(30)) and printed the > counter on each generation. The end result is an invalid count of 24800 > (which is exactly the number of created offsprings) and the average > distance is 3917.00. > > >>>>>> > > >>>>>> Another thing, I found that a test for equality between the > values in penalizedFitness and unpenalizedFitness returns always true > (equal) even for invalid individuals. I don't know if it is the origin of > the above described behaviour. > > >>>>>> > > >>>>>> * Here is the code : https://gist.github.com/fmder/10287805 > > >>>>>> > > >>>>>> [1] deap.gel.ulaval.ca > > >>>>>> > > >>>>>> Cheers, > > >>>>>> François-Michel
```
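[Editor's note] The `HypervolumeIndicator::leastContributor` bug discussed in this thread comes down to a min/max mix-up: the least hypervolume contributor is the point whose removal loses the least hypervolume, i.e. the argmin over i of HV(A) - HV(A \ {a_i}), which is the same as the argmax of HV(A \ {a_i}). A stdlib-only 2-D sketch for minimization problems (`hypervolume2D` and `leastContributor` are illustrative helpers, not Shark's implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

using Point = std::pair<double, double>;  // (f1, f2), both minimized

// 2-D hypervolume of a mutually nondominated set w.r.t. reference point r
// (r must be dominated by every point): sweep over points sorted by f1,
// adding one rectangle per point.
double hypervolume2D(std::vector<Point> pts, Point r) {
    std::sort(pts.begin(), pts.end());
    double hv = 0.0;
    for (std::size_t i = 0; i != pts.size(); ++i) {
        double width = (i + 1 == pts.size() ? r.first : pts[i + 1].first) - pts[i].first;
        hv += width * (r.second - pts[i].second);
    }
    return hv;
}

// Least contributor: removing it loses the LEAST hypervolume, i.e. the
// argmin of HV(A) - HV(A \ {a_i}) == the argmax of HV(A \ {a_i}).
// Taking the argmin of HV(A \ {a_i}) instead -- the reported bug -- would
// pick the point whose removal hurts the MOST.
std::size_t leastContributor(std::vector<Point> const& pts, Point r) {
    std::size_t best = 0;
    double bestRemaining = -1.0;
    for (std::size_t i = 0; i != pts.size(); ++i) {
        std::vector<Point> rest(pts);
        rest.erase(rest.begin() + static_cast<std::ptrdiff_t>(i));
        double remaining = hypervolume2D(rest, r);
        if (remaining > bestRemaining) {
            bestRemaining = remaining;
            best = i;
        }
    }
    return best;
}
```

For {(1,4), (2,3), (4,1)} with reference (5,5), the exclusive contributions are 1, 2 and 2, so (1,4) is the least contributor.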
 Re: [Shark-project-user] Hive Internal Error From: oswin krause - 2014-04-30 07:32:12 Attachments: Message as HTML ```Hi, Sorry to say this, but you sent your mail to the wrong list. We are Shark, the C++ machine learning library: http://image.diku.dk/shark/ Regards, Oswin On 30.04.2014 08:20, NehaS Singh wrote: > > Hi, > > I am using shark 0.9.1 and spark0.9.1.Hive version which I am using is > cdh4.6.I have made the required changes in shark.env.sh > > But when I try to create table in shark ,it is throwing the following > error > > Hive Internal Error: java.util.NoSuchElementException(null) > > Regards, > > Neha Singh ```
 [Shark-project-user] Hive Internal Error From: NehaS Singh - 2014-04-30 06:20:59 Attachments: Message as HTML ```Hi, I am using shark 0.9.1 and spark 0.9.1. The Hive version which I am using is cdh4.6. I have made the required changes in shark.env.sh. But when I try to create a table in shark, it throws the following error: Hive Internal Error: java.util.NoSuchElementException(null) Regards, Neha Singh ________________________________ The contents of this e-mail and any attachment(s) may contain confidential or privileged information for the intended recipient(s). Unintended recipients are prohibited from taking action on the basis of information in this e-mail and using or disseminating the information, and must notify the sender and delete it from their system. L&T Infotech will not accept responsibility or liability for the accuracy or completeness of, or the presence of any virus or disabling code in this e-mail" ```
 Re: [Shark-project-user] MO-CMA-ES creating only invalid offsprings From: François-Michel De Rainville - 2014-04-28 13:17:52 Attachments: Message as HTML ```I agree, there are not many development teams out there that respond that quickly to bug reports! You shouldn't be embarrassed; I totally understand that Shark is undergoing a major refactoring and keeping things working is often very difficult. I might have seemed impatient, but that is not the case; as you have probably noticed, English is not my primary language, and I may have used stronger words than I meant. I'll continue testing and reporting things if I find something else. Isn't that what open source is made for? :) Thanks again, Shark 3.0 looks very promising! Cheers, François-Michel 2014-04-28 4:18 GMT-04:00 Christian Igel : > Dear François-Michel, > > Thank you for your feedback. Oswin was so kind to quickly apply a fix. > > > I'm still trying to figure out the difference between your > implementation and mine, and now (version 3234) it seems that the > MOCMASimple tutorial gives all NaN individuals in 30D (tested for ZDT1 and > ZDT2) after a couple of generations. I tracked the bug up to an all 0 > evolution path, but I don't have much time to investigate any further. > > > > Thank you, your input has been very valuable. > Please try again and contact us if you get results that are strange and/or > do not match the published results. > > > Would you know by any chance the version number used for the paper > experiment? > > > > Unfortunately, that's difficult. Which paper are you referring to? > I programmed the MO-CMA-ES used in the Evolutionary Computation paper. I > added the code to Shark 2 and did not touch it afterwards (except for the > 1+1 covariance matrix update). > Now, we rewrote Shark and the Shark 2 code is not in the current trunk. I > would not rule out that I/one could dig out the Shark code from 2006/2007 > and compile it - but this sounds really scary. 
> Anyway, the current version of the MO-CMA-ES algorithm is the one > published by Thomas Voss et al. at GECCO. Thomas, who left academia, > developed his own code based on mine parallel to the work on Shark 3. The > Shark 3 MO-CMA-ES code is based on Thomas' code, kindly cleaned and adopted > by Oswin. > > To make a long story short, I am embarrassed that you cannot reproduce the > results from the papers with the current version out of the box. The best > and most sustainable solution is to repair and improve the Shark 3 version > until it works at least as good as it did before. However, we are currently > not working with the MO-CMA-ES, thus, we need feedback about bugs and > strange behavior. > > Kind regards, > Christian > > > > > > > > > Best regards, > > François-Michel > > > > > > Le 2014-04-16 à 14:30, oswin krause a écrit : > > > >> Hi, > >> > >> arrgh. I looked at this code at least 10 times today, every time > saying: "of course its min"...it has to be max :( > >> > >> my local version is working now, i found two other (albeit much > smaller) bugs and added a few unit tests. The MOCMA computes the > hypervolume on ZDT1 with 2 objectives and mu=10 up to a difference 0.002 > compared to the results here: > >> > >> http://www.tik.ee.ethz.ch/sop/download/supplementary/testproblems/zdt1/ > >> > >> I will upload the fixes tomorrow, i first want to make the unit tests a > bit better before i tell you that this version is now reliable :) > >> > >> > >> On 16.04.2014 20:10, François-Michel De Rainville wrote: > >>> I may be completely lost, but it looks like the > HypervolumeIndicator::leastContributor function does its std::min_element > (on i) on the hypervolume of the point sets A\A[i] instead of the > min_element of the differences of hypervolumes between A and A\A[i]. 
> >>>
> >>> Best regards
> >>>
> >>> 2014-04-16 9:13 GMT-04:00 François-Michel De Rainville <f.derainville@...>:
> >>> While trying to find the source of the bug I've found something strange in PenalizingEvaluator. When an individual is valid it is evaluated twice; shouldn't there be an "else" clause to the if( feasible )?
> >>>
> >>> 2014-04-16 3:19 GMT-04:00 oswin krause <oswin.krause@...>:
> >>>
> >>> Hi François-Michel,
> >>>
> >>> Thanks for another report. I think I found out what might cause this issue (a NaN goes a long way...). Could you for now try to declare the MOCMA as:
> >>>
> >>> shark::MOCMA mocma;
> >>> mocma.notionOfSuccess() = shark::MOCMA::IndividualBased;
> >>>
> >>> This should give me time to fix things. Could you tell me whether this change leads to the correct results in your computations? Thanks!
> >>>
> >>> Regards,
> >>> Oswin
> >>>
> >>> On 15.04.2014 15:26, François-Michel De Rainville wrote:
> >>>> With version 3207 I have a segmentation fault when setting the DTLZ2 problem to 3 objectives and 30 variables (I haven't touched anything else). The GDB backtrace is:
> >>>>
> >>>> Program received signal SIGSEGV, Segmentation fault.
> >>>> 0x000000000046bb2e in shark::HypervolumeCalculator::stream::view_reference > >, std::allocator::view_reference > > > >, shark::FitnessExtractor, shark::blas::vector > > (this=0x7fffffffda10, regionLow=..., regionUp=..., points=..., extractor=..., split=0, cover=-nan(0x8000000000000))
> >>>>     at /gel/usr/fmder1/dev/Shark/include/shark/Algorithms/DirectSearch/HypervolumeCalculator.h:409
> >>>> 409         if( extractor( points[i] )[piles[i]] < trellis[piles[i]] ) {
> >>>>
> >>>> Regards,
> >>>> François-Michel
> >>>>
> >>>> 2014-04-15 4:17 GMT-04:00 oswin krause <oswin.krause@...>:
> >>>> Hi,
> >>>>
> >>>> I found one bug already that I fixed. This is a bit unfortunate, as it is actually a bug in LinAlg which I will fix later (no, nothing important as long as you don't multiply double values with integer vectors...). We now have a test case for the PenalizingEvaluator component. I will upload it later today. I think this fully accounts for the bug, given that the penalty value is 1e-6 and 3917*1e-6 < 1, and thus the penalty will be 0.
> >>>>
> >>>> Thanks again for the report. I will now take a closer look at the single parts :) Hopefully I did not mess up more during the rewrite (it was supposed to remove bugs, not add them).
> >>>>
> >>>> On 14.04.2014 15:03, oswin krause wrote:
> >>>>> Hi,
> >>>>>
> >>>>> Thanks for your report. We have recently rewritten the whole package and it might be that we created a new set of errors during its implementation. We are currently in the process of testing it. Could you try an older SVN version? Especially revision 3128 should be okay.
> >>>>>
> >>>>> Thanks again for the report and sorry for the inconvenience.
> >>>>>
> >>>>> On 14.04.2014 14:54, François-Michel De Rainville wrote:
> >>>>>> Sorry for double posting, but I think something went wrong with the last post. I received a confirmation number from the mailing list, but when I click on it, it tells me it is invalid.
> >>>>>>
> >>>>>> ----
> >>>>>>
> >>>>>> Dear Shark developers,
> >>>>>>
> >>>>>> I was trying to implement MO-CMA-ES for DEAP [1] and I encountered a very strange behaviour. Based on the implementation of the 2010 GECCO paper by T. Voss, N. Hansen and C. Igel and the test problems presented (zdt1 in 30D, for example), the strategy is unable to generate a single valid individual during the complete evolution process. Moreover, the average distance between the individual and its closest feasible version grows above 1000 after 250 generations.
> >>>>>>
> >>>>>> (Note that I use the hypervolume computation by Simon Wessing in Python.)
> >>>>>>
> >>>>>> Then I decided to try the implementation available in Shark and stumbled on the exact same problem. In the MOCMA.h file I added a simple counter*. I also changed the function in the original MOCMASimple.tpp to zdt1 in 30 dimensions (zdt1.setNumberOfVariables(30)) and printed the counter on each generation. The end result is an invalid count of 24800 (which is exactly the number of created offspring) and an average distance of 3917.00.
> >>>>>>
> >>>>>> Another thing: I found that a test for equality between the values in penalizedFitness and unpenalizedFitness always returns true (equal), even for invalid individuals. I don't know if it is the origin of the behaviour described above.
> >>>>>>
> >>>>>> * Here is the code: https://gist.github.com/fmder/10287805
> >>>>>>
> >>>>>> [1] deap.gel.ulaval.ca
> >>>>>>
> >>>>>> Cheers,
> >>>>>> François-Michel
> >>>>>>
> >>>>>> ------------------------------------------------------------------------------
> >>>>>> Learn Graph Databases - Download FREE O'Reilly Book
> >>>>>> "Graph Databases" is the definitive new guide to graph databases and their
> >>>>>> applications. Written by three acclaimed leaders in the field,
> >>>>>> this first edition is now available. Download your free book today!
> >>>>>> http://p.sf.net/sfu/NeoTech
> >>>>>>
> >>>>>> _______________________________________________
> >>>>>> Shark-project-user mailing list
> >>>>>> Shark-project-user@...
> >>>>>> https://lists.sourceforge.net/lists/listinfo/shark-project-user
> >>
> >
> > ------------------------------------------------------------------------------
> > Start Your Social Network Today - Download eXo Platform
> > Build your Enterprise Intranet with eXo Platform Software
> > Java Based Open Source Intranet - Social, Extensible, Cloud Ready
> > Get Started Now And Turn Your Intranet Into A Collaboration Platform
> > http://p.sf.net/sfu/ExoPlatform
> > _______________________________________________
> > Shark-project-user mailing list
> > Shark-project-user@...
> > https://lists.sourceforge.net/lists/listinfo/shark-project-user
>
```
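The thread above attributes the zero penalty to a LinAlg bug when multiplying double values with integer vectors: with a penalty factor of 1e-6 and an average distance of 3917, the penalty term is about 0.0039, which truncates to 0 under integer arithmetic, so penalized and unpenalized fitness compare equal. The following is a minimal illustrative sketch of that failure mode; the function names are hypothetical and this is not Shark's actual code.

```cpp
// Correct variant: keep the penalty in floating point throughout.
double penalizedFitness(double unpenalized, double distance, double penaltyFactor) {
    return unpenalized + penaltyFactor * distance;
}

// Buggy variant: mimics multiplying a double factor into integer-typed
// storage; any penalty below 1 truncates to 0, so the penalized fitness
// equals the unpenalized one even for infeasible individuals.
double penalizedFitnessBuggy(double unpenalized, double distance, double penaltyFactor) {
    int truncatedPenalty = static_cast<int>(penaltyFactor * distance);
    return unpenalized + truncatedPenalty;
}
```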
 [Shark-project-user] Fwd: Re: MO-CMA-ES creating only invalid offsprings From: oswin krause - 2014-04-27 23:19:44 Attachments: Message as HTML
```
Forwarding, as the old message was withheld by SourceForge because of its length.

-------- Original Message --------
Subject: Re: [Shark-project-user] MO-CMA-ES creating only invalid offsprings
Date: Mon, 28 Apr 2014 01:17:19 +0200
From: oswin krause
To: Francois-Michel De Rainville
CC: shark-project-user@...

Hi,

I don't know which revision this was, nor even which Shark version; I did not write the paper :). Maybe Christian can help?

I have just deactivated the fast covariance update, as there was a numeric instability which led to a division by 0. Now it seems to be stable, albeit slower (but this should be roughly the same performance as in the paper).

On 27.04.2014 19:20, Francois-Michel De Rainville wrote:
> Hi again,
>
> I'm still trying to figure out the difference between your
> implementation and mine, and now (version 3234) it seems that the
> MOCMASimple tutorial gives all-NaN individuals in 30D (tested for ZDT1
> and ZDT2) after a couple of generations. I tracked the bug down to an
> all-zero evolution path, but I don't have much time to investigate any
> further.
>
> Would you know by any chance the version number used for the paper
> experiment?
>
> Best regards,
> François-Michel
```
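The instability reported above (an all-zero evolution path leading to a division by 0, after which every individual becomes NaN) is the classic failure of normalizing a zero-length vector. The sketch below shows one way to guard such a normalization step; `safeNormalize` is a hypothetical helper for illustration, not Shark's implementation of the covariance update.

```cpp
#include <cmath>
#include <vector>

// Normalize `path` in place and return its Euclidean norm.
// If the path is all zero, dividing by its norm would produce NaNs,
// so we skip the update and return 0 instead.
double safeNormalize(std::vector<double>& path) {
    double sq = 0.0;
    for (double v : path) sq += v * v;
    double norm = std::sqrt(sq);
    if (norm == 0.0) return 0.0;  // degenerate path: leave it untouched
    for (double& v : path) v /= norm;
    return norm;
}
```

A caller would check the returned norm and skip the dependent covariance/step-size update when it is 0, rather than propagating NaN through the population.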
