Here's a place to suggest, or ask for, any feature that you believe should be part of a parallel processing framework.
I'd love to implement ideas other than mine (which I've been running short of recently; I'm a victim of brain exhaustion).
Here's an idea. Not necessarily a feature, but if you could demonstrate how to parallelize JOONE (www.jooneworld.com) to speed up training neural networks, that would be a good real-world example of JPPF capabilities.
Green Tea have produced a Green Tea enabled version of JOONE (http://www.greenteatech.com/)
This is indeed a very interesting idea.
I've been "working" with neural networks for some time, more with an educational purpose than actual research, and that is why I developed JPPF in the first place.
I think JOONE is a great ANN framework, and I do intend to use it, when I can find the time between my day job and JPPF, that is.
I did take a look at their Distributed Training Environment (DTE), and it seems they arrived at the same conclusion regarding the granularity of what can be parallelized, when training ANNs.
Mostly, it's about training separate networks concurrently, which is very coarse.
However, it does fit my work well: I'm using genetic algorithms, or a hybrid genetic/standard NN training algorithm, to evolve and train populations of neural nets. Each net's evolution/training can thus be implemented as a JPPF task and run in parallel with the others, and that works great.
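To illustrate that coarse-grained scheme, here is a minimal sketch using plain java.util.concurrent as a stand-in for submitting one JPPF task per candidate network; `trainAndScore` and its numbers are invented for illustration and are not part of JPPF or JOONE:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Coarse-grained parallelism: each network in the population is trained and
// evaluated independently, and results are only gathered at the end -- the
// same shape as submitting one JPPF task per network to a grid.
public class PopulationTraining {

    // Stand-in for one network's evolution/training cycle: returns a fitness.
    static double trainAndScore(long seed) {
        Random rnd = new Random(seed);
        double fitness = 0.0;
        for (int epoch = 0; epoch < 100; epoch++) {
            fitness += rnd.nextDouble(); // pretend training progress
        }
        return fitness;
    }

    public static void main(String[] args) throws Exception {
        int populationSize = 8;
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Double>> futures = new ArrayList<>();
        for (int i = 0; i < populationSize; i++) {
            final long seed = i;
            // Each submission plays the role of one independent JPPF task.
            futures.add(pool.submit(() -> trainAndScore(seed)));
        }
        // Collect fitness values once every independent task has finished.
        double best = Double.NEGATIVE_INFINITY;
        for (Future<Double> f : futures) best = Math.max(best, f.get());
        pool.shutdown();
        System.out.println("evaluated " + populationSize + " nets, best fitness " + best);
    }
}
```

Because the tasks never communicate during training, this shape scales almost linearly with the number of nodes, which is exactly why the coarse granularity pays off.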
I also tried working at a finer level, like having multiple copies of a net, each with a distinct subset of the training set, perform the weight updates in parallel (that was for feed-forward nets using RPROP, a variation of the backprop algorithm).
The problem in this case was that at the end you always have to sum the weight update values (and other parameters) to compute the actual updates over the whole training set. The cost of this final aggregation definitely more than outweighed what I gained through parallelization.
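The finer-grained scheme and its bottleneck can be sketched as follows. Each worker processes one slice of the training set and produces a partial weight-delta vector; the partials must then be summed before a single global update can be applied. The names and the dummy gradient math are illustrative only, not the actual RPROP computation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

// Data-parallel weight updates: partial deltas are computed in parallel,
// but the final reduction over all partial vectors is serialized -- this
// per-epoch synchronization is the overhead that ate the parallel gains.
public class DataParallelUpdate {

    // Partial weight-delta for one slice of the training set (dummy math
    // standing in for accumulating gradients over the slice's samples).
    static double[] partialDeltas(int[] slice, int numWeights) {
        double[] deltas = new double[numWeights];
        for (int sample : slice)
            for (int w = 0; w < numWeights; w++)
                deltas[w] += 0.001 * sample; // pretend gradient contribution
        return deltas;
    }

    static double[] parallelUpdate(int[] data, int parts, int numWeights) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        int chunk = (data.length + parts - 1) / parts;
        List<Future<double[]>> futures = new ArrayList<>();
        for (int p = 0; p < parts; p++) {
            int from = p * chunk, to = Math.min(data.length, from + chunk);
            int[] slice = Arrays.copyOfRange(data, from, to);
            futures.add(pool.submit(() -> partialDeltas(slice, numWeights)));
        }
        // Reduction step: sum all partial vectors. This is O(parts * numWeights),
        // runs once per epoch, and cannot overlap with the parallel phase.
        double[] total = new double[numWeights];
        for (Future<double[]> f : futures) {
            double[] part = f.get();
            for (int w = 0; w < numWeights; w++) total[w] += part[w];
        }
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        int[] data = IntStream.rangeClosed(1, 100).toArray();
        double[] total = parallelUpdate(data, 4, 3);
        System.out.println("aggregated deltas: " + Arrays.toString(total));
    }
}
```

With a weight vector of size O(n^2), shipping and summing the partial vectors each epoch quickly dominates, which matches the experience described above.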
I've been using nets with a few hundred up to several thousand neurons, and the most nagging issue is that, when you have n neurons in a net, the number of weights grows approximately as O(n^2).
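A quick back-of-the-envelope check of that growth, assuming a single fully-connected layer between n source and n target neurons (layered nets add a weight matrix per layer pair, but the quadratic shape is the same):

```java
// Weight count for a fully-connected n-to-n layer: n * n weights, so
// doubling the neuron count quadruples the storage and per-pass work.
public class WeightCount {

    static long fullyConnectedWeights(long n) {
        return n * n;
    }

    public static void main(String[] args) {
        for (long n : new long[] {100, 1000, 10000}) {
            System.out.println(n + " neurons -> " + fullyConnectedWeights(n) + " weights");
        }
    }
}
```

At a few thousand neurons this already puts the weight vector in the millions of entries, which is why any per-epoch aggregation over the weights hurts so much.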
Sorry for the long speech, I just wanted to point out that I do use JPPF myself for neural nets and GAs.
I use JPPF for Travelling Salesman Problem.
John here from the Bay of Plenty Polytechnic; it has been a few years since last contact. I am back using JPPF again, and we now have a cluster of 6 quad-core machines up and running with it. Your reference to neural nets gave me the following idea for you to chew on, as we are in NZ. JPPF already has proportional, autotuned, and reinforcement-learning load balancing. Could we use a neural net to do the job sensing, by learning the patterns of any particular business data set applied to the JPPF cluster? Then, whenever it sees a learnt pattern (or a similar one) in the data fed to the framework, it could set the bundle size, the task scheduling, and the mapping of tasks to cores, to give the best spread across nodes and a good parallel processing time.
My daughter has recovered from the illness that stopped my participation in recent years, so now I can come back online as it were.
Kind regards from John in NZ
I'm very happy to hear back from you, and about your daughter's recovered health.
The idea of using neural nets to learn business data patterns and compute the distribution of tasks to the nodes has come up several times, but we never gave it a serious try. This is non-trivial work - at least for me :) - and I'm not sure exactly how to start. In particular, some aspects of the problem give me headaches: for example, some of its dimensions are unbounded (the number of nodes in a grid, or of tasks in a job) and dynamic (nodes popping up and down at any time). But I'm sure there are ways to frame the problem that allow overcoming these.
In any case, I have registered a feature request for this:
3530997 - Load balancing with neural nets/genetic algorithms