I really like rendering with Apophysis and had an idea. Since I have two spare (or lightly used) dual-core PCs, would it be possible for them to run Apophysis in some kind of slave mode, so that a main instance could send render data to them over the network and they could render parts of the fractal the main instance is working on?
I'll try to better explain it:
Imagine you have a big flame and it would be rendered in 4 parts: upper left and right and lower left and right.
Now, with these slaves, the main instance could send the render data for the upper right (part 2) to PC 2 and the render data for the lower left (part 3) to PC 3. While those machines render their parts, the main instance renders the upper left (part 1), and if the other PCs take longer to finish, it could start on the lower right (part 4) or split that one up too.
That way you could render bigger flames much faster if you have spare PCs on your network.
Would that be possible to realize and what do you guys think about this idea?
Network rendering is probably suggested at least once a day, and I personally think it is a good idea to work around the (still) long render times. But a) I guess that at the moment the devs simply don't have enough time to provide a network renderer (and if there ever is one, it should rock!), and b) the focus should rather go towards optimizing the render process for fancy new stuff like GPUs, 64-bit systems, multicores and vast amounts of RAM ;)
I'm just wondering (trying to wrap my brain around the concept) … you might be able to do this if you divided the screen into, let's say, four parts, rendered them individually and then assembled them back together in a paint program.
Another (and better IMHO) idea is to get yourself an 8-core CPU instead.
That's what I'm aiming for, I'm beating up my 4-core real bad ehehehe ;)
And again, and again…
You CAN'T divide a flame and render them separately. All you CAN do is to render the flame with various seed entry point and mix the results. The problem araising here is it should be mixed with the whole buffer and not with the final images. But sharing this buffer over the network is near to impossible - its size is an overkill.
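To illustrate why the raw buffers (and not the finished images) have to be mixed, here is a minimal sketch of the idea using a toy Sierpinski IFS instead of a real flame. The function names and the 64×64 resolution are my own invention for the example; the point is only that the log-density tone mapping is nonlinear, so summing histograms and then tone-mapping gives a different (correct) result than averaging two tone-mapped images.

```python
import math
import random

def chaos_game(seed, iters, size=64):
    """Accumulate a hit-count histogram for a toy Sierpinski IFS."""
    rng = random.Random(seed)
    hist = [[0] * size for _ in range(size)]
    x, y = rng.random(), rng.random()
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    for i in range(iters):
        cx, cy = rng.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2
        if i > 20:  # skip the initial transient, as flame renderers do
            hist[min(int(y * size), size - 1)][min(int(x * size), size - 1)] += 1
    return hist

def tone_map(hist):
    """Log-density mapping: brightness ~ log(1 + hits)."""
    return [[math.log1p(h) for h in row] for row in hist]

# Two "slave" machines run the SAME flame with different RNG seeds...
h1 = chaos_game(seed=1, iters=200_000)
h2 = chaos_game(seed=2, iters=200_000)

# Correct merge: sum the raw histograms first, THEN tone-map.
merged = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(h1, h2)]
correct = tone_map(merged)

# Averaging the two tone-mapped images instead would be wrong, because
# log1p(a + b) != (log1p(a) + log1p(b)) / 2 in general.
```

So splitting the work per-seed is easy; the hard part is exactly what the post says: moving the full-size histograms back to one machine for the merge.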
I'm new here, but what size do you mean? I think the rendering units would have to share an RGBA image (where the alpha channel contains the frequency). Could you provide some numbers in order to get a feel for the problem? What buffer size is usually used? What does each point in the buffer contain (size of a point)? How long do, say, 1024 iterations over a typical buffer take? What is the usual number of iterations?
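As a rough back-of-the-envelope answer to the size question: flam3-style renderers accumulate into a buffer at a supersampled resolution, with each cell holding several floating-point values (e.g. R, G, B plus a density count). The exact layout varies by implementation, so the numbers below are an illustration under those assumptions, not Apophysis internals:

```python
def buffer_bytes(width, height, oversample=2, values_per_cell=4, bytes_per_value=8):
    """Accumulation-buffer size, assuming each supersampled cell
    stores R, G, B and a density count as 8-byte floats."""
    return width * oversample * height * oversample * values_per_cell * bytes_per_value

# A modest 1920x1080 render with 2x oversampling:
print(buffer_bytes(1920, 1080) / 2**20)                  # ~253 MiB

# A large print-resolution flame, 8000x8000 at 3x oversampling:
print(buffer_bytes(8000, 8000, oversample=3) / 2**30)    # ~17 GiB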
What about dividing all possible start points by M - number of rendering units. Each rendering unit would be responsible to do N iterations from all its start points. As a result rendering unit would create RGBA images + next start point (position, color). RGBA image may be mixed with previous result and end point would be used to ask responsible for this point unit for RGBA image + next start point. It looks linear, but rendering units render RGBA images for all its points, so at the moment of the request, RGBA image and next start point may be already rendered. All pre-rendered buffers takes space, but they may be compressed if they are going to be sent over net.
I hope I'm clear enough. It is again about dividing rendering of flame :-), but what do you think?
We don't need suggestions, all the devs completely understand how rendering works. If you do an implementation or have code to offer, then people might respond.
All I think is I can't imagine sharing few gigabytes of the buffer only for minimal speed-up (yes, minimal, as this method still can't give you a full parallel proccess). You see: the code is open, if you know how to do it - code it and share it with us :) Our efforts right now are in some other areas than distributed rendering.
Log in to post a comment.