I'm writing a paper about auto-tuning parallel applications and I'm looking for applications to tune. While benchmark suites like PARSEC are fine, I want to discuss an example application in detail, and I thought about Pixie. I have no practical rendering experience apart from rendering some example files with Pixie.
I need a pretty scene to render (so if anyone wants to show off in a scientific paper, this is your chance ;)
We developed an auto-tuning tool that tries various configurations and searches for the fastest one using various heuristic search algorithms. After skimming the manual, the interesting parameters I found are the number of threads and the limit options (gridsize, bucketsize, etc.).
Do you think Pixie is a suitable test case for auto-tuning? Any tips about configurations? Scene proposals?
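For concreteness, the core of what the tool does looks roughly like this (a simplified sketch; the parameter names and value ranges are placeholders I made up for illustration, not actual Pixie options, and the real tool uses smarter heuristics than random sampling):

```python
import random
import time

# Hypothetical search space; the real tool would read this from a config.
SEARCH_SPACE = {
    "threads": [1, 2, 4, 8],
    "bucketsize": [16, 32, 64],
    "gridsize": [256, 512, 1024],
}

def render_time(config):
    """Placeholder: time one render of the test scene with the given
    settings. The real tool shells out to the renderer here."""
    start = time.perf_counter()
    # ... invoke the renderer with `config` ...
    return time.perf_counter() - start

def random_search(trials=20):
    """Simplest heuristic: sample random configurations, keep the fastest."""
    best_config, best_time = None, float("inf")
    for _ in range(trials):
        config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        t = render_time(config)
        if t < best_time:
            best_config, best_time = config, t
    return best_config, best_time
```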
From using these renderers in production for 14 years, I'd say: not really.
Because speed is secondary. The first priority is getting an image, and getting a good/correct-looking image.
Just for example: messing with gridsize can impact quality.
Gridsize is a function of bucketsize and shadingrate, because gridsize should roughly match bucketsizeX * bucketsizeY / shadingrate.
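That rule of thumb can be written down directly (a sketch only; the function name and the example values are mine, not anything from Pixie):

```python
def suggested_gridsize(bucket_x, bucket_y, shading_rate):
    """Rule of thumb: pick a gridsize that roughly matches the number
    of shading points needed to cover one bucket."""
    return int(bucket_x * bucket_y / shading_rate)

# e.g. 32x32 buckets at shadingrate 1.0 suggest grids of ~1024 points
print(suggested_gridsize(32, 32, 1.0))  # -> 1024
```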
So while your shading performance will increase if you choose larger gridsizes (no surprise there), adjacent grids might jump in actual shading rate where they meet.
The max. gridsize I ever used is 1024, and even that can get tricky if you're looking at a surface at a grazing angle.
So one thing an auto-tuning tool would need to do is compare the resulting image with a known-good image (say, rendered with 20x20 pixel samples and a shadingrate of 0.1) and see how much it differs.
And someone needs to define what "much" means (and this may differ depending on what is being rendered).
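A minimal sketch of such a comparison, using a plain RMS pixel error against the reference render (the threshold value here is an arbitrary placeholder; as said, what counts as "too different" depends on the scene):

```python
import numpy as np

def rms_difference(candidate, reference):
    """Root-mean-square pixel difference between two images,
    given as arrays of floats in [0, 1]."""
    diff = np.asarray(candidate, dtype=np.float64) - np.asarray(reference, dtype=np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def acceptable(candidate, reference, threshold=0.01):
    """True if the candidate render stays within an (arbitrary,
    scene-dependent) RMS threshold of the known-good reference."""
    return rms_difference(candidate, reference) <= threshold

# identical images differ by exactly 0.0
ref = np.zeros((4, 4, 3))
print(rms_difference(ref, ref))  # -> 0.0
```

Note that a global metric like RMS can't distinguish harmless noise from a structured artifact, which is the problem described next.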
But then you also need a human to look at the image and decide whether the differences are just noise or actual visually displeasing/noticeable artifacts resulting from the chosen parameters.
And this is exactly where the "auto" breaks down.
It is also the reason why no one has already written a tool like that. There are probably two dozen other parameters that affect image quality vs. speed in various ways in RMan-compliant renderers. There is also the pipeline itself (i.e.: do I render shadows using depth maps, deep shadow maps, ray tracing, or point-based methods; do I use ray tracing or reflection maps; do I use point-based or ray-traced global illumination; etc.).
But whether the chosen values for all these parameters and modes of operation are acceptable depends on the particular circumstances and on the user's opinion (which is often a function of the former, e.g. a looming deadline may get a supervisor to pass images as "final" which they would otherwise have rejected).
Not to discourage you, but I doubt the benefits of applying such algorithms in this context.