From: Renk T. <tho...@jy...> - 2012-04-19 08:20:32
> What do people think about dynamically scaling the eye candy to meet a
> target framerate?

Define 'the eye candy' and you'll spot the problem. In general, the
performance-driving factors are (not an exhaustive list):

1) visibility range, i.e. number of terrain vertices in the scene
2) cloud visibility range and density, i.e. number of cloud vertices in
   the scene
3) random vegetation density and visible range
4) number of instructions per vertex the shader has to perform
5) number of instructions per fragment the shader has to perform
6) number of instructions per frame of (aircraft, weather, ...) Nasal
   running in the background

Trying to scale all of them will likely get you into an unstable loop.
4) and 5) are not continuously scalable - a shader is either on or off -
so do we want a flickering snowline based on performance fluctuations?

Other factors, people feel, should not be scaled at all. For instance,
visibility range is a physical property of the world, and if you start
on a VFR flight, you shouldn't end up doing an IFR approach because your
anti-virus program decided to take just that moment for a scan.

I had a control loop for cloud visibility range based on a framerate
target (I think that's distributed in 2.4(?)). After re-organizing the
weather, I didn't put it back in, because I never used it and I never
heard anyone say he misses it. It's not particularly difficult to put it
back to work, but I never liked its results.

It's not so obvious to me what you would really want to downscale in a
given situation.

* Thorsten
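[The kind of control loop Thorsten describes - nudging cloud visibility range toward a framerate target - can be sketched as below. This is a minimal illustration, not FlightGear's actual implementation (which would be Nasal); all names and constants are hypothetical. A deadband around the target is one way to damp the oscillation he mentions.]

```python
# Hypothetical sketch of a framerate-target control loop for cloud
# visibility range. Names and constants are illustrative only.

def update_cloud_range(current_range_m, fps, target_fps=30.0,
                       deadband=3.0, gain=50.0,
                       min_range_m=5000.0, max_range_m=45000.0):
    """Return an adjusted cloud visibility range in metres.

    A deadband around the target framerate keeps the loop from
    reacting to normal frame-to-frame jitter; clamping keeps the
    scene from degrading below a usable minimum (or growing
    without bound when the GPU is idle).
    """
    error = fps - target_fps
    if abs(error) < deadband:
        return current_range_m  # close enough to target: do nothing
    new_range = current_range_m + gain * error
    return max(min_range_m, min(max_range_m, new_range))

# Running 10 fps below target shrinks the range by gain * 10 = 500 m:
print(update_cloud_range(20000.0, fps=20.0))  # 19500.0
```

Even with damping, this only scales one of the factors listed above; coupling several such loops together is where the instability Thorsten warns about comes from.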