On Thu, Nov 28, 2013 at 9:10 AM, Renk Thorsten <thorsten.i.renk@jyu.fi> wrote:
>> Are these values viewer-position dependent, or are they constant for a
>> given altitude and sun angle? I could imagine ways in which the
>> calculation could be simplified if you are now able to get good results
>> interpolating over polygons; you could also store the interpolated values
>> or some component of them in the renderer's G-Buffer.

> Sunlight values are treated as a vector-valued field dependent on position (internally in a coordinate system based on (up, sunward, perpendicular to both)), thus
>
> light_diffuse = rgb(x,y,z)
> light_ambient = rgb'(x,y,z)
>
> What is known about these fields a priori is that they're smooth and have smooth derivatives, vary slowly in (x,y) - the full transition from night to full-intensity white light stretches over more than 1000 km - and somewhat faster in altitude. There's no dependence on viewer position or view direction.

If I understand correctly, these values are more like a vertex attribute: they change only when the sunlight vector changes. The calculation doesn't even need to be done in a vertex shader: some portion of the terrain vertices could be updated on the CPU at each frame and uploaded to a GPU buffer. 
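
Just to make the plumbing concrete, here is a minimal GLSL 1.20-style sketch of that approach; the attribute, varying and sampler names are made up for illustration and don't correspond to anything in the existing effects:

    // Vertex shader -- per-vertex sunlight values computed on the CPU and
    // uploaded as attributes; the rasterizer interpolates them for free.
    #version 120
    attribute vec3 lightDiffuseAttr;   // rgb(x,y,z) evaluated on the CPU
    attribute vec3 lightAmbientAttr;   // rgb'(x,y,z) evaluated on the CPU
    varying vec3 v_diffuse;
    varying vec3 v_ambient;

    void main()
    {
        v_diffuse = lightDiffuseAttr;
        v_ambient = lightAmbientAttr;
        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    // Fragment shader -- just uses the interpolated values.
    #version 120
    uniform sampler2D baseTexture;
    varying vec3 v_diffuse;
    varying vec3 v_ambient;

    void main()
    {
        vec4 texel = texture2D(baseTexture, gl_TexCoord[0].st);
        gl_FragColor = vec4(texel.rgb * (v_diffuse + v_ambient), texel.a);
    }

Whether the per-vertex evaluation happens on the CPU as above or in a vertex shader, the fragment side stays this cheap either way.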

> The slow variation means that for vertices which are a few hundred meters to a kilometer apart, a linear interpolation approximates the function with very good accuracy.
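
For what it's worth, a rough bound backs that up. The error of the linear interpolant p_1 over an edge of length h is bounded by (h^2/8) max|f''|; if the field goes from 0 to 1 over a transition length of roughly 1000 km, then taking max|f''| of order 1/L^2 as the curvature scale (an order-of-magnitude assumption) gives

    |f(x) - p_1(x)| \le \frac{h^2}{8}\max|f''| \sim \frac{(1\,\mathrm{km})^2}{8\,(1000\,\mathrm{km})^2} \approx 10^{-7}

give or take an order of magnitude - far below anything visible, so kilometer-scale interpolation has plenty of margin.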

> My understanding is that where to evaluate it is largely a question of the average number of vertices per pixel. In the model shaders, the function is evaluated in the fragment shader because a faraway model with 10,000 vertices collapses into just a few pixels. For scenery without the OSM roads, evaluating it over the mesh vertices was still cheaper than per pixel, implying that there's probably less than one vertex per pixel under usual conditions. Ultimately, if we had meter-sized terrain resolution, I assume the situation would reverse and evaluating per pixel would be unconditionally cheaper.

Ultimately you want to avoid evaluating many vertices per pixel as it is a waste of resources and can lead to aliasing. 

> Well, hence my idea of simply writing a dedicated effect for roads - if they drive the vertex count up, then shade them per pixel.
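
Concretely, such a dedicated effect would just move the evaluation to the fragment stage, along these lines - the two field functions are placeholders standing in for the actual fit formulas (they'd be defined in a separately linked shader object), and all names are again made up:

    // Vertex shader -- only passes along the position the field function expects.
    #version 120
    varying vec3 v_position;

    void main()
    {
        v_position = gl_Vertex.xyz;   // or world position, whatever the formula wants
        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    // Fragment shader -- evaluates the field once per covered pixel instead of
    // once per road vertex.
    #version 120
    uniform sampler2D baseTexture;
    varying vec3 v_position;

    vec3 lightFieldDiffuse(vec3 pos);   // placeholder, defined in a linked shader object
    vec3 lightFieldAmbient(vec3 pos);   // placeholder, defined in a linked shader object

    void main()
    {
        vec3 diffuse = lightFieldDiffuse(v_position);
        vec3 ambient = lightFieldAmbient(v_position);
        vec4 texel = texture2D(baseTexture, gl_TexCoord[0].st);
        gl_FragColor = vec4(texel.rgb * (diffuse + ambient), texel.a);
    }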

> I guess deferred rendering can in principle do these computations as part of the geometry pass and buffer them - I was given to understand (by Emilian) that for Rembrandt as it stands they would have to go into the fragment shader, though.

I could see either storing them as a vertex attribute and writing the interpolated value into additional storage in the G-Buffer, or doing a texture lookup in the fragment shader based on the calculated position of the pixel.
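
For the texture variant, here is a sketch of what the lighting-pass lookup could look like - all names are made up, a sampler3D is assumed to be acceptable, and the position reconstruction is whatever Rembrandt already does from the depth buffer. Since the field varies over hundreds of kilometres, a very coarse texture would be plenty:

    // Deferred lighting pass -- sunlight field baked into a small 3D texture,
    // sampled at the position reconstructed from the depth buffer.
    #version 120
    uniform sampler3D sunlightDiffuseTex;   // low-res: the field varies over ~1000 km
    uniform mat4 worldToFieldTexMatrix;     // maps world position into [0,1]^3
    varying vec2 v_screenCoord;

    // Provided elsewhere by the lighting-pass shaders (placeholder prototype).
    vec3 reconstructWorldPos(vec2 screenCoord);

    void main()
    {
        vec3 worldPos = reconstructWorldPos(v_screenCoord);
        vec3 texCoord = (worldToFieldTexMatrix * vec4(worldPos, 1.0)).xyz;
        vec3 diffuse  = texture3D(sunlightDiffuseTex, texCoord).rgb;
        // ... combine with albedo/normal from the G-Buffer as the pass already does
        gl_FragColor = vec4(diffuse, 1.0);   // placeholder output for the sketch
    }

The vertex-attribute route would instead write the interpolated value into a spare G-Buffer channel during the geometry pass; the texture route leaves the G-Buffer layout alone at the cost of one extra sampler in the lighting pass.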

Tim 
