> My biggest problem is finding a good way to extract intensity information
> from the scene. The simplest method is to read the pixels back from the
> frame, calculate the statistics I've mentioned above and scale the
> lightmaps accordingly (which means everything gets a light map ... even the
> sky). Of course, reading back the frame buffer into system memory to
> perform the calculation is slow at best.
Another idea I had for game-like applications was to precompute the
adaptation level (i.e. the average viewed luminance) by sampling the
scene. It can be stored adaptively (basically by determining where a
large step will occur). I haven't had time to try it, so I relied on
reading back the frame buffer (but using the log of the colors in a
first pass to be able to treat the whole dynamic range).
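
To make the frame-buffer approach concrete, here is a minimal sketch
(my own illustration, not code from any of the papers mentioned): it
reads the frame back with glReadPixels and computes the log-average
luminance as the adaptation level. It takes the log on the CPU rather
than in a first rendering pass, and the Rec. 709 luminance weights and
epsilon are just assumptions for the sketch.

  // Sketch: estimate the adaptation level as the log-average luminance
  // of the current frame, read back from the OpenGL frame buffer.
  #include <GL/gl.h>
  #include <cmath>
  #include <vector>

  float estimateAdaptationLuminance(int width, int height)
  {
      std::vector<unsigned char> pixels(width * height * 3);
      glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE,
                   pixels.data());

      double logSum = 0.0;
      const double eps = 1e-4;  // avoids log(0) on black pixels
      for (int i = 0; i < width * height; ++i) {
          double r = pixels[3 * i + 0] / 255.0;
          double g = pixels[3 * i + 1] / 255.0;
          double b = pixels[3 * i + 2] / 255.0;
          // Rec. 709 luminance weights (an assumption of this sketch)
          double lum = 0.2126 * r + 0.7152 * g + 0.0722 * b;
          logSum += std::log(lum + eps);
      }
      // Geometric mean of luminance = exp(average of logs): the usual
      // "log-average" estimate of the level the eye is adapted to.
      return static_cast<float>(std::exp(logSum / (width * height)));
  }
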
And yes, the SIGGRAPH paper was far from real-time, but my paper at
the workshop and Scheel's paper at EG are at least interactive (though
with additional cost or restrictions).
> The human eye can see 1.5 log units of light at one moment (log (cd/m^2)),
> but can adapt (mainly neurologically) to over 7 log units of light. Of this
> only one log unit is achieved through the widening of the iris, the rest is
> neurological. To simulate this wide range, you have to simulate the eye in a
> lot more detail.
Well, unfortunately it is a bit more complex. A single neuron of the
human eye has a dynamic range of 1 to 40, but since they all have
different adaptation states (different gain, different low-pass
filter, etc.), the human eye is able to see a static scene with a high
dynamic range.
The fact that we really see well only in the tiny visual field of the
fovea, and that our eyes are always in motion, does not simplify
anything!
This is the big limitation of both the SIGGRAPH paper's approach and
mine: they assume that the adaptation state is global across the retina.
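
As a rough illustration of that global-adaptation assumption (again my
own sketch, with illustrative numbers, not taken from either paper): a
single adaptation luminance La is used for the whole image, and scene
luminances are mapped into a window of about 1.5 log units around it.

  // Sketch: global tone mapping under a single adaptation state.
  #include <algorithm>
  #include <cmath>

  float tonemapGlobal(float sceneLum, float La)
  {
      const float halfRange = 0.75f;  // half of ~1.5 log10 units
      float d = std::log10(sceneLum) - std::log10(La);
      // Clamp to the visible window around the adaptation level and
      // remap linearly to [0,1] display intensity.
      d = std::clamp(d, -halfRange, halfRange);
      return (d + halfRange) / (2.0f * halfRange);
  }
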
--
Fredo Durand, MIT-LCS Graphics Group
NE43-255, Cambridge, MA 02139
phone : (617) 253 7223 fax : (617) 253 4640
http://graphics.lcs.mit.edu/~fredo/