Thread: RE: [Algorithms] Depth of Field and Smoke Blurs
From: Willem H. de B. <Wi...@mu...> - 2002-02-27 10:56:47
"Obviously you need to convert the zbuffer into 8bit whilst keeping the
range intact."

On PS2 we convert the zbuffer into one big 32-bit RGBA texture, and draw
the 8-bit green colour component into the alpha channel of our frame-
buffer [bilinear filtering off]. The green channel proved to be the most
useful one for depth-of-field stuff. We then scale down our framebuffer
to a quarter of its original area [half-size offscreen buffer blit with
bilinear filtering on] and draw it on top of the original framebuffer
with destination alpha testing on.

Willem
MuckyFoot Goat Boy

-----Original Message-----
From: Paul Firth [mailto:pf...@at...]
Sent: 27 February 2002 10:33
To: GDA...@li...
Subject: Re: [Algorithms] Depth of Field and Smoke Blurs

MH...@bl... wrote:

> I am very interested in supporting "Depth of Field" blur on video cards
> without shaders. I am trying to understand what is the basic algorithm
> behind creating effects like "Depth of Field" blur and smoke blurs, as
> far as having an incredibly quick render time. I imagine when it comes
> to best performance, there are only a few ways to do it; I am not sure
> if I should use a special filter, a dynamic texture, or what.
>
> How do you do it?

On consoles where you can manipulate the z buffer in funky ways:

1) Create a blurry version of the frame buffer
2) Get the z-buffer into the alpha channel of the frame buffer
3) Using dest alpha, draw the blurry version over the crisp version

If you want focus effects you could* use a CLUT to manipulate the alpha
channel. Obviously you need to convert the zbuffer into 8bit whilst
keeping the range intact.

* On certain consoles.

Cheers, Paul.
From: Willem H. de B. <Wi...@mu...> - 2002-02-27 11:22:45
> I presume you have a 32bit zbuffer then?

No, it's 16-bit. There's actually quite a lengthy discussion about the
whole DOF trick on the ps2 newsgroups, which I know you have access
to :)

> Don't you find that green wraps around quite quickly? Oh for a high
> resolution alpha channel ;-)

Just a lot of mucking about, really. Green just proved to be the most
useful in our game. There is no clear-cut explanation for it. :)

Willem
MuckyFoot Goat Boy

-----Original Message-----
From: Paul Firth [mailto:pf...@at...]
Sent: 27 February 2002 11:11
To: GDA...@li...
Subject: Re: [Algorithms] Depth of Field and Smoke Blurs

"Willem H. de Boer" wrote:

> "Obviously you need to convert the zbuffer into 8bit whilst keeping
> the range intact."
>
> On PS2 we convert the zbuffer into one big 32-bit RGBA texture, and
> draw the 8-bit green colour component into the alpha channel of our
> frame-buffer [bilinear filtering off].

I presume you have a 32bit zbuffer then?

> The green channel proved to be the most useful one for depth-of-field
> stuff. We then scale down our framebuffer to a quarter

Don't you find that green wraps around quite quickly? Oh for a high
resolution alpha channel ;-)

Cheers, Paul.
From: Martin G. <bz...@wi...> - 2002-02-27 13:31:36
> Just a lot of mucking about, really. Green just proved to be
> the most useful in our game. There is no clear-cut explanation
> for it. :)

Maybe because the human eye is more sensitive to the green channel than
to the red/blue.

Cheers,
Martin
From: Tom F. <to...@mu...> - 2002-02-27 13:45:43
This is the green channel of the Z-buffer-read-as-a-texture. So nothing
to do with perception at all. The red/green/blue selection just
determines which particular set of bits we use out of the 16 that the Z
buffer has to offer. At no time is the colour actually shown on the
screen.

Which channel you use will depend on exactly what your near & far clip
planes are. Ours happened to work out so that green was the one most
useful for depth-of-focus.

Tom Forsyth - purely hypothetical Muckyfoot bloke.

This email is the product of your deranged imagination, and does not in
any way imply existence of the author.

> -----Original Message-----
> From: Martin Gladnishki [mailto:bz...@wi...]
> Sent: 27 February 2002 14:30
> To: GDA...@li...
> Subject: Re: [Algorithms] Depth of Field and Smoke Blurs
>
> > Just a lot of mucking about, really. Green just proved to be
> > the most useful in our game. There is no clear-cut explanation
> > for it. :)
>
> Maybe because the human eye is more sensitive to the green channel
> than to the red/blue.
>
> Cheers,
> Martin
From: Jon W. <hp...@mi...> - 2002-02-27 17:08:43
I have no idea what the function is that maps a 16-bit depth value to a
32-bit color value, but suppose it just adds zeros to the high halfword.
Then the high byte of the depth buffer would end up in the green byte of
the 32-bit color value, and thus it would be quite natural that the
green one was the one you wanted, as it will then basically be the 8-bit
version of the full depth buffer.

Of course, the mapping may work totally differently, in which case I'm
interested in hearing about that mapping (but probably in another
thread :-)

Cheers,

/ h+

> This is the green channel of the Z-buffer-read-as-a-texture. So
> nothing to do with perception at all. The red/green/blue selection
> just determines which particular set of bits we use out of the 16 that
> the Z buffer has to offer. At no time is the colour actually shown on
> the screen.
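For concreteness, Jon's hypothesis in a few lines of C++ (the actual PS2
mapping may, as he says, be entirely different):

    // A 16-bit depth value zero-extended into the low halfword of a
    // 32-bit ARGB8888 word. With A in bits 31-24, R in 23-16, G in 15-8
    // and B in 7-0, the high byte of the depth value lands in the green
    // byte - so reading the green channel gives the top 8 bits of the
    // depth buffer.
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        std::uint16_t depth = 0xAB12;        // arbitrary 16-bit depth
        std::uint32_t argb  = depth;         // high halfword stays zero

        unsigned green = (argb >> 8) & 0xFF; // 0xAB: depth's high byte
        unsigned blue  =  argb       & 0xFF; // 0x12: depth's low byte

        std::printf("green=0x%02X blue=0x%02X\n", green, blue);
        return 0;
    }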
From: <MH...@bl...> - 2002-02-27 16:29:48
So is using the ZBuffer as a Texture to create Depth of Field the same
technique used for Smoke Blurs that are regularly seen in Driving Games?
From: Mark L. <mar...@ch...> - 2002-02-28 06:02:29
> So is using the ZBuffer as a Texture to create Depth of Field the same
> technique used for Smoke Blurs that are regularly seen in Driving
> Games?

If you are talking about DOF on PC hardware then it might not be so
simple to retrieve the zbuffer. Using the hi-res/lo-res DOF technique,
you could always resort to rendering your depth interpolator directly
into the alpha channel of your hi-res image using a 1D projective
texture map. This can be done without using any custom shaders but will
use up an extra texture stage.
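A plausible fixed-function D3D8 setup for this (stage number, ramp
texture and far-plane constant are illustrative assumptions, not from
Mark's post). Stage 1 samples a 1D texture whose alpha ramps from 0 to 1
across u; coordinates are generated from camera-space position and the
texture transform picks out view-space z, so u is proportional to depth:

    const float kZFar = 1000.0f;      // assumed far plane

    D3DMATRIX zToU = {};              // u = z / kZFar
    zToU._31 = 1.0f / kZFar;          // u gets its value from camera z
    zToU._44 = 1.0f;

    dev->SetTexture(1, pDepthRampTex);               // 1D alpha ramp
    dev->SetTransform(D3DTS_TEXTURE1, &zToU);
    dev->SetTextureStageState(1, D3DTSS_TEXCOORDINDEX,
                              D3DTSS_TCI_CAMERASPACEPOSITION);
    dev->SetTextureStageState(1, D3DTSS_TEXTURETRANSFORMFLAGS,
                              D3DTTFF_COUNT1);
    // Route the ramp's alpha into the output; leave colour untouched.
    dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_SELECTARG2);
    dev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
    dev->SetTextureStageState(1, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    dev->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);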
From: Adrian P. <adr...@mi...> - 2002-02-28 00:43:47
The ZBuffer trick is probably useful in finding out where to place the
heat wave effect or how much to do it, I suppose, but the actual effect
itself doesn't use the zbuffer.

On the PS2 (I'm extrapolating based on my limited knowledge of the
hardware here) they most likely source the framebuffer as a texture and
draw it back on top of itself using a squarish, wiggling triangle mesh.
On the Xbox/PC you do it with a quad with a dependent dot-product
texture program on it, i.e. you source the framebuffer and have an
animated 'wiggle' texture that you use to sample wiggled points from the
framebuffer and render it back on.

Both schemes work really well, and the PS2 way is probably more
efficient in this particular application since wiggling vertices is
cheaper than animating a texture. If you're doing anything constant
(like the /other/ New Lens Flare, the water-on-the-camera effect),
dot-product programs will probably give you fewer artifacts than the
triangle mesh way.

-Cuban @bungie.com

> -----Original Message-----
> From: MH...@bl... [mailto:MH...@bl...]
> Sent: Wednesday, February 27, 2002 8:37 AM
> To: GDA...@li...
> Subject: RE: [Algorithms] Depth of Field and Smoke Blurs
>
> So is using the ZBuffer as a Texture to create Depth of Field the same
> technique used for Smoke Blurs that are regularly seen in Driving
> Games?
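The PS2-style wiggling mesh is easy to sketch in plain C++ (grid size,
amplitudes and vertex layout are illustrative guesses, not from Adrian's
post). A screen-aligned grid is drawn over the haze region, each
vertex's texture coordinate nudged by a small time-animated sine so the
copied image shimmers:

    #include <cmath>
    #include <vector>

    struct HazeVert { float x, y, u, v; };   // screen pos + tex coord

    std::vector<HazeVert> BuildHazeGrid(int cols, int rows, float t)
    {
        std::vector<HazeVert> verts;
        for (int j = 0; j <= rows; ++j)
            for (int i = 0; i <= cols; ++i)
            {
                float u = float(i) / cols;   // nominal screen/tex coord
                float v = float(j) / rows;
                // Phase-offset wiggle; ~1-texel amplitude keeps it subtle.
                float du = 0.002f * std::sin(12.0f * v + 3.0f * t);
                float dv = 0.002f * std::sin(10.0f * u + 2.5f * t);
                verts.push_back({ u, v, u + du, v + dv }); // UV wiggled
            }
        return verts;
    }

    // Draw these as a triangle grid textured with the frame-buffer
    // copy, on top of the scene; no blending needed (opaque copy).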
From: Tom F. <to...@mu...> - 2002-02-28 14:46:23
Yes, that works well.

The way I did my depth-of-field effect was to render the scene a second
time into a small (256x128) A8R8G8B8 texture, with the alpha channel
based on the Z value using either a vertex shader, or a 1D planar-mapped
texture if using the FFP. Alpha = 0 at the focus point, fading to 1
closer and further away. Then blend that texture over the main view,
using the texture's alpha with a standard SRCALPHA:INVSRCALPHA.

Result - the in-focus depth is razor-sharp, and everything else is
foggy. You have to be careful with alpha-blended effects, but otherwise
it works well. And yes, you can use lower-LoD shapes for the render to
the texture.

Tom Forsyth - purely hypothetical Muckyfoot bloke.

This email is the product of your deranged imagination, and does not in
any way imply existence of the author.

> -----Original Message-----
> From: Garett Bass [mailto:gt...@st...]
> Sent: 28 February 2002 14:31
> To: GDA...@li...
> Subject: RE: [Algorithms] Depth of Field and Smoke Blurs
>
> I'm curious, on the PC, could you create the same or similar
> destination alpha effect by rendering the scene into destination alpha
> in black (or white) with white (or black) depth-based fog? I'm thinking
> the color of each pixel in the range of black to white would be
> equivalent to z-based alpha. I'm not familiar with the workings of
> destination alpha, so I don't really have any idea whether this would
> work or whether it could be done in some useful parallel to normal
> rendering to make it even feasible.
>
> Perhaps you could reduce the LOD ranges dramatically when using depth
> of field, or even totally alter the LOD regions to minimize the
> tessellation in unfocussed areas to make up for the extra draw time the
> effect requires?
>
> Garett Bass
>
> -----Original Message-----
>
> On PS2 we convert the zbuffer into one big 32-bit RGBA texture, and
> draw the 8-bit green colour component into the alpha channel of our
> frame-buffer [bilinear filtering off]. The green channel proved to be
> the most useful one for depth-of-field stuff. We then scale down our
> framebuffer to a quarter of its original area [half-size offscreen
> buffer blit with bilinear filtering on] and draw it on top of the
> original framebuffer with destination alpha testing on.
>
> Willem
> MuckyFoot Goat Boy
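The alpha ramp Tom describes, written out as plain C++ for clarity (in a
real implementation it would live in the vertex shader or come from the
1D planar-mapped texture; names and the range parameter are
illustrative):

    #include <algorithm>
    #include <cmath>

    // 0 = razor sharp (at the focus distance), 1 = fully blurred.
    float DofAlpha(float viewZ, float focusZ, float blurRange)
    {
        float a = std::fabs(viewZ - focusZ) / blurRange;
        return std::min(a, 1.0f);
    }

    // The small A8R8G8B8 render of the scene is then blended over the
    // main view with the standard over operator:
    //   dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
    //   dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);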
From: Willem H. de B. <Wi...@mu...> - 2002-02-28 14:47:24
I think you could do that. Render the scene a second time into an
offscreen buffer in black (vertex colours set to 0 perhaps, although
that would mean touching your vertex buffers. Maybe use a one-by-one
black texture instead), with white depth-based fog. But, after you've
done that, how would you go about blitting that scene into the frame
buffer's destination alpha channel? I haven't done any PC stuff lately,
so I don't know whether or not it's possible to mask/shift a colour
channel into the destination alpha channel before blitting it.. Prolly
not. The PS2 version uses some very clever hardware hacks to get the job
done.

Maybe you could do some clever pixel shader tricks. So instead of
rendering the scene into an offscreen buffer, render it on top of the
existing one and have a pixel shader shove one of the colour channels of
the black+white render into the destination alpha channel of the
framebuffer... This is just speculation on my behalf. I have little to
no experience when it comes to pixel shader coding.

Cheers,

Willem
MuckyFoot Goat Boy

-----Original Message-----
From: Garett Bass [mailto:gt...@st...]
Sent: 28 February 2002 14:31
To: GDA...@li...
Subject: RE: [Algorithms] Depth of Field and Smoke Blurs

I'm curious, on the PC, could you create the same or similar destination
alpha effect by rendering the scene into destination alpha in black (or
white) with white (or black) depth-based fog? I'm thinking the color of
each pixel in the range of black to white would be equivalent to z-based
alpha. I'm not familiar with the workings of destination alpha, so I
don't really have any idea whether this would work or whether it could
be done in some useful parallel to normal rendering to make it even
feasible.

Perhaps you could reduce the LOD ranges dramatically when using depth of
field, or even totally alter the LOD regions to minimize the
tessellation in unfocussed areas to make up for the extra draw time the
effect requires?

Garett Bass
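For what it's worth, Willem's pixel-shader speculation could look
something like this on DX8-class hardware - an untested sketch (assumed,
not from the thread), with the ps.1.1 shader shown as comments and a
colour write mask so only frame-buffer alpha is touched:

    // Draw a full-screen quad textured with the black+white fogged
    // render; the shader replicates one colour channel into alpha.
    //
    //   ps.1.1
    //   tex  t0            ; the fogged black+white render
    //   mov  r0.rgb, t0    ; colour pass-through (masked off below)
    //   +mov r0.a,   t0.b  ; blue-replicate into the alpha pipe
    //
    dev->SetRenderState(D3DRS_COLORWRITEENABLE,
                        D3DCOLORWRITEENABLE_ALPHA);
    DrawFullScreenQuad();   // hypothetical helper drawing the quad
    dev->SetRenderState(D3DRS_COLORWRITEENABLE, 0x0000000F); // restore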
From: Johan H. <jh...@mw...> - 2002-03-01 07:56:03
Sorry to bring up an old thread...

What exactly was the conclusion of the previous thread about gamma? Do
you set your monitor gamma to 1 and pre-bake gamma into your textures,
or do you try to persuade users to set their monitor gamma up? (Is 2.2 a
generally accepted value for this?)

Don't you get blending artifacts if you pre-bake gamma into textures?

Johan
From: Charles B. <cb...@cb...> - 2002-03-01 17:31:13
Yeah, do NOT gamma correct textures. If you want to get it right, you
need to know what gamma the textures were made at (including the
"effective gamma" from the lighting environment where your artists
work), and then you need to know the gamma of the monitor + lighting
environment of the player. Then, you can adjust the gamma to just
compensate for this difference.

For example, most TV shows are shot at a gamma of 1.3; then the TV
applies a gamma of 1.7; this is based on an expected average environment
gamma of 2.2 in the living room.

Anyway, for utility textures like height maps, normal maps, light maps
you should have a gamma of 1.0 in the texture; if they were hand painted
by an artist with some gamma != 1.0, then you should correct it.

And, if you do get it all correct, no one will care, and you're better
off just providing a "brightness" setting in your display options...

At 09:55 AM 3/1/2002 +0200, Johan Hammes wrote:
> Sorry to bring up an old thread...
> What exactly was the conclusion of the previous thread about gamma? Do
> you set your monitor gamma to 1 and pre-bake gamma into your textures,
> or do you try to persuade users to set their monitor gamma up? (Is 2.2
> a generally accepted value for this?)
>
> Don't you get blending artifacts if you pre-bake gamma into textures?
>
> Johan

----------------------------------------------------
Charles Bloom    cb...@cb...    www.cbloom.com
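The compensation Charles describes boils down to an exponent ratio; a
sketch (my reading of his post, not code from it - names are
illustrative):

    #include <cmath>

    // v is a normalised [0,1] channel value. Content authored so that
    // v^gammaAuthored looks right must be re-encoded for a display that
    // applies gammaViewer, so the end-to-end response is unchanged:
    //   encoded^gammaViewer == v^gammaAuthored
    //   => encoded = v^(gammaAuthored / gammaViewer)
    float GammaCompensate(float v, float gammaAuthored, float gammaViewer)
    {
        return std::pow(v, gammaAuthored / gammaViewer);
    }

    // E.g. content made on a 2.2 pipeline shown on a 2.5 display:
    //   out = GammaCompensate(in, 2.2f, 2.5f);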
From: Johan H. <jh...@mw...> - 2002-03-06 11:56:25
> For example, most TV shows are shot at a gamma of 1.3; then the TV
> applies a gamma of 1.7; this is based on an expected average
> environment gamma of 2.2 in the living room.

So what don't I understand about the above? I thought gamma was a way to
correct the non-linearities of CRT displays. What does it have to do
with the living room?

Also, if TVs can adjust gamma, are there monitors that can adjust gamma
as well?

Johan
From: Stephen J B. <sj...@li...> - 2002-03-06 14:51:58
On Wed, 6 Mar 2002, Johan Hammes wrote:

> > For example, most TV shows are shot at a gamma of 1.3; then the TV
> > applies a gamma of 1.7; this is based on an expected average
> > environment gamma of 2.2 in the living room.
>
> So what don't I understand about the above? I thought gamma was a way
> to correct the non-linearities of CRT displays. What does it have to
> do with the living room?

Your perception of the screen depends on the brightness of the room
lighting.

> Also, if TVs can adjust gamma, are there monitors that can adjust
> gamma as well?

Not that I'm aware of. TVs have a fixed, hardwired gamma kludge - it's
not 'adjustable'. The net result is just that TVs and monitors require
different amounts of gamma correction - but then since no two monitors
(or TVs) are exactly alike, and the gamma of a CRT changes as it ages,
you pretty much have to hope that people periodically calibrate the
gamma of their screens (or that CRTs become obsolete and we can sweat
about the non-linearities of LCDs, plasma panels, IBM's organic LCD
thingies...)

Note also that people talk about "the gamma of <some device>" - when
they should really be considering the gamma of red, green and blue
independently.

In an ideal world, scanners, cameras, displays and print devices would
have no brightness or contrast controls - just a set of R,G,B 'gamma'
controls that would be factory set to allow a linear input to generate a
perfect picture in 'typical' lighting conditions - and which would only
have to be adjusted once a year to account for the aging of the
phosphors. Then we could make it a criminal offense to put gamma
correction into image processing software! :-)

However, this is *far* from an ideal world. <sigh>

----
Steve Baker                      (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sj...@li...  http://www.link.com
Home: sjb...@ai... http://www.sjbaker.org
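On DX8-class PC hardware the closest thing to Steve's per-channel gamma
controls is the device gamma ramp; a sketch using the D3D8 SetGammaRamp
API (real API; the per-channel exponents passed in are illustrative):

    // Requires <d3d8.h>. Builds a 256-entry ramp per channel mapping
    // linear input to output with exponent 1/gamma, then hands it to
    // the device.
    #include <cmath>

    void SetPerChannelGamma(IDirect3DDevice8* dev,
                            float gr, float gg, float gb)
    {
        D3DGAMMARAMP ramp;
        for (int i = 0; i < 256; ++i)
        {
            float v = i / 255.0f;
            ramp.red[i]   = (WORD)(std::pow(v, 1.0f / gr) * 65535.0f + 0.5f);
            ramp.green[i] = (WORD)(std::pow(v, 1.0f / gg) * 65535.0f + 0.5f);
            ramp.blue[i]  = (WORD)(std::pow(v, 1.0f / gb) * 65535.0f + 0.5f);
        }
        dev->SetGammaRamp(D3DSGR_NO_CALIBRATION, &ramp);
    }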
From: Paul F. <pf...@at...> - 2002-02-27 11:07:49
"Willem H. de Boer" wrote:

> "Obviously you need to convert the zbuffer into 8bit whilst keeping
> the range intact."
>
> On PS2 we convert the zbuffer into one big 32-bit RGBA texture, and
> draw the 8-bit green colour component into the alpha channel of our
> frame-buffer [bilinear filtering off].

I presume you have a 32bit zbuffer then?

> The green channel proved to be the most useful one for depth-of-field
> stuff. We then scale down our framebuffer to a quarter

Don't you find that green wraps around quite quickly? Oh for a high
resolution alpha channel ;-)

Cheers, Paul.
From: Garett B. <gt...@st...> - 2002-02-28 14:31:29
I'm curious, on the PC, could you create the same or similar destination
alpha effect by rendering the scene into destination alpha in black (or
white) with white (or black) depth-based fog? I'm thinking the color of
each pixel in the range of black to white would be equivalent to z-based
alpha. I'm not familiar with the workings of destination alpha, so I
don't really have any idea whether this would work or whether it could
be done in some useful parallel to normal rendering to make it even
feasible.

Perhaps you could reduce the LOD ranges dramatically when using depth of
field, or even totally alter the LOD regions to minimize the
tessellation in unfocussed areas to make up for the extra draw time the
effect requires?

Garett Bass

-----Original Message-----

On PS2 we convert the zbuffer into one big 32-bit RGBA texture, and draw
the 8-bit green colour component into the alpha channel of our frame-
buffer [bilinear filtering off]. The green channel proved to be the most
useful one for depth-of-field stuff. We then scale down our framebuffer
to a quarter of its original area [half-size offscreen buffer blit with
bilinear filtering on] and draw it on top of the original framebuffer
with destination alpha testing on.

Willem
MuckyFoot Goat Boy