Thread: RE: [Algorithms] Perspective shadow maps
From: Tom F. <to...@mu...> - 2002-09-25 11:26:18
|
Yes, Thatcher Ulrich has. Check this thread (cunningly titled "Perspective Shadow Maps"): http://sourceforge.net/mailarchive/message.php?msg_id=1514744 There was also someone else on the list who mentioned they had used them. And we'll be using them shortly as well - too many other things to implement first! Tom Forsyth - purely hypothetical Muckyfoot bloke. This email is the product of your deranged imagination, and does not in any way imply existence of the author. > -----Original Message----- > From: Petr Smilek [mailto:gu...@pr...] > Sent: 24 September 2002 20:50 > To: gda...@li... > Subject: [Algorithms] Perspective shadow maps > > > Hi, > has anybody successfully implemented and used > perspective shadow maps (except the authors) in practice? > Is there a demo showing them in action available? > Thank you in advance. > > Petr S. |
From: Tom F. <to...@mu...> - 2002-09-25 16:32:54
|
> -----Original Message----- > From: Steve Legg [mailto:st...@mo...] > > Hello, > > I have also implemented perspective shadow maps (so far only for > directional lights). I'd be interested in hearing how people have > gotten around the two biggest problems, namely: 1) when the light is > parallel to the viewing direction the point light in post-perspective > space ends up very close to the unit cube (requires a very wide > viewing angle which reduces the shadow resolution drastically). And > 2) how to deal with objects behind the camera casting shadows. > > The original paper suggests moving the viewer backwards as a solution > to these problems, however this can end up reducing the shadow > resolution > too much. > > I have a hack which helps a lot with the first problem, but the second > is difficult (especially with objects a long way behind the camera). > Its always possible to just do another pass for these objects but this > reduces the main advantage (in my opinion) of using this method - > Which is that you can draw shadows for everything in a single pass > (at the expense of needing a larger shadow buffer). > > Thanks, > Steve. Since these objects are only casting shadows, can you not just draw them, but clamp their depth values to 0.0 (or 1.0 - depends which way round you do your stuff). Their actual shadows don't need any depth precision, all you need to know about them is that they are in front of everything else. Tom Forsyth - purely hypothetical Muckyfoot bloke. This email is the product of your deranged imagination, and does not in any way imply existence of the author. |
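[Editor's note] Tom's clamp-the-depth suggestion is easy to sketch numerically. A minimal sketch, assuming a depth convention where 0.0 is nearest the light; the function name and the clamp-to-[0,1] fallback are illustrative, not from any implementation in this thread:

```python
# Behind-camera casters only need to read as "in front of everything else",
# so their depth can be forced to the near value (0.0 here, by assumption)
# instead of being computed through the problematic projection.
NEAR_DEPTH = 0.0

def shadow_map_depth(z_ndc, caster_behind_camera):
    if caster_behind_camera:
        return NEAR_DEPTH                 # shadows everything behind it
    return min(max(z_ndc, 0.0), 1.0)      # normal caster: keep real depth

print(shadow_map_depth(0.37, False))  # → 0.37
print(shadow_map_depth(0.37, True))   # → 0.0
```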
From: Thatcher U. <tu...@tu...> - 2002-09-25 17:10:23
|
On Wed, 25 Sep 2002, Tom Forsyth wrote: > > -----Original Message----- > > From: Steve Legg [mailto:st...@mo...] > > > > Hello, > > > > I have also implemented perspective shadow maps (so far only for > > directional lights). I'd be interested in hearing how people have > > gotten around the two biggest problems, namely: 1) when the light is > > parallel to the viewing direction the point light in post-perspective > > space ends up very close to the unit cube (requires a very wide > > viewing angle which reduces the shadow resolution drastically). And > > 2) how to deal with objects behind the camera casting shadows. > > > > The original paper suggests moving the viewer backwards as a solution > > to these problems, however this can end up reducing the shadow > > resolution > > too much. > > > > I have a hack which helps a lot with the first problem, but the second > > is difficult (especially with objects a long way behind the camera). > > Its always possible to just do another pass for these objects but this > > reduces the main advantage (in my opinion) of using this method - > > Which is that you can draw shadows for everything in a single pass > > (at the expense of needing a larger shadow buffer). > > Since these objects are only casting shadows, can you not just draw them, > but clamp their depth values to 0.0 (or 1.0 - depends which ray round you do > your stuff). Their actual shadows don't need any depth precision, all you > need to know about them is that they are in front of everything else. The problem with that is, in order to render the shadows into the scene you do a projective mapping of the shadow buffer... because of the perspective transform used to draw the buffer, there's a difficulty in making the images of those objects to put in the buffer. I.e., those objects are behind the near-clip plane of the shadow-buffer frustum. So you need to clip; otherwise the shadows of those objects will be upside down and/or all screwed up in the shadow buffer. 
The straightforward hack is to move the projection point of the shadow buffer backwards, until it's behind everything that needs to cast a shadow. However, like Steve says, this compromises the resolution of the buffer, and reduces the benefit of doing the perspective transform. Certainly it works up to a point, and the degradation is fairly smooth, but eventually you get really pixelated shadows. But, maybe there's a clever trick that would work? Somehow use a different transform when drawing the problem objects into the shadow buffer? But, I think that's bound to produce screwy results when you reproject the buffer into the scene. Or, maybe there's a way to exploit the reverse-projection thing. I doubt it though. So multiple shadow buffers, or somehow avoiding the situation, are the only solutions I know of. I think avoidance is feasible if you don't use dynamic lights to shadow the environment, so it's basically just characters that use the shadow buffers. But then again that reduces the appeal. Maybe one perspective shadow buffer, and one conventional shadow buffer could do a decent job for everything? I.e. classify each casting object according to which technique will give better results for the current light & view position. Then add them together on-screen. -T |
From: Steve L. <st...@mo...> - 2002-09-25 17:54:21
|
> > > 2) how to deal with objects behind the camera casting shadows. > > Since these objects are only casting shadows, can you not > just draw them, > > but clamp their depth values to 0.0 (or 1.0 - depends which > ray round you do > > your stuff). Their actual shadows don't need any depth > precision, all you > > need to know about them is that they are in front of > everything else. > > The problem with that is, in order to render the shadows into the > scene you do a projective mapping of the shadow buffer... because of > the perspective transform used to draw the buffer, there's a > difficulty in making the images of those objects to put in the buffer. > I.e., those objects are behind the near-clip plane of the > shadow-buffer frustum. So you need to clip; otherwise the shadows of > those objects will be upside down and/or all screwed up in the shadow > buffer. I think there may be something in what Tom is suggesting though (I have tried a few hacks along this line myself). Points behind the camera end up at coordinates with -w in post-perspective space - I think this is why they get clipped when performing the second projection (the point light camera). If you could somehow clamp them at w=0 before the second projection wouldn't they just end up at the far plane of the unit-cube? (at z=-1 in post-perspective space, the opposite end from the point light). Maybe it's possible if you break the transform into two steps? Steve. |
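[Editor's note] Steve's claim about the sign of w is easy to verify numerically. A small sketch, with the matrix convention assumed (column vectors, OpenGL-style projection, camera looking down -z), since the thread doesn't fix one:

```python
import math

def perspective(fov_deg, aspect, zn, zf):
    # Standard OpenGL-style perspective matrix; w_clip ends up as -z_view.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (zf + zn) / (zn - zf), 2 * zf * zn / (zn - zf)],
            [0, 0, -1, 0]]

def transform(m, p):
    # 4x4 matrix times column vector
    return [sum(m[r][c] * p[c] for c in range(4)) for r in range(4)]

proj = perspective(60.0, 1.0, 0.1, 100.0)

in_front = transform(proj, [0.0, 0.0, -10.0, 1.0])  # 10 units ahead of camera
behind   = transform(proj, [0.0, 0.0,   5.0, 1.0])  # 5 units behind camera

print(in_front[3])  # → 10.0 : positive w, the homogeneous divide behaves
print(behind[3])    # → -5.0 : negative w, the divide flips the projected point
```

The negative w is exactly the boundary case Steve is reasoning about when he asks what happens if you clamp at w=0 before the second projection.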
From: Jon W. <hp...@mi...> - 2002-09-26 17:30:06
|
> The problem with that is, in order to render the shadows into the > scene you do a projective mapping of the shadow buffer... because of > the perspective transform used to draw the buffer, there's a > difficulty in making the images of those objects to put in the buffer. > I.e., those objects are behind the near-clip plane of the > shadow-buffer frustum. So you need to clip; otherwise the shadows of > those objects will be upside down and/or all screwed up in the shadow > buffer. I don't quite get it (I just may be dense here). -The objects behind the viewer camera, are still in front of the light, right? -When rendering the shadow buffer, the viewer for that is at the light? -Thus, the objects casting shadows are always in front of the viewer when rendering the buffer. -Modulo precision issues, you could push your shadow map near clip plane, when rendering the buffer, as close to the light source as necessary to avoid clipping out objects. Or you could use depth clamping. -Now, when rendering into the color buffer, you have to do a per-pixel projection from screen space into the shadow buffer. -This projection has to use the inverse _light_ projection matrix. -Thus, all objects in the shadow buffer are guaranteed to be closer than the light at this point. -I concede that I don't have the projective shadow mapping paper in front of me to see if it changes these rules in some fundamental way, but I thought that it didn't; it just "stretched" the shadow buffer on creation. -It does this stretching into a unit cube in screen space, which projects down to a point at the viewer. -Now, let's consider: how does this technique support lights behind the camera? -I thought the idea was to make the shadow map loosely cover the post- projection unit cube, but not fit it exactly to any specific face or area of the cube. 
-Thus, the projection for the shadow map is actually such that it correctly deals with anything between the light and the scene, that's further from the light than whatever your near clipping distance is when rendering the map. -If this is indeed a reasonable set-up, then clamping depth to 0 when rendering the map seems like it should work fine. > the appeal. Maybe one perspective shadow buffer, and one conventional > shadow buffer could do a decent job for everything? I.e. classify > each casting object according to which technique will give better > results for the current light & view position. Then add them together > on-screen. -I thought you needed one shadow buffer per light anyway? -Am I missing something really major here? -Your comments are making me very confused. That's a sad feeling. Cheers, / h+ |
From: Steve L. <st...@mo...> - 2002-09-26 22:37:48
|
> > The problem with that is, in order to render the shadows into the > > scene you do a projective mapping of the shadow buffer... because of > > the perspective transform used to draw the buffer, there's a > > difficulty in making the images of those objects to put in > the buffer. > > I.e., those objects are behind the near-clip plane of the > > shadow-buffer frustum. So you need to clip; otherwise the > shadows of > > those objects will be upside down and/or all screwed up in > the shadow > > buffer. > I don't quite get it (I just may be dense here). > > -The objects behind the viewer camera, are still in front of > the light, right? > -When rendering the shadow buffer, the viewer for that is at > the light? > -Thus, the objects casting shadows are always in front of the > viewer when > rendering the buffer. I think you are correct about these three points. > -Modulo precision issues, you could push your shadow map near > clip plane, > when rendering the buffer, as close to the light source as > necessary to > avoid clipping out objects. Or you could use depth clamping. > > -Now, when rendering into the color buffer, you have to do a per-pixel > projection from screen space into the shadow buffer. > -This projection has to use the inverse _light_ projection matrix. > -Thus, all objects in the shadow buffer are guaranteed to be > closer than > the light at this point. I'm not sure what you mean about needing to use the inverse of the light projection matrix though. Here's the method I use to do directional lights:

Matrix view;       // World -> viewer camera
Matrix projection; // Viewer camera projection matrix
Matrix vp = view * projection;

This matrix maps from world space to post-perspective space. It maps points inside the viewer frustum to the cube (-1,-1,-1)...(1,1,1).

Vector4 light_pos = vp * light_direction;
light_pos /= light_pos.w;

This gives you the position of a point light in post-perspective space that corresponds to the directional light in world space. 
Now you set up your light camera at this position, point it at (0,0,0) and adjust the angle so it can see the whole cube. You also need to adjust the near and far clipping planes. Next you build the matrix which maps between world space and the shadow buffer:

Matrix shadow = light_view * light_projection * vp;

This matrix is used to render the object into the shadow buffer - the same matrix is also used when mapping the buffer onto the geometry later (it's the same thing - you want to take a world position and find out where it is on the shadow buffer - no inverse as far as I can see). The details are a bit more complicated for point lights, but basically the same (I think - I haven't actually implemented them yet). > -I concede that I don't have the projective shadow mapping > paper in front > of me to see if it changes these rules in some fundamental way, but I > thought that it didn't; it just "stretched" the shadow buffer > on creation. > -It does this stretching into a unit cube in screen space, > which projects > down to a point at the viewer. > -Now, let's consider: how does this technique support lights > behind the > camera? > -I thought the idea was to make the shadow map loosely cover the post- > projection unit cube, but not fit it exactly to any specific > face or area > of the cube. Yes, it's a loose fit because you are aiming a camera at a cube (the camera is always on a plane at Z=1+xxxx, where xxxx depends on the distances of the near and far clipping planes of the viewer camera - usually quite small, about 0.2 for my setup). > -Thus, the projection for the shadow map is actually such > that it correctly > deals with anything between the light and the scene, that > further from the > light than whatever your near clipping distance is when > rendering the map. > -If this is indeed a reasonable set-up, then clamping depth to 0 when > rendering the map seems like it should work fine. 
With the setup above simply clamping the depth does not seem to work. > -I thought you needed one shadow buffer per light anyway? Yes, it's the same as with the normal shadow buffer algorithm but you get much better usage of the pixels. If you are trying to render shadows for a large area this helps a lot. With a 1024x1024 buffer I am able to draw shadows for everything in my level in one go - if I want to get the same kind of results using the normal algorithm I need to tile the buffer quite a few times. > -Am I missing something really major here? > -Your comments are making me very confused. That's a sad feeling. I think this is probably my fault - I'm not very good at explaining this sort of thing (two projections in a row IS confusing to start with). Steve. |
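[Editor's note] Steve's directional-light setup can be sketched end to end. This is a hedged reconstruction under assumptions he doesn't state: column-vector matrices, an OpenGL-style projection, and (to keep it short) an identity view matrix, i.e. the camera at the origin looking down -z. Only the idea - transform the light direction by the view-projection matrix, then divide by w to get the post-perspective point light - is from his post; the helpers and numbers are illustrative:

```python
import math

def perspective(fov_deg, aspect, zn, zf):
    # OpenGL-style perspective matrix; w_clip = -z_view.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (zf + zn) / (zn - zf), 2 * zf * zn / (zn - zf)],
            [0, 0, -1, 0]]

def transform(m, p):
    # 4x4 matrix times column vector
    return [sum(m[r][c] * p[c] for c in range(4)) for r in range(4)]

vp = perspective(60.0, 1.0, 0.1, 100.0)   # view matrix is identity here

# w = 0 because a directional light is a direction, not a point.
light_dir = [0.3, -1.0, -0.5, 0.0]
lc = transform(vp, light_dir)
light_pos = [c / lc[3] for c in lc]       # the post-perspective point light

print(light_pos[3])        # → 1.0 : after the divide it's an ordinary point
print(light_pos[2] > 1.0)  # → True : it sits just beyond the far plane,
                           #   matching Steve's "plane at Z=1+xxxx" remark
```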
From: Jon W. <hp...@mi...> - 2002-09-27 04:31:34
|
Steve: > This matrix is used to render the object into the shadow buffer > - the same matrix is also used when mapping the buffer onto the > geometry later (it's the same thing - you want to take a world > position and find out where it is on the shadow buffer - no inverse > as far as I can see). You are correct. My mistake. > The details are a bit more complicated for point lights, but > basically the same (I think - I haven't actually implemented them > yet). Again, I don't have the paper reference handy right here, but as far as I recall, a point light source moves to another point in the post- perspective space. That may mean that they distort over large distances (or with wide perspective) -- my memory gets fuzzy. > > -Am I missing something really major here? > > -Your comments are making me very confused. That's a sad feeling. > > I think this is probably my fault - I'm not very good at explaining this > sort of thing (two projections in a row IS confusing to start with). Yes, a little bit :-) Thatcher: > So the problem here may be that when rendering into the shadow buffer, > objects in the scene *still* need to observe the *camera* near-plane > (not just the light near-plane); if any faces cross that camera > near-plane, you will get wacky projection. Maybe, I think. I need to > check the math to be sure, so don't take my word for it. This is the part I don't get. Well, unless you say this, because you want to use the clipping plane to avoid going through the singularity at the 0-plane in camera space. But suppose the light projection doesn't actually include the camera view matrix. Instead, the shadow buffer is fitted to the view space clip frustum, using an asymmetric shadow buffer view frustum, with origin in the light source point. This means that you don't get "perfect" fit on the perspective projected frustum, but you should get a fit that's as good as it can be made, while still dealing with objects behind the camera. 
You could switch between "real" projected shadow buffers, and this "fitted" version, based on where the light position is, I suppose (although there would be a pop visible as you passed the switching point). Or you could just keep going with a custom-fitted frustum for the light source for all cases, although you'd need a better way of figuring out what that frustum looks like in the general case. Cheers, / h+ |
From: Thatcher U. <tu...@tu...> - 2002-09-28 05:01:43
|
On Sep 26, 2002 at 09:30 -0700, Jon Watte wrote: > > Thatcher: > > > So the problem here may be that when rendering into the shadow buffer, > > objects in the scene *still* need to observe the *camera* near-plane > > (not just the light near-plane); if any faces cross that camera > > near-plane, you will get wacky projection. Maybe, I think. I need to > > check the math to be sure, so don't take my word for it. > > This is the part I don't get. Well, unless you say this, because you want > to use the clipping plane to avoid going through the singularity at the > 0-plane in camera space. Right, that's the issue right there, the camera's 0-plane. > But suppose the light projection doesn't actually include the camera > view matrix. Instead, the shadow buffer is fitted to the view space > clip frustum, using an asymmetric shadow buffer view frustum, with origin > in the light source point. > > This means that you don't get "perfect" fit on the perspective projected > frustum, but you should get a fit that's as good as it can be made, while > still dealing with objects behind the camera. You could switch between > "real" projected shadow buffers, and this "fitted" version, based on where > the light position is, I suppose (although there would be a pop visible > as you passed the switching point). Or you could just keep going with a > custom-fitted frustum for the light source for all cases, although you'd > need a better way of figuring out what that frustum looks like in the > general case. I went back and looked at the paper. So I think what you're suggesting, with the "fitted" frustum, is exactly what the paper recommends, and what I did in my demo. Basically, move the pseudo camera projection point back, to include the necessary occluders. You can do it smoothly, to avoid any pops, but the more you do it, the more it ruins the advantage of the perspective shadow buffer effect. The paper describes this as a recommended alternative to using two shadow buffers. 
But, I think the depth-clamping suggestion might work instead: use the real camera projection matrix, render occluder objects, clipped in front of camera_z=0. Then flip the camera projection around, and render occluders clipped behind camera_z=0, with a constant minimum z value. Only three problems: 1) the clipping, 2) the small gap on either side of camera_z=0, and 3) does this actually work? If it does work, I think the other problems are minor. -- Thatcher Ulrich http://tulrich.com |
From: Thatcher U. <tu...@tu...> - 2002-09-26 23:06:22
|
On Thu, 26 Sep 2002, Jon Watte wrote: > > > The problem with that is, in order to render the shadows into the > > scene you do a projective mapping of the shadow buffer... because of > > the perspective transform used to draw the buffer, there's a > > difficulty in making the images of those objects to put in the buffer. > > I.e., those objects are behind the near-clip plane of the > > shadow-buffer frustum. So you need to clip; otherwise the shadows of > > those objects will be upside down and/or all screwed up in the shadow > > buffer. > > I don't quite get it (I just may be dense here). I'm pretty sure I was being dense, and made a very confusing post. I think there's still an issue though, that I was clumsily groping towards. > -The objects behind the viewer camera, are still in front of the light, > right? > -When rendering the shadow buffer, the viewer for that is at the light? > -Thus, the objects casting shadows are always in front of the viewer when > rendering the buffer. All agreed so far. > -Modulo precision issues, you could push your shadow map near clip plane, > when rendering the buffer, as close to the light source as necessary to > avoid clipping out objects. Or you could use depth clamping. So the problem here may be that when rendering into the shadow buffer, objects in the scene *still* need to observe the *camera* near-plane (not just the light near-plane); if any faces cross that camera near-plane, you will get wacky projection. Maybe, I think. I need to check the math to be sure, so don't take my word for it. > -Now, when rendering into the color buffer, you have to do a per-pixel > projection from screen space into the shadow buffer. > -This projection has to use the inverse _light_ projection matrix. > -Thus, all objects in the shadow buffer are guaranteed to be closer than > the light at this point. Yup, I believe you are correct here. 
> -I concede that I don't have the projective shadow mapping paper in front > of me to see if it changes these rules in some fundamental way, but I > thought that it didn't; it just "stretched" the shadow buffer on creation. > -It does this stretching into a unit cube in screen space, which projects > down to a point at the viewer. > -Now, let's consider: how does this technique support lights behind the > camera? > -I thought the idea was to make the shadow map loosely cover the post- > projection unit cube, but not fit it exactly to any specific face or area > of the cube. Agreed. > -Thus, the projection for the shadow map is actually such that it correctly > deals with anything between the light and the scene, that further from the > light than whatever your near clipping distance is when rendering the map. Not sure I agree here; see comment above. If you visualize the unit cube in screen space, and the light is somewhere in that space (outside the cube), you can see how some object also outside the unit cube can cast shadows into the unit cube. Now, an object that crosses z==0 in camera space transforms to some infinitely huge misshapen thing in screen-space, so that makes it impossible for the shadow-buffer frustum to completely encompass that object. The paper talks about doing some analysis to figure out just what type of volume the shadow buffer needs to take an image of, which IIRC is where the earlier-mentioned near-plane hijinks come in. > -If this is indeed a reasonable set-up, then clamping depth to 0 when > rendering the map seems like it should work fine. Yeah, the depth clamping part sounds like a worthwhile and workable thing, I'm just not sure about the rest of it, i.e. fitting the necessary image in the shadow buffer. > > the appeal. Maybe one perspective shadow buffer, and one conventional > > shadow buffer could do a decent job for everything? I.e. 
classify > > each casting object according to which technique will give better > > results for the current light & view position. Then add them together > > on-screen. > > -I thought you needed one shadow buffer per light anyway? Yes. > -Am I missing something really major here? Nope; I'm only saying I'm scared of 2+ buffers per light, instead of just one... > -Your comments are making me very confused. That's a sad feeling. Sorry, like I said I think I was confusing my clip planes in the earlier post. My grasp on this topic is very tentative, especially since I haven't actually worked on it in months, so don't assume I know what I'm talking about :) -Thatcher |
From: Tom F. <to...@mu...> - 2002-09-25 17:38:24
|
Oh, OK - yes, the near clip plane is just going to make them all vanish. Doh. OK, so the _projection_ is correct, it's just that the clip plane is doing the wrong test (_precisely_ the wrong test in fact - you want to only draw the things behind the clip plane as saturated). I wonder if simply negating the projection matrix (i.e. changing all its signs) is enough. Oh hang on - that also switches the signs of the other clip planes, producing nothing. Erm... must be something you can do with the projection matrix to get it working, but I'd need to work through the maths. Things completely behind the -ve clip plane (i.e. further than 10cm or whatever behind the camera) don't seem too tricky - just flip a few signs and use the viewport re-mapping to map all Z values to 0.0f. And just hope there isn't anything between NCP and -NCP in the scene that casts a shadow (probably won't be). Tom Forsyth - purely hypothetical Muckyfoot bloke. This email is the product of your deranged imagination, and does not in any way imply existence of the author. > -----Original Message----- > From: Thatcher Ulrich [mailto:tu...@tu...] > Sent: 25 September 2002 18:10 > To: gda...@li... > Subject: RE: [Algorithms] Perspective shadow maps > > > On Wed, 25 Sep 2002, Tom Forsyth wrote: > > > > -----Original Message----- > > > From: Steve Legg [mailto:st...@mo...] > > > > > > Hello, > > > > > > I have also implemented perspective shadow maps (so far only for > > > directional lights). I'd be interested in hearing how people have > > > gotten around the two biggest problems, namely: 1) when > the light is > > > parallel to the viewing direction the point light in > post-perspective > > > space ends up very close to the unit cube (requires a very wide > > > viewing angle which reduces the shadow resolution > drastically). And > > > 2) how to deal with objects behind the camera casting shadows. 
> > > > > > The original paper suggests moving the viewer backwards > as a solution > > > to these problems, however this can end up reducing the shadow > > > resolution > > > too much. > > > > > > I have a hack which helps a lot with the first problem, > but the second > > > is difficult (especially with objects a long way behind > the camera). > > > Its always possible to just do another pass for these > objects but this > > > reduces the main advantage (in my opinion) of using this method - > > > Which is that you can draw shadows for everything in a single pass > > > (at the expense of needing a larger shadow buffer). > > > > Since these objects are only casting shadows, can you not > just draw them, > > but clamp their depth values to 0.0 (or 1.0 - depends which > ray round you do > > your stuff). Their actual shadows don't need any depth > precision, all you > > need to know about them is that they are in front of > everything else. > > The problem with that is, in order to render the shadows into the > scene you do a projective mapping of the shadow buffer... because of > the perspective transform used to draw the buffer, there's a > difficulty in making the images of those objects to put in the buffer. > I.e., those objects are behind the near-clip plane of the > shadow-buffer frustum. So you need to clip; otherwise the shadows of > those objects will be upside down and/or all screwed up in the shadow > buffer. > > The straightforward hack is to move the projection point of the shadow > buffer backwards, until it's behind everything that needs to cast a > shadow. However, like Steve says, this compromises the resolution of > the buffer, and reduces the benefit of doing the perspective > transform. Certainly it works up to a point, and the degradation is > fairly smooth, but eventually you get really pixelated shadows. > > But, maybe there's a clever trick that would work? 
Somehow use a > different transform when drawing the problem objects into the shadow > buffer? But, I think that's bound to produce screwy results when you > reproject the buffer into the scene. Or, maybe there's a way to > exploit the reverse-projection thing. I doubt it though. > > So multiple shadow buffers, or somehow avoiding the situation, are the > only solutions I know of. I think avoidance is feasible if you don't > use dynamic lights to shadow the environment, so it's basically just > characters that use the shadow buffers. But then again that reduces > the appeal. Maybe one perspective shadow buffer, and one conventional > shadow buffer could do a decent job for everything? I.e. classify > each casting object according to which technique will give better > results for the current light & view position. Then add them together > on-screen. > > -T |
From: Steve L. <st...@mo...> - 2002-09-25 17:51:36
|
> > I have also implemented perspective shadow maps (so far only for > > directional lights). I'd be interested in hearing how people have > > gotten around the two biggest problems, namely: 1) when > the light is > > parallel to the viewing direction the point light in > post-perspective > > space ends up very close to the unit cube (requires a very wide > > viewing angle which reduces the shadow resolution drastically). And > > 2) how to deal with objects behind the camera casting shadows. > > My thoughts (and I haven't done it yet) are: > > 1) Move the light a little bit out so that it's not within some small > cone extending out from the viewer in the direction of the viewing > vector. 99% of the time, you won't notice. I tried this but it didn't quite work - the range of angles where you are "too close" to the unit-cube in post-perspective space is fairly large. Obviously this depends on how much resolution you are willing to lose, but I found that in order to keep my shadows looking acceptable I had to move the light too much (you could see the shadows moving). Instead of changing the light direction you can adjust the position of the point light in post-perspective space (eg. by moving it back in Z) - I think this has the same effect. Steve. |
From: Nicolas T. <nic...@po...> - 2002-09-26 08:42:51
|
You could use the pixel shader (assuming you've got one on the target platform) to clip negative w values. I.e. assuming you're modulating your shadow map with the background (i.e. non-shadowed areas are white) you could use this pseudo-code:

ps.1.4                               // Haven't tried with lower shaders, should also be possible (?)
texcrd r0.xyz, t1.xyw                // r0 = lightmap texture coordinates
texld  r3, t1_dw.xyw                 // r3 = lightmap color
; c0 is full white (1.0, 1.0, 1.0, 1.0)
cmp r0.rgb, r0.z, r3.rgb, c0.rgb     // Clamp color to white if w sign is negative.

This is to avoid "back projection". If you don't have access to pixel shaders then you could use a 1D "clipping" texture in a second layer to perform the clipping. Nick - PowerVR DevRel -----Original Message----- From: Steve Legg [mailto:st...@mo...] Sent: 25 September 2002 19:53 To: gda...@li... Subject: RE: [Algorithms] Perspective shadow maps > > > 2) how to deal with objects behind the camera casting shadows. > > Since these objects are only casting shadows, can you not > just draw them, > > but clamp their depth values to 0.0 (or 1.0 - depends which > ray round you do > > your stuff). Their actual shadows don't need any depth > precision, all you > > need to know about them is that they are in front of > everything else. > > The problem with that is, in order to render the shadows into the > scene you do a projective mapping of the shadow buffer... because of > the perspective transform used to draw the buffer, there's a > difficulty in making the images of those objects to put in the buffer. > I.e., those objects are behind the near-clip plane of the > shadow-buffer frustum. So you need to clip; otherwise the shadows of > those objects will be upside down and/or all screwed up in the shadow > buffer. I think there may be something in what Tom is suggesting though (I have tried a few hacks along this line myself). 
Points behind the camera end up at coordinates with -w in post-perspective space - I think this is why they get clipped when performing the second projection (the point light camera). If you could somehow clamp them at w=0 before the second projection, wouldn't they just end up at the far plane of the unit cube? (at z=-1 in post-perspective space, the opposite end from the point light). Maybe it's possible if you break the transform into two steps?

Steve. |
From: Andrew G. <and...@ni...> - 2002-09-28 09:50:50
|
I have implemented this trick and it works perfectly well, though in my realization the color channel is used to represent depth. This gives bilinear filtering of depth values, makes it possible to set the offset to just 1 without additional hacks, and runs on a wider range of hardware. When the color channel is used as depth there is no need for an additional pass with depth clamping, since the clamp happens through color saturation.

In the original paper the authors propose using a modified world-to-camera projection to get rid of problems with geometry between the camera and the near clipping plane not casting shadows. I have not encountered this problem, and actually don't see how it could happen unless the near clipping plane for the shadow projection is chosen wrongly and clips shadow-casting geometry. But it was necessary to use a modified camera projection for a different reason: our far/near z ratio is about 1000, and when I used that matrix for the trick the shadow texels looked elongated. Probably this happens because all the geometry gets compressed near the clipping plane. Since we fit the shadow projection to the whole cube, and the point where this happens can be far from (0,0,1), we can end up using only a fraction of the depth texture's resolution because all the geometry maps onto a line in the shadow texture. The current realization uses a modified camera projection with a far/near ratio of 2.

To fit the shadow projection I used shadow-projection camera space. It is easier than it sounds: to start, pick any frustum angle, aim the camera from the projected point towards (0,0,0), and build the original shadow projection matrix. Then project the corners of a convex polyhedron containing all the shadow-target geometry into shadow-projection camera space. Finally, modify the x & y rows of the shadow projection matrix so that all points fit into the (-1,-1)...(1,1) range, and you are done. The near/far clipping planes can be adjusted this way too.

> This matrix maps from world space to post-perspective space.
> It maps points inside the viewer frustum to the cube
> (-1,-1,-1)...(1,1,1).

Actually it maps to the parallelepiped (-1,-1,0)...(1,1,1), with (0,0,0) at the center point of the near clipping plane. You could save some resolution there.

I had some precision problems until I realized that it is better to use a parallel projection instead of a perspective one when the light source projects to some distant point. But that was said in the paper, I believe. |
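The fitting step Andrew describes can be sketched as follows. This is my own reconstruction of the idea, not his code; the `transform` helper and the row-major matrix layout are assumptions. Project the corners of the bounding polyhedron with the initial shadow projection, measure their NDC extent, and fold a scale-and-offset into the matrix's x & y rows so that extent fills (-1,-1)...(1,1):

```python
def transform(m, p):
    # Multiply a 4x4 row-major matrix by a homogeneous point.
    return tuple(sum(m[i][j] * p[j] for j in range(4)) for i in range(4))

def fit_xy_rows(proj, corners):
    # Project every corner of the bounding polyhedron and find its NDC extent.
    ndc = []
    for p in corners:
        x, y, z, w = transform(proj, p)
        ndc.append((x / w, y / w))
    min_x = min(q[0] for q in ndc); max_x = max(q[0] for q in ndc)
    min_y = min(q[1] for q in ndc); max_y = max(q[1] for q in ndc)
    # Scale/offset that maps [min, max] onto [-1, 1].
    sx = 2.0 / (max_x - min_x); ox = -0.5 * (min_x + max_x) * sx
    sy = 2.0 / (max_y - min_y); oy = -0.5 * (min_y + max_y) * sy
    # Fold it into the matrix: new_x = sx*x + ox*w (likewise for y), so
    # only the x & y rows change, as in the description above.
    fitted = [row[:] for row in proj]
    for c in range(4):
        fitted[0][c] = sx * proj[0][c] + ox * proj[3][c]
        fitted[1][c] = sy * proj[1][c] + oy * proj[3][c]
    return fitted
```

As a sanity check: with an identity "projection" and corners (0,0,0) and (2,1,0), the fitted matrix maps those corners to NDC (-1,-1) and (1,1). Near/far can be fitted the same way by adjusting the z row against the corners' depth extent.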
From: Jon W. <hp...@mi...> - 2002-09-29 00:07:58
|
> I have implemented this trick and it works perfectly well. Though in my
> realization the color channel is used to represent depth. This gives
> bilinear filtering of depth values and it is possible to set offset to

Bilinear filtering of depth values is not what you want. In the edge case, you want the shadowed amount to vary smoothly from 1 to 0. However, with bilinear filtering of the depth value and a single sample, you will only get a single 1-or-0 answer out. You need hardware that supports percent-closer filtering instead. Or fragment shader hardware, where you can write it yourself (at the cost of, oh, 10+ fragment ops...)

Cheers,

/ h+ |
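The compare-then-filter ordering Jon describes can be sketched in a few lines. This 2x2-tap percentage-closer filter is my own illustration (the plain-list `depth_map` layout and the bias value are assumptions): the bilinear weights are applied to the binary comparison results, not to the depths, which is what produces fractional shadow values at occluder edges.

```python
def pcf_shadow(depth_map, u, v, receiver_depth, bias=0.001):
    # 2x2-tap percentage-closer filtering sketch: compare each sample to
    # the receiver depth first, then blend the 0/1 results bilinearly.
    h = len(depth_map); w = len(depth_map[0])
    x = u * (w - 1); y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0

    def lit(px, py):
        # Binary depth compare: 1.0 if the receiver is the closest surface.
        return 1.0 if receiver_depth <= depth_map[py][px] + bias else 0.0

    # Bilinear weights applied to the comparison results, not the depths.
    return (lit(x0, y0) * (1 - fx) * (1 - fy) +
            lit(x1, y0) * fx * (1 - fy) +
            lit(x0, y1) * (1 - fx) * fy +
            lit(x1, y1) * fx * fy)

# A depth edge: left texels at 0.2 (occluded), right texels at 0.8 (lit).
edge = [[0.2, 0.8], [0.2, 0.8]]
print(pcf_shadow(edge, 0.5, 0.0, 0.5))  # -> 0.5 (half-shadowed at the edge)
```

Filtering the depths first and comparing once would instead threshold a blended depth against the receiver, yielding only 0 or 1 at that edge - the single-sample problem described above.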
From: Andrew G. <and...@ni...> - 2002-09-30 07:22:07
|
Actually, bilinear filtering of depth values is exactly what I want. If it was implemented for depth textures, the quality of depth shadows would be comparable to stencil volume shadows while being faster due to the lower fillrate demands. One more advantage of depth shadows is the ability to cast shadows from transparent objects where the transparency is binary (either transparent or not). I am using an additional pass to smooth the shadows with a 4-tap filter, at the cost of 4 texture fetches and 8 fragment ops.

One more idea - when using depth textures it is possible to use the NV_depth_clamp extension to get rid of the second pass that clamps depths to zero. What a pity it is not exposed in DirectX. |
From: Jon W. <hp...@mi...> - 2002-09-30 17:25:51
|
> Actually bilinear filtering of depth values is exactly what I want.

Well, good for you. It does give a different look.

> transparent or not). I am using an additional pass to smooth shadows
> with a 4-tap filter at the cost of 4 texture fetches and 8 fragment ops.

That's what percent-closer filtering gives you for free. Which is why it's a shame you have to code it yourself on some pieces of hardware :-(

Cheers,

/ h+ |
From: Rob J. <poc...@nt...> - 2002-09-30 07:54:11
|
> From: "Jon Watte" <hp...@mi...>
> <SNIP>
> However, with bilinear filtering of the depth value and a single
> sample, you will only get a single 1 or 0 answer out. You need
> hardware that supports percent-closer filtering instead. Or fragment
> shader hardware, where you can write it yourself (at the cost of, oh,
> 10+ fragment ops...)

This is exactly what I'm doing (% closer filtering) using the NV30 emulator for Nvidia's latest Cg shader contest. I get, oh, 3 fps :)

Rob J. |
From: Steve L. <st...@mo...> - 2002-09-25 15:53:32
|
> Yes, Thatcher Ulrich has. Check this thread (cunningly titled
> "Perspective Shadow Maps"):
> http://sourceforge.net/mailarchive/message.php?msg_id=1514744
>
> There was also someone else on the list who mentioned they had used
> them. And we'll be using them shortly as well - too many other things
> to implement first!
>
> Tom Forsyth - purely hypothetical Muckyfoot bloke.
>
> This email is the product of your deranged imagination,
> and does not in any way imply existence of the author.
>
> > -----Original Message-----
> > From: Petr Smilek [mailto:gu...@pr...]
> > Sent: 24 September 2002 20:50
> > To: gda...@li...
> > Subject: [Algorithms] Perspective shadow maps
> >
> > Hi,
> > has anybody successfully implemented and used
> > perspective shadow maps (except the authors) in practice?
> > Is there a demo showing them in action available?
> > Thank you in advance.
> >
> > Petr S.

Hello,

I have also implemented perspective shadow maps (so far only for directional lights). I'd be interested in hearing how people have gotten around the two biggest problems, namely: 1) when the light is parallel to the viewing direction, the point light in post-perspective space ends up very close to the unit cube (requiring a very wide viewing angle, which reduces the shadow resolution drastically); and 2) how to deal with objects behind the camera casting shadows.

The original paper suggests moving the viewer backwards as a solution to these problems, however this can end up reducing the shadow resolution too much.

I have a hack which helps a lot with the first problem, but the second is difficult (especially with objects a long way behind the camera). It's always possible to just do another pass for these objects, but this reduces the main advantage (in my opinion) of using this method - which is that you can draw shadows for everything in a single pass (at the expense of needing a larger shadow buffer).

Thanks,
Steve. |