Thread: RE: [Algorithms] object-space bump mapping
From: Sim D. <SDi...@nv...> - 2002-03-26 20:49:25
I've been thinking about this, too. If you put the normals in default pose
space (i.e. animation matrices are all identity), you can skin the light
vectors, but skip the pose space -> texture space step altogether. So, at the
cost of more video memory for unique normal maps (which can also be used for
bullet holes, updating clothing, etc.), you save 3 vertex instructions per
bumped light.

-----Original Message-----
From: Jon Watte [mailto:hp...@mi...]
Sent: Tuesday, March 26, 2002 12:13 PM
To: Charles Bloom; gda...@li...
Subject: RE: [Algorithms] object-space bump mapping

Yes, I've been thinking about this for a long time (indeed, that's the
per-pixel bump solution that I first got interested in). The problem, as you
allude to a little bit, is that this just does not scale at all to
skinned/posed meshes, and only marginally better to morphed meshes (you'd
need to calculate your normal map as a normalized weighted sum of one normal
map per morph target, which eats texture space and stages like there's no
tomorrow).

Now, if you had sufficient pixel shading power, you could probably do the
blended normal rotate for at least one joint using a cunning sequence of
dot3 applications. Perhaps the Radeon 8500 could even do that today in
hardware. But then you'd have burnt your textures, so you couldn't actually
do reflective specular diffuse (or whatever) with that bump... I'd love to
be proven wrong.

Cheers,

/ h+

> -----Original Message-----
> From: gda...@li... [mailto:gda...@li...] On Behalf Of Charles Bloom
> Sent: Tuesday, March 26, 2002 11:36 AM
> To: gda...@li...
> Subject: [Algorithms] object-space bump mapping
>
> We've been thinking a lot about object-space bump mapping here at
> OddWorld, and it seems to me that it's much more attractive than what
> people usually do, which is surface-local-space bump mapping. Let me
> review for clarity, and I'll assume for the moment static objects
> (i.e. not skinned):
>
> "Surface-local bump mapping"
>
> - The normal map is surface-local; e.g. flat surfaces have normals that
>   are just unit z.
> - Per vertex, you must store a local frame (e.g. as two vectors).
> - In a v-shader, transform the light vector into surface-local space at
>   each vertex.
> - This surface-local per-vertex L vector is interpolated and handed to
>   the p-shader.
> - In the p-shader: L may no longer be normalized, so renormalize if you
>   desire, then dot L with N from the normal map.
>
> ++++++++++++++++++++++++++++++
>
> "Object-space bump mapping"
>
> - The normal map is in object space; e.g. very colorful.
> - Provide L in object space as a pixel-shader constant.
> - Zero per-vertex work needed, no local frame needed.
> - In the p-shader: dot L with N from the normal map. No renormalization
>   needed.
>
> ++++++++++++++++++++++++++++++
>
> I actually did object-space bump mapping in Galaxy1 because it can be
> done with the fixed-function pipe; all you do is put the light color in
> the tfactor, and you have just a single DP3 operation!
>
> Anyway, here are the disadvantages of object-space bump mapping:
>
> 1. You cannot tile or reuse the normal map; that is, the geometry and
>    the normal map are tied explicitly.
> 2. The normal map does not palettize as well. Surface-local bump maps
>    take pretty well to palettizing.
> 3. It behaves badly under mip-mapping; e.g. overall brightness will
>    change as it goes into the distance.
>
> And the advantages are:
>
> 1. Faster.
> 2. No need to store a per-vertex frame, so less memory used.
> 3. No problems with finding a smooth local-frame coverage of your
>    object.
>
> (Caveat about #3: you actually still have this problem, since you must
> uv-map your object-space normal map onto the object. However, this
> operation is actually much more forgiving than finding a smooth coverage
> with local frames. For example, you can replicate chunks of pixels in
> the uv map to patch up seams. It's mathematically impossible to smoothly
> put a local frame on a sphere, but it is completely possible to
> C0-smoothly cover it with textures, using overlapping and
> seam-matching.)
>
> ----------------------------------------------------
> Charles Bloom    cb...@cb...    www.cbloom.com

_______________________________________________
GDAlgorithms-list mailing list
GDA...@li...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
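The two pipelines Bloom compares differ only in where the dot product's inputs live; the per-pixel math itself is the same DP3. A minimal sketch of the object-space path in Python (helper names like `decode_normal` are my own, not from the thread; real DX8-era hardware did the bias/scale with the `_bx2` register-combiner modifier):

```python
import math

def decode_normal(rgb):
    # Normal maps store unit vectors biased into [0, 255]:
    # n = rgb / 127.5 - 1.  (128, 128, 255) decodes to roughly +z.
    return tuple(c / 127.5 - 1.0 for c in rgb)

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def object_space_diffuse(light_dir_object, normal_texel, light_color):
    # L is a per-object constant, so there is no per-vertex transform and
    # no renormalization -- this one clamped DP3 is the whole pixel shader.
    n = decode_normal(normal_texel)
    ndotl = max(0.0, sum(a * b for a, b in zip(light_dir_object, n)))
    return tuple(ndotl * c for c in light_color)

# A texel encoding +z, lit head-on by a white light, comes out fully lit:
lit = object_space_diffuse(normalize((0.0, 0.0, 1.0)),
                           (128, 128, 255),
                           (1.0, 1.0, 1.0))
```

The surface-local variant would run the same `object_space_diffuse` body, but only after the vertex shader had rotated L into each vertex's tangent frame.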
From: Dan B. <dan...@mi...> - 2002-03-26 22:31:36
Along these lines, something we have tried (after talking to Hugues Hoppe)
is using an object-space normal map on a progressive mesh that has been
reduced.

Example: You can take the high-res mesh the artist created and p-mesh it
down to a lower res. Then you use the original high-res mesh to compute the
normals. The result is a mesh which looks almost identical to the original
mesh at often 25% of the polygon/vertex count...

I made a utility that did this; we'll see if we actually have time to ship
it. :)

Dan Baker
Direct3D

> -----Original Message-----
> From: Charles Bloom [mailto:cb...@cb...]
> Sent: Tuesday, March 26, 2002 11:36 AM
> To: gda...@li...
> Subject: [Algorithms] object-space bump mapping
>
> [snip]
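Dan's bake step can be sketched crudely: for each sample position on the reduced mesh's surface, look up a normal from the high-res mesh. The version below just takes the nearest high-res *vertex*; production tools instead find the closest point on the high-res surface, or raycast along the low-res normal (all names here are my own):

```python
import math

def bake_object_space_normals(low_res_samples, high_res_verts, high_res_normals):
    """For each texel's object-space position on the low-res mesh, fetch a
    normal from the high-res mesh. Crude nearest-vertex lookup -- a sketch
    of the idea, not how a shipping tool would sample the surface."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    baked = []
    for p in low_res_samples:
        i = min(range(len(high_res_verts)),
                key=lambda k: dist2(p, high_res_verts[k]))
        n = high_res_normals[i]
        l = math.sqrt(sum(c * c for c in n))
        baked.append(tuple(c / l for c in n))  # store renormalized
    return baked
```

The baked list would then be quantized into the object-space normal map's texels, giving the "almost identical at 25% of the count" effect Dan describes.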
From: Doug R. <DR...@nv...> - 2002-03-27 01:03:34
I have been looking into doing object-space bump mapping for a little while
also. It is easy to simplify a mesh and generate the normals from the
original model, BUT:

- Simplification has to preserve the hierarchy.
- Normals have to be transformed into the same space as the triangle, or
  you cannot animate the surface or combine with existing bump maps.
- Sharp edges cannot share normal-map edges (one texel will map onto both
  sides of a sharp edge).

We are working on this and hope to have tools that will create simplified
meshes + normal maps.

-Doug
NVIDIA Developer Tools

> -----Original Message-----
> From: Charles Bloom [mailto:cb...@cb...]
> Sent: Tuesday, March 26, 2002 4:26 PM
> To: Game Dev Algorithms (E-mail)
> Subject: RE: [Algorithms] object-space bump mapping
>
> So what you're doing there is something like generating L*N (where L
> comes from a row in the texture matrix) and looking that up in a 1d
> texture to get a remapping of L*N, right?
>
> Same thing for H*N to do specular.
>
> At 03:54 PM 3/26/2002 -0800, Jon Watte wrote:
> > > I can't help feeling that there should be a better lighting approach
> > > given current hardware, that doesn't use the old hardware lighting
> > > ideas as building blocks. Maybe one day, I'll figure it out!
> >
> > I use the texture coordinate generation from normal maps and
> > reflection maps, and use the texture matrix to collapse to a 1D or 2D
> > texture look-up (in the cases where a cube map would be overkill).
> > Works for all diffuse (NORMAL_MAP) and specular (REFLECTION_MAP)
> > lighting equations I've tried so far. You can even use non-uniform
> > scale to hack some non-uniform surface scattering properties.
>
> ----------------------------------------------------
> Charles Bloom    cb...@cb...    www.cbloom.com
From: Willmott, A. <AWi...@ma...> - 2002-03-27 01:07:25
Yeah, I used that. It also provides a handy quick-and-dirty LOD -- you can
switch your skinning code between four-bone skinning and one-bone skinning
depending on character LOD. (Maybe dropping through three- and two-bone
skinning if it's worth it.) At some point you should start collapsing bone
influences too, of course. Smooth-skinned meshing only pays off if you have
a good close-up view of the problem areas (joints); depending on the game,
many of your characters may be far enough away to ignore it.

I'd be very interested in how well the "light skinning" approach might work.
Has anyone tried it? If you're dealing with tessellation (subdiv surfaces
etc.) and a wide range of LOD, not bothering with the tangent stuff could be
quite handy.

Cheers,

Andrew

> -----Original Message-----
> From: Charles Bloom [mailto:cb...@cb...]
> Sent: Tuesday, March 26, 2002 4:23 PM
> To: gda...@li...
> Subject: RE: [Algorithms] object-space bump mapping
>
> About skinned models:
>
> Here's a trick you should all be using: sort your bone indices so that
> the highest weight is the first index. Then just use this index for
> lighting, normals, etc.; i.e. don't do full weighted skinning. It looks
> just fine as long as your artists aren't insane (caveat emptor!).
>
> Anyway, to do object-space bumping with skins, just transform the light
> vector into "default pose space" at each vertex. This is just one matrix
> multiply. You will then have to renormalize your light vector in the
> pixel shader if you have large triangles.
>
> ----------------------------------------------------
> Charles Bloom    cb...@cb...    www.cbloom.com
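Bloom's "transform L into default pose space" step is just the inverse of the dominant bone's matrix applied to the light vector, once per vertex. A sketch under the one-bone trick (my own naming; rotation-only bone matrices assumed, so the inverse is the transpose):

```python
def transpose3(m):
    # For a pure-rotation bone matrix, inverse == transpose.
    return [[m[j][i] for j in range(3)] for i in range(3)]

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def light_to_pose_space(bone_matrices, vert_bone_indices, light_world):
    """Per vertex, rotate the world-space light vector back into default
    pose space using only that vertex's highest-weight bone. The
    object-space normal map, authored in the default pose, is then dotted
    against the interpolated result in the pixel shader."""
    return [mat_vec(transpose3(bone_matrices[b]), light_world)
            for b in vert_bone_indices]
```

With all bones at identity this degenerates to passing L through unchanged, which is exactly the static-object case from the original post.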
From: Doug R. <DR...@nv...> - 2002-03-27 01:32:43
Oh, and you usually cannot rely on the given texture coordinates, because
they do not guarantee that there is a one-to-one mapping of texels to
object space. For example, a cylindrical texture mapping of a head will
produce multiple uses of the same texel at the ear.

-Doug

> -----Original Message-----
> From: Doug Rogers
> Sent: Tuesday, March 26, 2002 5:03 PM
> To: Game Dev Algorithms (E-mail)
> Subject: RE: [Algorithms] object-space bump mapping
>
> [snip]
From: Tom F. <to...@mu...> - 2002-03-27 11:31:50
This process really needs to be part of any VIPM (actually, any CLOD)
process. When using vertex lighting, VIPM pops are something like ten times
more noticeable. If you replace it with bumpmapping (even ropey old emboss
bumpmapping), you can reduce your poly count a lot more for an equivalent
"lack of popping". And this reduction more than makes up for the more
complex algorithm.

I do it the other way round, of course -- I generate my high-poly meshes
using a displacement map, so you already have a bumpmap suitable for use.
But I also have a high-poly -> displacement-map converter, so the artists
can generate the source data in whatever format they like -- disp. maps or
high-poly meshes.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination,
and does not in any way imply existence of the author.

> -----Original Message-----
> From: Dan Baker [mailto:dan...@mi...]
> Sent: 26 March 2002 22:32
> To: gda...@li...
> Subject: RE: [Algorithms] object-space bump mapping
>
> [snip]
From: Tom F. <to...@mu...> - 2002-03-27 11:50:08
When generating disp. maps from high-poly artwork (and also when generating
Unique Textures and other CPU-generated stuff) I had the same problem. My
solution was simply to get the artists to manually wrap another texture
layer over the whole model, and this is the layer that the poly/UT data
gets squashed into. Then they can do whatever stuff they like with the
other texture layers (mirror, tile, wrap, replicate, etc.) -- they all get
rendered onto this layer, which is guaranteed unique.

It's guaranteed unique by the preprocessor rendering (in software) each
triangle onto it and, as it does, checking that no previous triangle has
been rendered to those pixels. Otherwise it throws up an error and the
artist goes back and adjusts the texturing.

For common shapes like cars and humans, we have a texture with the usual
features (ears, nose, eyes, hands, waist, knees, etc.) pre-drawn on, so
that the texturing is a very quick job. It also means you can easily find
correspondence information between two unrelated humanoid meshes, so you
can mix'n'match body parts (and morph between them) to increase the variety
of people in the game. This also ties in closely with the "everything by
example" stuff that the Microsoft research guys are doing -- it lets the
artists make people by example, and then the software lerps and
cut'n'pastes between them.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination,
and does not in any way imply existence of the author.

> -----Original Message-----
> From: Doug Rogers [mailto:DR...@nv...]
> Sent: 27 March 2002 01:33
> To: Game Dev Algorithms (E-mail)
> Subject: RE: [Algorithms] object-space bump mapping
>
> [snip]
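Tom's uniqueness guarantee can be sketched as a tiny software rasterizer that stamps each triangle's id into a texel-ownership buffer and reports any texel claimed twice (a toy point-in-triangle rasterizer of my own; edge cases like texels exactly on a shared edge are glossed over):

```python
def check_unique_coverage(tris, width, height):
    """tris: list of triangles, each three (u, v) pairs in [0, 1].
    Returns the texels owned by more than one triangle -- a non-empty
    result is the error case that sends the artist back to fix the
    unwrap, as described above."""
    def edge(a, b, p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    owner = {}
    clashes = []
    for tid, tri in enumerate(tris):
        pts = [(u * width, v * height) for u, v in tri]
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        for y in range(int(min(ys)), int(max(ys)) + 1):
            for x in range(int(min(xs)), int(max(xs)) + 1):
                p = (x + 0.5, y + 0.5)  # sample at the texel centre
                s0 = edge(pts[0], pts[1], p)
                s1 = edge(pts[1], pts[2], p)
                s2 = edge(pts[2], pts[0], p)
                inside = (s0 >= 0 and s1 >= 0 and s2 >= 0) or \
                         (s0 <= 0 and s1 <= 0 and s2 <= 0)
                if inside:
                    if (x, y) in owner and owner[(x, y)] != tid:
                        clashes.append((x, y))
                    owner[(x, y)] = tid
    return clashes
```

Run over the whole unwrap at the target texture resolution, an empty clash list certifies the layer is unique and safe to bake object-space data into.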
From: Tom F. <to...@mu...> - 2002-03-27 17:50:22
Generating things on demand and caching them is what us Unique Texturing
freaks are on about. It rocks. :-)

A fairly simple hack I did to increase perceived detail without needing a
UT system was to use surface-local bumpmapping, but for scenery, I
precalculated the surface-local light vector(s) and stored them with each
mesh instance. So the actual mesh vertices do not have tangent vectors in
them. Each instance of the mesh has a separate vertex stream of
light_colour, light_vector (possibly multiple sets, but I found I only
needed one for good-enough lighting). This is the lighting contribution
from static lightsources on static objects, so it never changes.

This way you get all the nice bumpmapping stuff, but you're not burning
texture memory replicating all the bumpmaps, and you're not chewing bus
bandwidth and vertex shader power processing all the tangent-space stuff.
Very few people notice that the dynamic lights are not bumpmapped; they're
just done with normal vertex lighting.

Possible enhancements:

- For semi-dynamic lights, e.g. ones that you can break, turn on, turn
  off, whatever -- keep the tangent vectors around in swap-outable memory
  and, when the lights change, recalculate the vectors & amounts.
- Do this for middle & distance geometry. When you get close enough to see
  that dynamic lights are not being done correctly, swap in verts with
  tangent vectors and do it all properly. Only a small number of objects
  will require this.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination,
and does not in any way imply existence of the author.

> -----Original Message-----
> From: Brian Marshall [mailto:bri...@bl...]
> Sent: 27 March 2002 17:28
> To: gda...@li...
> Subject: RE: [Algorithms] object-space bump mapping
>
> C) Well, I wouldn't do it on your "background world"; I would do it for
> things like rocks, trees, etc., where you wouldn't really have tiling
> bump-maps anyway. And, yes, I would probably generate them by
> simplifying a high-poly mesh. That way, I'm making the maps procedurally
> and there are no issues with seams, etc.
>
> Thinking about this, you don't have to limit things to props,
> characters, etc. Think about compass-point-aligned walls. You could tile
> a normal map along a north wall, for example. For the same bump you'd
> need a different map for other directions -- east, etc. You could also
> use some sign changing in the code to reduce the combinations -- to use
> the same map on north/south, for example. Sure, there are more
> combinations, but it might be practical. For architecture, the success
> of Quake-style naturally aligned texturing leads me to believe that
> you'd get quite a lot of re-use in this case.
>
> Another thing that could help would be to only generate the normal map
> on demand and cache it. So you store a height field for the bump, then
> run your Sobel filter or whatever on it to generate and cache normal
> maps for specific directions as needed. Given that architectural
> features often follow similar alignments locally, I think this kind of
> cache could be quite successful. The spikes would be things like a
> north-east corridor leading out of an otherwise north/south/east/west
> aligned room.
>
> -Brian.
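The per-instance stream Tom describes is just the world-space light direction expressed in each vertex's tangent frame, computed once offline so no tangent data ships in the runtime vertices. A sketch (my own helper names; T/B/N assumed orthonormal and in world space):

```python
def precompute_light_stream(frames, light_dir_world):
    """frames: per-vertex (tangent, bitangent, normal) triples in world
    space. Returns the static per-vertex surface-local light vectors that
    get stored in the instance's extra vertex stream -- the tangent frames
    themselves can then be thrown away for static lighting purposes."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [(dot(light_dir_world, t),
             dot(light_dir_world, b),
             dot(light_dir_world, n))
            for t, b, n in frames]
```

Because the projection onto (T, B, N) is exactly what the vertex shader would otherwise do per frame, the runtime pixel shader sees identical inputs at zero per-frame cost.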
From: Brian M. <bri...@bl...> - 2002-03-27 19:27:33
T) A fairly simple hack I did to increase perceived detail without needing
a UT system was to use surface-local bumpmapping, but for scenery, I
precalculated the surface-local light vector(s) and stored them with each
mesh instance. So the actual mesh vertices do not have tangent vectors in
them. Each instance of the mesh has a separate vertex stream of
light_colour, light_vector (possibly multiple sets, but I found I only
needed one for good-enough lighting). This is the lighting contribution
from static lightsources on static objects, so it never changes.

B) Nice - I've found the same thing. The eye/brain often doesn't care
whether the effect is correct or not for lighting, so long as the elements
it's expecting are there.

I used this in some experiments with shader LODding: I picked a fixed
direction for bump lighting in the distance, and used it to bake the bump
effect into the lower mipmaps of the colour texture. Then you've still got
what looks like the bump effect in the distance, after you've switched all
bump effects off.

The biggest problem is that since you want to avoid the extra shader work,
you change shaders to the simpler one in a discrete step, but the actual
object is changing to the baked version on a per-texel basis due to the
mip-mapping hardware. I found that so long as I kept using the bump shader
until the object was certain to be using the baked version, it looked quite
good. You got interference between the baked and run-time bump while
changing over, but this didn't look too bad.

-Brian.
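Brian's bake can be sketched per texel: for the distant mip levels, scale each colour texel by the diffuse term from a fixed light direction against the (already decoded) normal texel, so the mip chain itself fades from live bump lighting to pre-lit colour. A sketch under my own naming, with an `ambient` floor so baked texels never go fully black:

```python
def bake_bump_into_mip(color_mip, normal_mip, fixed_light_dir, ambient=0.2):
    """Darken each colour texel by a fixed-direction diffuse term, freezing
    the bump lighting into a lower mip level of the colour texture.
    color_mip / normal_mip: parallel lists of texels; normals are assumed
    to be decoded unit vectors."""
    baked = []
    for rgb, n in zip(color_mip, normal_mip):
        ndotl = max(0.0, sum(a * b for a, b in zip(fixed_light_dir, n)))
        shade = min(1.0, ambient + (1.0 - ambient) * ndotl)
        baked.append(tuple(c * shade for c in rgb))
    return baked
```

Only the lower mips get this treatment; the top mips stay unlit so the close-up bump shader still does the real work, which is exactly why the crossover region Brian mentions shows both at once.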
From: Tom F. <to...@mu...> - 2002-03-28 09:54:01
If using UT, then every surface in the world has a unique texel anyway, and
it's pretty simple to use object- (or even world-) space bumpmapping
everywhere (that isn't skinned).

A nice feature of using UT is that you can do blending operations between
heightfields, then convert the post-composite heightfield to a normal map,
then transform into world/object space. This means things like bullet holes
and footprints can use a MIN operation between their heightfield and the
original surface heightfield, which is what you really want for these
(e.g. a footprint in soil).

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination,
and does not in any way imply existence of the author.

> -----Original Message-----
> From: Conor Stokes [mailto:cs...@tp...]
> Sent: 28 March 2002 03:55
> To: gda...@li...
> Subject: Re: [Algorithms] object-space bump mapping
>
> I wonder if using UT + object-space bump mapping on small re-usable map
> sections -- think a segment of hallway with a pipe etc., nice curved and
> detailed roof and walls that would be repeated -- would be viable?
>
> Obviously for large unique areas, you need tangent space for it to be
> viable.
>
> Still, it may be worth the speed-up of using a simple transform pipe.
>
> Conor Stokes
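The MIN composite plus heightfield-to-normal conversion Tom describes can be sketched directly. This uses central differences rather than the full Sobel mentioned earlier in the thread, and `scale` controlling apparent bump depth is my own knob, not something from the discussion:

```python
import math

def stamp_min(surface, decal, x0, y0):
    """Composite a bullet-hole/footprint heightfield into the surface
    heightfield with a MIN, so the decal only ever digs in (a footprint in
    soil never raises the ground). Heights are row-major lists of floats."""
    for dy, row in enumerate(decal):
        for dx, h in enumerate(row):
            surface[y0 + dy][x0 + dx] = min(surface[y0 + dy][x0 + dx], h)

def heights_to_normals(hf, scale=1.0):
    """Convert the post-composite heightfield to unit normals with central
    differences, clamped at the borders. The result would then be rotated
    into object/world space and quantized into the normal map."""
    h, w = len(hf), len(hf[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = hf[y][min(x + 1, w - 1)] - hf[y][max(x - 1, 0)]
            dy = hf[min(y + 1, h - 1)][x] - hf[max(y - 1, 0)][x]
            n = (-dx * scale, -dy * scale, 1.0)
            l = math.sqrt(sum(c * c for c in n))
            out[y][x] = tuple(c / l for c in n)
    return out
```

Keeping the source data as heightfields until the last step is what makes the MIN trick possible; once converted to normals, two bump layers can no longer be composited this way.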
From: Jon W. <hp...@mi...> - 2002-03-26 22:59:41
> If you put the normals in default pose space (ie animation matrices are
> all identity), you can skin the light vectors, but skip the pose space ->
> texture space step altogether.

I believe the light vector interpolation will result in "crooked" light
paths over the skin when your pose includes twist. I could be wrong,
though, which would be great. Or it may not be that noticeable. You'll also
have to be careful about interpolating across areas where there's a lot of
directional change between two vertices, as we're not slerping quaternions
here...

Basically, I think fixing it up in the model to force tessellation is the
best solution here.

Cheers,

/ h+
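Jon's caution is easy to quantify: linearly interpolating two unit light vectors, which is all the rasterizer does between vertices, yields a shortened, non-slerped vector, and the error grows with the angle between the endpoints. A small numeric illustration (my own, not from the thread):

```python
import math

def lerp(a, b, t):
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def length(v):
    return math.sqrt(sum(c * c for c in v))

# Two unit light vectors 90 degrees apart, as might occur across a twisted
# joint. Halfway across the span the interpolated L has length ~0.707, so
# an unrenormalized dot product underlights the middle of the triangle --
# and the interpolated *direction* cuts the chord rather than the arc,
# which is the "crooked light path" effect.
a = (1.0, 0.0, 0.0)
b = (0.0, 1.0, 0.0)
mid = lerp(a, b, 0.5)
shortfall = 1.0 - length(mid)  # about 0.293 for a 90-degree span
```

Renormalizing in the pixel shader fixes the magnitude but not the chord-vs-arc direction error, which is why forcing extra tessellation in the problem areas is the suggested fix.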