gdalgorithms-list Mailing List for Game Dev Algorithms (Page 1412)
From: <ro...@do...> - 2000-08-15 15:33:08
|
Doug Chism wrote: >Does anyone know of any good code sources for fast clipping of >lines/triangles against OBBs? I guess the idea would be to pull the planes >out of the OBB and clip against them - something along that line of thought? >If you are keeping track of the Center, Corner, Axes, Normalized Axes, and >Half Size, whats the fastest way to get the 6 plane equations of the OBB >from that ( or if cheaper/faster clipping approach available is there info >on that? ) > Not sure exactly what is your distinction between "Axes" and "Normalized Axes", but let's suppose that by Normalized Axes you mean the three mutually orthogonal unit vectors A1, A2, A3 that give the orientation of the box, and you have these in world coordinates. These vectors are also, of course, unit normals to the six faces of the box. Similarly, "Center" means the world coordinate components of the center point of the box. And let w1, w2, w3 be the corresponding half-widths. Then the equations of the six face planes of the box are Ai dot P = (Ai dot Center) [+-] wi i=1,2,3. where P= (x,y,z) gives the world coordinates of a general point on the plane. Note that the vector Ai points to the outside of the box on the plane you get with the + sign and -Ai points to the outside of the box on the plane that you get with the - sign. If you do not already have nice software clipping code for the axis-aligned unit cube, or even if you do have it but "fast" is very important, then I would suggest ignoring Nicholas Serres' suggestion to transform the tri vertices to box coordinates. Rather I would just use a 3D Sutherland-Hodgman implementation for the triangle clipping. You can find pseudo code for the 2D S-H in Foley et al. Generalization to 3D is a pretty straightforward application of your elementary 3D vector function library. 
S-H consists just of successively clipping all the edges of the polygon against each of the face planes of the box, keeping track of the fact that your clipped triangle can become a quadrilateral, pentagon, or hexagon in the process. The pseudo code in Foley et al should be clear enough on that part. So how do you clip an edge against a plane? I take the trouble to address this elementary geometry problem only because there is a possible optimization that doesn't exist for the general convex clipping volume to which S-H applies. This optimization results from the fact that the clipping volume is a rectangular box, so the clipping planes fall into three parallel pairs. Letting V0, V1 be the position vectors of the endpoints of the edge, and plugging the parametric expression for the line containing the edge P = V0 + t(V1 - V0) into the equations for the two box faces perpendicular to the 1-axis, then doing a little elementary vector algebra manipulation gives

tplus = (A1 dot Center + w1 - A1 dot V0) / (A1 dot V1 - A1 dot V0)
tminus = (A1 dot Center - w1 - A1 dot V0) / (A1 dot V1 - A1 dot V0)

for the parameter values where the line crosses the two parallel planes. Note that the four dot products and sums and differences to be computed in the two expressions are the same (except for the sign of w1), so they only have to be computed once for the two planes. Moreover, if you are doing this for lots of triangles and edges, note that A1 dot Center is just a property of the box and so the same for all the triangles and edges. Now you just have to compare tplus and tminus with 0 and 1 and each other to get the edge as clipped by the two parallel planes: First, it is easy to see that if A1 dot (V1 - V0) = A1 dot V1 - A1 dot V0 is positive, then tplus >= tminus, else tplus <= tminus. Considering the case that A1 dot (V1 - V0) is positive, we have the possibilities:

tminus <= 0 < 1 <= tplus: Edge lies entirely between the two planes, so is not clipped.
tminus < tplus < 0: Edge lies entirely outside the box, so drop it.
1 < tminus < tplus: Edge lies entirely outside the box, so drop it.
tminus < 0 <= tplus <= 1: Clipped edge is [V0, V0 + tplus(V1 - V0)].
0 <= tminus < tplus <= 1: Clipped edge is [V0 + tminus(V1 - V0), V0 + tplus(V1 - V0)].
0 <= tminus <= 1 < tplus: Clipped edge is [V0 + tminus(V1 - V0), V1].

I leave the case that A1 dot (V1 - V0) is negative as an exercise for the reader. Same for understanding the meaning of the singular case that this dot product is zero. Take the result of the above clipping and clip that segment against the two A2 planes, then iterate again for the two A3 planes. The same optimization can be worked into the full 3D S-H algorithm for the triangle clipping. |
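The two-plane step above translates almost directly into code. A minimal sketch (the `Vec3` type and the `clipEdgeBySlab` name are mine, not from the post; it also folds in the negative-denominator case left as an exercise, by swapping the two parameter values):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Clip the segment [v0, v1] against the slab between the two box faces
// perpendicular to unit axis a (the planes a.P = aDotCenter + w and
// a.P = aDotCenter - w). Returns false if the segment lies entirely
// outside the slab; otherwise writes the clipped parameter interval
// [t0, t1] with 0 <= t0 <= t1 <= 1.
static bool clipEdgeBySlab(const Vec3& v0, const Vec3& v1,
                           const Vec3& a, float aDotCenter, float w,
                           float& t0, float& t1) {
    float d0 = dot(a, v0);
    float d1 = dot(a, v1);
    float denom = d1 - d0;                 // a . (v1 - v0)
    if (std::fabs(denom) < 1e-12f) {
        // Segment parallel to the slab: inside iff v0 lies between the planes.
        if (d0 > aDotCenter + w || d0 < aDotCenter - w) return false;
        t0 = 0.0f; t1 = 1.0f;
        return true;
    }
    // Crossing parameters for the two planes; note the shared dot products.
    float tplus  = (aDotCenter + w - d0) / denom;
    float tminus = (aDotCenter - w - d0) / denom;
    float lo = tminus, hi = tplus;
    if (lo > hi) { float tmp = lo; lo = hi; hi = tmp; }  // denom < 0 case
    t0 = lo > 0.0f ? lo : 0.0f;
    t1 = hi < 1.0f ? hi : 1.0f;
    return t0 <= t1;
}
```

Running this against the three slabs in turn, shrinking [t0, t1] each time, gives the fully clipped segment.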
From: Pallister, K. <kim...@in...> - 2000-08-15 14:22:49
|
I'll be going. I'm giving a talk there too. I went last year. For a first year conference, it was pretty good. A majority of students/hobbyists, but a fair mix of pros as well. Kim Pallister We will find a way or we will make one. - Hannibal > -----Original Message----- > From: Graham S. Rhodes [mailto:gr...@se...] > Sent: Monday, August 14, 2000 2:35 PM > To: gda...@li... > Subject: [Algorithms] XGDC conference > > > Hello, > > I am just curious. How many here, if any, are planning to > attend the XGDC > conference hosted by Xtreme Games? Anyone go last year? > > I need to decide whether to push hard to get a paper done before > mid-September or just focus on preparing something for GDC. > > Graham Rhodes > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Peter D. <pd...@mm...> - 2000-08-15 10:05:39
|
> Also, could you briefly explain the reasoning of the normal-generating > algorithm you suggested as follows? A high-level, general idea is fine. :-) Brief explanation: The surface partial derivatives are approximated with central differences; the normal is computed by normalizing the cross product of the tangential vectors. More elaborate explanation: Assume that the height field represents a regularly sampled smooth surface y = F(x, z). To compute the tangential vectors to this surface we need to approximate the partial derivatives dF/dx and dF/dz. I have chosen to use central differences dF/dx ~= ( F(x+a) - F(x-a) ) / (2 a) instead of the more widely used forward differences dF/dx ~= ( F(x+a) - F(x) ) / a because the central difference is exact for degree 2 polynomials (ax^2+bx+c), while the forward/backward version is exact for linear functions (ax+b), and heightfields (including bumpmaps) are much better approximated by the former. So, we compute nx = height(x-1, z) - height(x+1, z) nz = height(x, z-1) - height(x, z+1) (note that the signs are reversed) and dF/dx ~= - nx / 2a dF/dz ~= - nz / 2a where a is the distance in world units between two adjacent vertices of the grid, i.e. the grid spacing. The tangential vectors are vx = (1, - nx / 2a, 0) vz = (0, - nz / 2a, 1) and their cross product is v = (nx / 2a, 1, nz / 2a). The only thing left is to normalize v. Note that to avoid the two divisions we may compute v' = 2av = (nx, 2a, nz) instead, because it's going to be normalized anyway. Hope this helps. -- Peter Dimov Multi Media Ltd. 
(code left for reference)

> > void VertexNormal(
> >     Vector3& normal,
> >     long col,
> >     long row,
> >     float gridSpacing)
> > {
> >     float nx, nz, denom;
> >
> >     if (col > 0 && col < m_fieldSize - 1)
> >     {
> >         nx = GetElev(col - 1, row) - GetElev(col + 1, row);
> >     }
> >     else if (col > 0)
> >     {
> >         nx = 2.0f * (GetElev(col - 1, row) - GetElev(col, row));
> >     }
> >     else nx = 2.0f * (GetElev(col, row) - GetElev(col + 1, row));
> >
> >     if (row > 0 && row < m_fieldSize - 1)
> >     {
> >         nz = GetElev(col, row - 1) - GetElev(col, row + 1);
> >     }
> >     else if (row > 0)
> >     {
> >         nz = 2.0f * (GetElev(col, row - 1) - GetElev(col, row));
> >     }
> >     else nz = 2.0f * (GetElev(col, row) - GetElev(col, row + 1));
> >
> >     gridSpacing *= 2.0f;
> >
> >     denom = 1.0f / sqrt(nx * nx + gridSpacing * gridSpacing + nz * nz);
> >
> >     normal.x = nx * denom;
> >     normal.y = gridSpacing * denom;
> >     normal.z = nz * denom;
> > } |
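As a cross-check of the derivation, here is a self-contained sketch of the interior-vertex case as a free function over a plain height array rather than the class method quoted above (names and signature are mine, for illustration):

```cpp
#include <cassert>
#include <cmath>

// Central-difference vertex normal for an interior height-field vertex,
// following Peter's derivation: v' = (nx, 2a, nz), then normalize.
// h is a row-major grid of size*size height samples; a is the grid
// spacing in world units. col and row must be in [1, size-2].
static void vertexNormal(const float* h, int size, int col, int row,
                         float a, float n[3]) {
    float nx = h[row * size + (col - 1)] - h[row * size + (col + 1)];
    float nz = h[(row - 1) * size + col] - h[(row + 1) * size + col];
    float ny = 2.0f * a;
    float inv = 1.0f / std::sqrt(nx * nx + ny * ny + nz * nz);
    n[0] = nx * inv;
    n[1] = ny * inv;
    n[2] = nz * inv;
}
```

On a ramp h(x) = x with unit spacing this yields (-1, 1, 0)/sqrt(2), exactly perpendicular to the slope, as expected.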
From: Tom F. <to...@mu...> - 2000-08-15 08:49:52
|
Hi Chris - plain text posting please. Ta. Your first question seems to be a general VIPM question. Basically, you find the error associated with collapsing all possible edges in your mesh, then pick the one with the lowest error, and collapse it. To collapse an edge, you simply move one vertex on the edge to the other, and remove degenerate tris. To expand, you do the opposite. Then recalculate all edge error values that have changed (since the collapse will have changed some of them), and again collapse the lowest-error edge. Repeat until there are no triangles left! This is not locked to a regular grid, or indeed any particular topology - except that most implementations don't deal with edges that have more than two triangles using them (though they do handle fewer than two).

I assume you're talking about VIPM rendering of a terrain, rather than just discrete objects. Be aware that things are not cut and dried for this subject - as Charles says, there are a number of possible ways to VIPM terrain. Because VIPM is best when it takes a fixed "chunk" of something, then processes it (which is what makes it ideal for discrete objects), it needs a certain adaptation to a continuous thing like a landscape.

The method I favour is to start with a regular heightfield grid. In fact, any tessellation of your landscape data is just fine (including non-heightfield data) - the concepts are the same, it's just simpler to talk about the regular heightfield-type grid. Start with as high a resolution as you want (i.e. the maximum detail that anyone will see). Then split this grid into chunks of a certain size (8x8 seems like a "good" number, but it all depends on your app). Add "flanges" - small polygonal edges that extend the chunk, sloping downwards underneath the neighbouring chunk (and so not normally visible - they get Z-buffered away), and VIPM each chunk.

The flanges make sure you can have adjacent chunks at different levels of detail and still ensure there are no cracks between them. Remember - we're talking about cracks of only a few pixels wide, if that, so it may be kludgy, but the flange system will do just fine. And the really nice thing about the flange system, as opposed to the other possibilities that Charles mentions, is that there is no need to communicate any information at all between chunks at runtime, which reduces CPU time even further. The extra overdraw you suffer is pretty minimal.

The variation of this scheme is to "mipmap" or "quadtree" your levels of detail. If you have a view distance that is colossal, then whatever size chunk you pick, it will always be wrong in some cases. Either the chunks are too large (physically) and the VIPM is too coarse to reduce detail well at different distances. Or they are too small physically, and when you draw to the distance, you are collapsing them down to the maximum - two tris - but then drawing tens of thousands of them. The quadtree variation is similar to the quadtree texturing that some people use, and indeed they do very similar things, and can easily be tied together. You have fairly small chunks with flanges and you VIPM them as above. Then you take groups of chunks (classically you take 4 chunks together, but you could make it coarser and take 16 chunks together or more), combine them into one mesh (bin the flanges between them, but still have flanges at the new edges), and VIPM it as a single mesh. Do this hierarchically, and according to distance, switch to whichever quadtree level you want. There is also a good argument for controlling the VIPM collapse order of each level so that the transition between drawing a level, and drawing its four "child" levels is done when each uses the same mesh, so that the transition is seamless. Tom Forsyth - Muckyfoot bloke. Whizzing and pasting and pooting through the day. 
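The collapse loop described above (find the cheapest edge, merge its endpoints, drop degenerate triangles, repeat) can be sketched as follows. Edge length stands in for the error metric purely for illustration - a real VIPM tool would use a proper geometric error - and this version naively rescans all edges each pass rather than updating incrementally:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct V3 { float x, y, z; };

static float dist(const V3& a, const V3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// One greedy collapse step: find the cheapest edge, move one endpoint
// onto the other, and drop triangles that became degenerate. tris is an
// indexed triangle list; returns false when no edge is left to collapse.
static bool collapseCheapestEdge(std::vector<V3>& verts,
                                 std::vector<int>& tris) {
    int bestA = -1, bestB = -1;
    float bestErr = 0.0f;
    for (size_t t = 0; t + 2 < tris.size(); t += 3) {
        for (int e = 0; e < 3; ++e) {
            int a = tris[t + e], b = tris[t + (e + 1) % 3];
            float err = dist(verts[a], verts[b]);
            if (bestA < 0 || err < bestErr) {
                bestErr = err; bestA = a; bestB = b;
            }
        }
    }
    if (bestA < 0) return false;
    // Collapse: every reference to bestB becomes bestA.
    for (size_t i = 0; i < tris.size(); ++i)
        if (tris[i] == bestB) tris[i] = bestA;
    // Remove triangles that now have two equal indices.
    std::vector<int> kept;
    for (size_t t = 0; t + 2 < tris.size(); t += 3) {
        int a = tris[t], b = tris[t + 1], c = tris[t + 2];
        if (a != b && b != c && a != c) {
            kept.push_back(a); kept.push_back(b); kept.push_back(c);
        }
    }
    tris.swap(kept);
    return true;
}
```

Recording the sequence of (removed vertex, target vertex) pairs as you run this loop to completion gives you exactly the collapse order a VIPM renderer plays back and reverses at runtime.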
-----Original Message----- From: Chris Brodie [mailto:Chr...@ma...] Sent: 15 August 2000 00:34 To: 'gda...@li...' Subject: [Algorithms] Understanding VIPM's I was going to send this e-mail directly to Mr Bloom for help but thought that other lurkers here might be able to benefit from the answer I receive. I've been reading Charles Bloom's website documentation of VIPM's and am starting to get a little more excited about them now that I realise how much it can simplify the rendering pipeline amongst other things... I have some question's though: -From what I've read I believe that the VIPM mesh would resemble something along the lines of a Triangulated Irregular Network that is locked to a grid. I don't however understand the edge collapsing and vertex insertion routines. I understand the error array that would come with each patch(chunk) but just not what to do with it. -The other thing I don't understand is the edge matching for terrain chunks. Under something like a triangle or quad ROAM implementation the edge mapping is easy to visualise but I cant quite grasp how the edges get formed. Does the chunk start at minimum resolution, ie a square then get built up as the error is implemented and the surface subdivided. If this is so at what point is an edge broken. If anyone is interested in helping me understand this I'm happy to write up a pretty graphical document that'll explain all this to other people new to the technique (partly so you guys don't have to answer the question again, partly so I know I understand the technique properly). Many thanks Chris Brodie |
From: Dave A. <da...@ga...> - 2000-08-15 06:23:45
|
Hi Graham, I went last year, and I'm planning on attending (and speaking) this year as well. Last year's conference had some obvious rough edges, but it was fun and many of the lectures were very useful. In general, I think that the XGDC has some good potential, especially as more experienced/qualified people are willing to present. Dave Astle ------------------------------------------------------------------------- Chairman and COO, GameDev.net, LLC: http://www.gamedev.net/ Lead Programmer, Myopic Rhino Games: http://www.myopicrhino.com Software Engineer, ROI Systems: http://www.roisys.com ------------------------------------------------------------------------- "Graham S. Rhodes" wrote: > > Hello, > > I am just curious. How many here, if any, are planning to attend the XGDC > conference hosted by Xtreme Games? Anyone go last year? > > I need to decide whether to push hard to get a paper done before > mid-September or just focus on preparing something for GDC. > > Graham Rhodes > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Pierre T. <p.t...@wa...> - 2000-08-15 06:22:35
|
> I agree... I'd definitely need to understand an algorithm before I can > debug it, but precision in algorithms is important (of course), and I find > it easier to be precise in C++ than in the universal language of > mathematics (obscure movie reference). Ok, why not. I'd have said the opposite anyway :) The code version is easier to understand IMHO, but I don't see it more precise. Well, whatever. > I think even with comments and indenting kept in there it wouldn't be that > readable. I'm a big fan of variables that mean something. :) > I'll send you the article offline (it's like I walked to the library to > make a copy for you, and then walked over to France to hand it to you, > right?) Yeah! Great! Many thanks! (...a 40 MB download, whoops! No problem, I've got a fast connection) As a matter of fact, there's some fresh meat inside. Rabbitz is the only one for example to relate the As matrix to Lagrange multipliers. Not really useful I guess, but worth writing. And he tells things in a clearer way than Gilbert for sure. Now I think I probably got it, by "brute-forcely" writing down all the possible matrices, determinants and cofactors: I must admit that formula works :) But I'm still very upset about it. I must find some more time to work on it, but I think for the moment I'll just go ahead. > using your notation: > > Delta(j)(X + yj) = Sigma(i in IX) (Delta(i)(X) * ((yi | yk) - (yi | yj))) > > Delta(j)(X + yj) = Sigma(i in IX) (Delta(i)(X) * (yi | (yk - yj))) > > I don't think you can do that last step. Ok, let's fix that, because it may be important. Either it works and nobody noticed it before (hence, I'm very happy), or it does not work and it probably means I'm not using the right delta formula, which would explain why I can't re-derive it (hence, I'm very happy as well because I found my bug :) Let's get rid of the Sigma, painful to write in ASCII. 
Am I wrong, or do we have the following pattern:

D(j) = d(0) * y0|(yk - yj) + d(1) * y1|(yk - yj) + d(2) * y2|(yk - yj) + ......

with d(i) = scalar values, yi = vectors.

The dot-product is compatible with scalar multiplication: (rX)|Y = r(X|Y). Hence

D(j) = (d(0)*y0)|(yk - yj) + (d(1)*y1)|(yk - yj) + (d(2)*y2)|(yk - yj) + ......

The dot-product is commutative: X|Y = Y|X. Hence

D(j) = (yk - yj)|(d(0)*y0) + (yk - yj)|(d(1)*y1) + (yk - yj)|(d(2)*y2) + ......

The dot-product is distributive: X|(Y+Z) = X|Y + X|Z. And since yk and yj are fixed (they don't depend on i in the Sigma expression):

D(j) = (yk - yj)| [ (d(0)*y0) + (d(1)*y1) + (d(2)*y2) + ...... ]

Unless I'm blind and totally missing a key point somewhere, here it is. What do you think ? Pierre |
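Since the manipulation uses only bilinearity of the dot product, it can at least be sanity-checked numerically. A throwaway sketch (names and values are mine):

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y, z; };

static float dotv(const Vec& a, const Vec& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Left-hand side of the step: Sigma(i) d(i) * (y_i | (y_k - y_j)).
static float lhs(const float* d, const Vec* y, int n, const Vec& diff) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += d[i] * dotv(y[i], diff);
    return s;
}

// Right-hand side, with the fixed vector pulled out of the sum:
// (y_k - y_j) | Sigma(i) d(i) * y_i.
static float rhs(const float* d, const Vec* y, int n, const Vec& diff) {
    Vec acc = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < n; ++i) {
        acc.x += d[i] * y[i].x;
        acc.y += d[i] * y[i].y;
        acc.z += d[i] * y[i].z;
    }
    return dotv(diff, acc);
}
```

For any choice of scalars d(i) and vectors y(i), the two sides agree to floating-point precision, so the step itself is sound; any disagreement in the delta recurrence must come from elsewhere.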
From: Klaus H. <k_h...@os...> - 2000-08-15 03:56:26
|
----- Original Message ----- From: Pai-Hung Chen <pa...@ac...> To: <gda...@li...> Sent: Monday, August 14, 2000 11:55 PM Subject: Re: [Algorithms] Terrain Normals > > That's simply a greenish-blue alpha-polygon (no texture). It basically is > > one big square polygon that cuts through the whole terrain. The reason why > > that does not look too crappy is, because you see the lit terrain beneath > > the water. > > (1) Is it because "the terrain beneath the water is lit" that make the water > look good? Yes, because it looks as if the lit terrain beneath the water adds color-variations to the water surface. Normally, however, I'd use an alpha-blended texture (probably animated) for the water surface. The only reason why there's no texture in the screenshots is because I'm too lazy to write a portable image file reader for OpenGL. > The terrain is lit either with or without lightmap. Does that > make a difference? Yes it does make a difference. Some time ago, I also tried to use Direct3D's/OpenGL's lighting engine to render a *rough* non-LOD terrain mesh. I wasn't able to get rid of those Gouraud shading artifacts (as I already expected), even though the vertex normals were correct. Since then I'm using pre-lit surface textures, which completely eliminate these artifacts. In addition, you can use Phong shading and other nice lighting tricks. Also, pre-lit textures can make the terrain look as if there was a very high triangle count. > (2) How do you make the terrain mesh? It looks like ROAM. It's not ROAM. It's an implementation of S. Röttger's "Real-Time Generation of Continuous Levels of Detail for Height Fields" algorithm. The implementation is sort of stupid though, as it uses single triangle fans (glVertex3f <g>) to render the mesh. 
I could have used a faster method, but I intend to make the source code publicly available in order to show an exact implementation of Röttger's paper, and things like vertex-buffered indexed triangle lists would just destroy the purpose of the sample. Which leads me to a question... Maybe someone can answer this off-line. Is there a better *portable* timer than clock()? HTH, Niki |
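On the timer question: from today's perspective the portable answer is std::chrono (C++11, so long after this thread), which unlike clock() measures wall time rather than CPU time and is monotonic when you use steady_clock. A minimal wrapper:

```cpp
#include <cassert>
#include <chrono>

// A portable, monotonic timer built on std::chrono::steady_clock.
// Unlike clock(), this measures elapsed wall time and cannot jump
// backwards when the system clock is adjusted.
class Timer {
public:
    Timer() : start_(std::chrono::steady_clock::now()) {}
    void reset() { start_ = std::chrono::steady_clock::now(); }
    double seconds() const {
        return std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start_).count();
    }
private:
    std::chrono::steady_clock::time_point start_;
};
```

Typical usage: construct a Timer at frame start, read seconds() at frame end for the frame time.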
From: <ro...@do...> - 2000-08-15 03:47:19
|
Nicolas Serres wrote: >I personally store OBBs as : >- center >- normalized axis >- half width > >..... >If you are interested in plane equation, you almost have it in that format; >axis being plane normals; you have (a,b,c) of ax+by+cz+d=0. I prefer to >store them with normalized axis because I consider it is the best compromise >for my different applciations of OBBs. But if you almost always work with >the planes of the obb, you might want to replace box center+half width with >"d" values. Actually the "d" values of the equations of the planes of the box faces are the negatives of the six values (center dot normalized axis) [+-] half width where there are three axes and three corresponding half widths, assuming that you use "normalized axis" as the unit normal to each of the two box faces perpendicular to it. > It might be less easier to use if your OBB is frequently >transformed. > >But anyway, if you already have a functional software clipping pipeline, the >easiest way is to express your world coordinates in bbox coordinates (the >transformation is straightforward if you have obb axes). Using half >width/normalized axis makes you clip against (-half_width,+half_width). >Using "tweaked" (1 / half_width) non normalized axis makes you clip against >(-1,+1). The latests seems easier, especially if you already have a working >homogenous clipping pipeline. You just have to set w to 1, and care about >the (z<-w) issue (and not z<0 as usal)... It even allows you to use >extremely efficiently some buggy SIMD instructions that are not well suited >for classical frustrum clipping, but are perfectly suited for that. > >If you do that, you will need to store extra information (which is basically >the inverse of the world->box space transformation previously stored) to be >able to transform your clipped results into viewport space (which is, I >suppose your final goal). 
As my code that did the clip was >ultra-experimental-debug-stuff, I did a brute force matrix inversion, and I >definitely don't recommend it! > "Brute force"?? For inverting a matrix consisting of a rotation and translation? All you have to do is transpose a 3x3 and multiply a 3x3 by a 3x1. Nothing to solve, and nothing I would call "brutish". Actually, when I do stuff like that in my code, I don't use matrices at all, (and so have none to invert) but just the functions of my fast elementary vector algebra library. For example, given a world coordinate vector V of a vertex, its 1-component in box coordinates is just (V - box center) dot normalized axis1 and similarly for the other two box coordinates of V. Going the other way, given a point in box coordinates (B1, B2, B3), its world x, y, z coordinates are the x, y, z coordinates of B1*normalized axis1 + B2*normalized axis2 + B3*normalized axis3 + box center, where I am making the natural assumption that you have the normalized axis vectors and box center in world coordinates. Matrices are handy in some circumstances, especially when you have to concatenate transformations, and you do have to use them as arguments to many 3D API calls, but don't get stuck in the rut of thinking you HAVE to construct them and invert them and whatnot to effect EVERY mapping that comes up in the computational geometry of your application. For lots of short, simple stuff, thinking in terms of your elementary vector algebra library, which is much closer to the geometric fundamentals, is much better than thinking in matrices. You end up doing exactly the same operations, but it is much easier to figure out what those operations need to be when you keep the vector geometric fundamentals in your mind, and just write them down in the form of calls to your vector algebra library. 
>----- Original Message ----- >From: "Doug Chism" <dc...@d-...> >To: <gda...@li...> >Sent: Monday, August 14, 2000 10:20 PM >Subject: [Algorithms] Clipping against OBBs > > >> Does anyone know of any good code sources for fast clipping of >> lines/triangles against OBBs? I guess the idea would be to pull the planes >> out of the OBB and clip against them - something along that line of >thought? >> If you are keeping track of the Center, Corner, Axes, Normalized Axes, and >> Half Size, whats the fastest way to get the 6 plane equations of the OBB >> from that ( or if cheaper/faster clipping approach available is there info >> on that? ) >> >> Doug Chism >> >> >> _______________________________________________ >> GDAlgorithms-list mailing list >> GDA...@li... >> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > >_______________________________________________ >GDAlgorithms-list mailing list >GDA...@li... >http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
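The vector-library transforms described above, written out as free functions (the `Point3` type and function names are illustrative, not from the post):

```cpp
#include <cassert>
#include <cmath>

struct Point3 { float x, y, z; };

static float dot3(const Point3& a, const Point3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// World -> box: b_i = (v - center) . axis_i, where axis[0..2] are the
// box's orthonormal axes expressed in world coordinates.
static Point3 worldToBox(const Point3& v, const Point3& center,
                         const Point3 axis[3]) {
    Point3 r = { v.x - center.x, v.y - center.y, v.z - center.z };
    Point3 b = { dot3(r, axis[0]), dot3(r, axis[1]), dot3(r, axis[2]) };
    return b;
}

// Box -> world: v = b1*axis1 + b2*axis2 + b3*axis3 + center. Note that
// no matrix is ever built or inverted; the transpose relationship
// between the two mappings is implicit in the code.
static Point3 boxToWorld(const Point3& b, const Point3& center,
                         const Point3 axis[3]) {
    Point3 v = {
        center.x + b.x * axis[0].x + b.y * axis[1].x + b.z * axis[2].x,
        center.y + b.x * axis[0].y + b.y * axis[1].y + b.z * axis[2].y,
        center.z + b.x * axis[0].z + b.y * axis[1].z + b.z * axis[2].z
    };
    return v;
}
```

A quick round-trip through both functions returns the original point, which is exactly the "inversion" the brute-force matrix approach was computing the hard way.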
From: Alex D'A. <al...@fu...> - 2000-08-15 01:50:15
|
I'll be going again this year, along with some of the other people here. I went last year and it was pretty cool. Jonathan Blow gave a great talk on lighting in games, Reichart von Wolfshild had an incredible talk on porting games (which really covered all sorts of business aspects), and Andre gave a very polished talk on fuzzy logic. There were a bunch of other lectures, but those stick out in my mind. As a side bonus, the vintage computer fest was going on the other side of the convention center. -----Original Message----- From: gda...@li... [mailto:gda...@li...]On Behalf Of Graham S. Rhodes Sent: Monday, August 14, 2000 2:35 PM To: gda...@li... Subject: [Algorithms] XGDC conference Hello, I am just curious. How many here, if any, are planning to attend the XGDC conference hosted by Xtreme Games? Anyone go last year? I need to decide whether to push hard to get a paper done before mid-September or just focus on preparing something for GDC. Graham Rhodes _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Conor S. <cs...@tp...> - 2000-08-15 01:20:53
|
> > You could always avoid using normals. Often you can use various tricks to > > get nice lighting without them. > > Conor, > > I'd like to know some of these tricks. Where can I find more info? Just search around. Projective texturing for spot lights for instance is a common one. Quake2 (and 1 I think) used lighting from the surface below the character. Another common one is using a 3d texture to grab vertex light values from. (I think messiah used this). (This meant shadows as well). Or, you could just use light distance in the calculation to avoid proper light calcs. These are all pretty much hacks, but they work well enough to make it so no one knows the difference. Conor Stokes |
From: Nicolas S. <nic...@ch...> - 2000-08-15 00:06:31
|
I personally store OBBs as:
- center
- normalized axis
- half width

This is not the best choice for size, but I consider it gives the best speed compromise for my applications of OBBs. Although my primary concern was culling against planes of the view frustum and occluding volumes, I was also using them for test purposes as you do. There is a pretty nice algorithm here http://www.cs.unc.edu/~hoff/research/vfculler/boxplane.html. This deals with testing an OBB against planes, so not your application, but I really like his approach, and it might apply to your case too, so it's worth reading. I used techniques based on this one very aggressively and it is _really_ effective. If you are interested in plane equations, you almost have it in that format; axis being plane normals; you have (a,b,c) of ax+by+cz+d=0. I prefer to store them with normalized axis because I consider it is the best compromise for my different applications of OBBs. But if you almost always work with the planes of the obb, you might want to replace box center+half width with "d" values. It might be harder to use if your OBB is frequently transformed. But anyway, if you already have a functional software clipping pipeline, the easiest way is to express your world coordinates in bbox coordinates (the transformation is straightforward if you have obb axes). Using half width/normalized axis makes you clip against (-half_width,+half_width). Using "tweaked" (1 / half_width) non-normalized axis makes you clip against (-1,+1). The latter seems easier, especially if you already have a working homogeneous clipping pipeline. You just have to set w to 1, and care about the (z<-w) issue (and not z<0 as usual)... It even allows you to use extremely efficiently some buggy SIMD instructions that are not well suited for classical frustum clipping, but are perfectly suited for that. 
If you do that, you will need to store extra information (which is basically the inverse of the world->box space transformation previously stored) to be able to transform your clipped results into viewport space (which is, I suppose your final goal). As my code that did the clip was ultra-experimental-debug-stuff, I did a brute force matrix inversion, and I definitely don't recommend it! ----- Original Message ----- From: "Doug Chism" <dc...@d-...> To: <gda...@li...> Sent: Monday, August 14, 2000 10:20 PM Subject: [Algorithms] Clipping against OBBs > Does anyone know of any good code sources for fast clipping of > lines/triangles against OBBs? I guess the idea would be to pull the planes > out of the OBB and clip against them - something along that line of thought? > If you are keeping track of the Center, Corner, Axes, Normalized Axes, and > Half Size, whats the fastest way to get the 6 plane equations of the OBB > from that ( or if cheaper/faster clipping approach available is there info > on that? ) > > Doug Chism > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Chris B. <Chr...@ma...> - 2000-08-14 23:36:49
|
I was going to send this e-mail directly to Mr Bloom for help but thought that other lurkers here might benefit from the answer I receive. I've been reading Charles Bloom's website documentation of VIPMs and am starting to get a little more excited about them now that I realise how much they can simplify the rendering pipeline, amongst other things... I have some questions though: - From what I've read, I believe that the VIPM mesh would resemble something along the lines of a Triangulated Irregular Network that is locked to a grid. I don't, however, understand the edge-collapsing and vertex-insertion routines. I understand the error array that would come with each patch (chunk), but not what to do with it. - The other thing I don't understand is the edge matching for terrain chunks. Under something like a triangle or quad ROAM implementation the edge mapping is easy to visualise, but I can't quite grasp how the edges get formed. Does the chunk start at minimum resolution, i.e. a square, then get built up as the error is applied and the surface subdivided? If so, at what point is an edge broken? If anyone is interested in helping me understand this, I'm happy to write up a pretty graphical document that'll explain all this to other people new to the technique (partly so you guys don't have to answer the question again, partly so I know I understand the technique properly). Many thanks Chris Brodie |
From: gl <gl...@nt...> - 2000-08-14 23:02:47
|
Nvidia has a demo and paper on exactly that. Check out http://www.nvidia.com/Marketing/Developer/DevRel.nsf/pages/74C794552AB4A65E8 825687300757A8F (watch for line wraps) -- gl ----- Original Message ----- From: "Vladimir Kajalin" <vka...@si...> To: <gda...@li...> Sent: Monday, August 14, 2000 10:08 PM Subject: Re: [Algorithms] Terrain Normals > Hi All, > > I want to use cubemapping to create refraction's in transparent > objects. (glass vase) > There is no ready texgen mode for that so I need to calculate > tex coordinates myself. How can I do that? > I think it should be at least 3 tex coordinates. > > Thanks > Vlad > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Pai-Hung C. <pa...@ac...> - 2000-08-14 22:15:21
|
> That's simply a greenish-blue alpha-polygon (no texture). It basically is > one big square polygon that cuts through the whole terrain. The reason why > that does not look too crappy is, because you see the lit terrain beneath > the water. (1) Is it because "the terrain beneath the water is lit" that makes the water look good? The terrain is lit either way, with or without a lightmap. Does that make a difference? (2) How do you make the terrain mesh? It looks like ROAM. Pai-Hung Chen |
From: Vladimir K. <vka...@si...> - 2000-08-14 22:10:40
|
Hi All, I want to use cubemapping to create refractions in transparent objects (e.g. a glass vase). There is no ready texgen mode for that, so I need to calculate the tex coordinates myself. How can I do that? I think I need at least 3 tex coordinates. Thanks Vlad |
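One common answer to the question above: compute a per-vertex refraction direction with Snell's law and use its x/y/z components directly as the three cube-map coordinates. A sketch (the names are illustrative; `eta` is the ratio of refractive indices, e.g. roughly 1/1.5 going from air into glass):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Refract the unit incident direction I about the unit normal N.
// eta = n_outside / n_inside. Returns false on total internal reflection.
bool Refract(const Vec3& I, const Vec3& N, float eta, Vec3& out)
{
    float cosI = -Dot(N, I);  // cosine of the incidence angle
    float k = 1.0f - eta * eta * (1.0f - cosI * cosI);
    if (k < 0.0f) return false;  // total internal reflection
    float s = eta * cosI - std::sqrt(k);
    out = { eta * I.x + s * N.x,
            eta * I.y + s * N.y,
            eta * I.z + s * N.z };
    // out.x / out.y / out.z are the three cube-map texture coordinates.
    return true;
}
```

A straight-on ray (I anti-parallel to N) passes through unbent for any eta, which is a handy sanity check; in practice you would fall back to a reflection lookup when Refract reports total internal reflection.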
From: Graham S. R. <gr...@se...> - 2000-08-14 21:38:10
|
Hello, I am just curious. How many here, if any, are planning to attend the XGDC conference hosted by Xtreme Games? Anyone go last year? I need to decide whether to push hard to get a paper done before mid-September or just focus on preparing something for GDC. Graham Rhodes |
From: Klaus H. <k_h...@os...> - 2000-08-14 21:14:53
|
That's simply a greenish-blue alpha-polygon (no texture). It basically is one big square polygon that cuts through the whole terrain. The reason why that does not look too crappy is that you see the lit terrain beneath the water. If it's important for you, the color of the polygon is: r = 0.01 g = 0.2 b = 0.4 a = 0.7 Niki ----- Original Message ----- From: Thomas Luzat <tho...@gm...> To: <gda...@li...> Sent: Monday, August 14, 2000 10:45 PM Subject: Re: [Algorithms] Terrain Normals > > Please ignore those 'lines' that look like a bug... these are texture > seams, > > and it's sort of difficult to get rid of them. > > > > http://www.thecore.de/TheCore/img1.jpg > > http://www.thecore.de/TheCore/img2.jpg > > BTW, how do you render the water surface in those pictures? > > > Thanks, > Thomas > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Thomas L. <tho...@gm...> - 2000-08-14 20:45:17
|
> Please ignore those 'lines' that look like a bug... these are texture seams, > and it's sort of difficult to get rid of them. > > http://www.thecore.de/TheCore/img1.jpg > http://www.thecore.de/TheCore/img2.jpg BTW, how do you render the water surface in those pictures? Thanks, Thomas |
From: Klaus H. <k_h...@os...> - 2000-08-14 20:35:10
|
Hi, I made two screen shots for you. These images are not supposed to look good, but they show that lightmaps don't introduce those Gouraud shading artifacts. The lightmap contains only a single color (at varying intensities), and two directional light sources have been used. There's exactly one texel per data point in the height field. Please ignore those 'lines' that look like a bug... these are texture seams, and it's sort of difficult to get rid of them. http://www.thecore.de/TheCore/img1.jpg http://www.thecore.de/TheCore/img2.jpg Niki ----- Original Message ----- From: Pai-Hung Chen <pa...@ac...> To: <gda...@li...> Sent: Monday, August 14, 2000 6:53 PM Subject: Re: [Algorithms] Terrain Normals > Hi, > > Thanks for the help! > > > Personally, I use pre-lit textures or lightmaps (using Phong shading) to > > eliminate this effect, but the following algorithm to compute the vertex > > normals should already reduce (or maybe even eliminate) those seams > (Thanks > > to Peter Dimov for the algorithm :). > > Wouldn't the same artifacts still be visible in the pre-lit lightmaps since > the lightmaps' color/intensity are still affected by the _faulty_ normals > when Gouraud shading is used (at least in D3D's case)? > > Also, could you briefly explain the reasoning of the normal-generating > algorithm you suggested as follows? A high-level, general idea is fine. :-) > Thank you. 
> > Pai-Hung Chen > > > > void VertexNormal( > > Vector3& normal, > > long col, > > long row, > > float gridSpacing) > > { > > float nx, nz, denom; > > > > if (col > 0 && col < m_fieldSize - 1) > > { > > nx = GetElev(col - 1, row) - GetElev(col + 1, row); > > } > > else if (col > 0) > > { > > nx = 2.0f * (GetElev(col - 1, row) - GetElev(col, row)); > > } > > else nx = 2.0f * (GetElev(col, row) - GetElev(col + 1, row)); > > > > if (row > 0 && row < m_fieldSize - 1) > > { > > nz = GetElev(col, row - 1) - GetElev(col, row + 1); > > } > > else if (row > 0) > > { > > nz = 2.0f * (GetElev(col, row - 1) - GetElev(col, row)); > > } > > else nz = 2.0f * (GetElev(col, row) - GetElev(col, row + 1)); > > > > gridSpacing *= 2.0f; > > > > denom = 1.0f / sqrt(nx * nx + gridSpacing * gridSpacing + nz * nz); > > > > normal.x = nx * denom; > > normal.y = gridSpacing * denom; > > normal.z = nz * denom; > > } > > > > "normal" is the unit vertex normal that is returned by the function. > "(col, > > row)" represents the data point in the height field for which you want to > > compute the vertex normal, and "gridSpacing" is the spacing between two > > adjacent vertices (normally, gridSpacing is 1, unless you decide to scale > > the height field). Finally, GetElev() returns the elevation of a particular > > data point in the height field. > > > > HTH, > > Niki > > > > ----- Original Message ----- > > From: Pai-Hung Chen <pa...@ac...> > > To: <gda...@li...> > > Sent: Monday, August 14, 2000 8:22 AM > > Subject: [Algorithms] Terrain Normals > > > > > > > Hi, > > > > > > I've written a routine to read a bitmap heightfield and construct a > > terrain > > > mesh from it. I partition each set of four pixels into two triangles > > along > > > the upper-right-to-lower-left diagonal line. Therefore, for example, a > > 128 > > > x 128 bitmap will produce (128-1) x (128-1) x 2 = 32258 triangles using > my > > > method. For each triangle I calculate its unit face normal. 
For each > > vertex > > > of a triangle, I calculate its vertex normal by adding and averaging the > > > face normals of all triangles that share the vertex (in my case a vertex > > can > > > be shared by at most six triangles and at least one triganle) and then > > > normalize the averaged normal, which is used in redering. My problem is > > > that there are some *horizontal dimmed banding* effects along the > triangle > > > edges of the mesh when rendering in D3D with light cast on it. Also, > > there > > > are very visible dimmed seams between triangles when light is cast on > it. > > > All these artifacts seem to manifest only when light is cast in large > > angles > > > (i.e. not directly). Is it because the arrangement of triangles in my > > mesh > > > is too *uniform* (actually they are extremely uniform)? Or is my > > > calculation of vertex normals incorrect? > > > > > > Thanks in advance, > > > > > > Pai-Hung Chen > > > > > > > > > > > > _______________________________________________ > > > GDAlgorithms-list mailing list > > > GDA...@li... > > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > > > > > _______________________________________________ > > GDAlgorithms-list mailing list > > GDA...@li... > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Doug C. <dc...@d-...> - 2000-08-14 20:14:24
|
Does anyone know of any good code sources for fast clipping of lines/triangles against OBBs? I guess the idea would be to pull the planes out of the OBB and clip against them - something along that line of thought? If you are keeping track of the Center, Corner, Axes, Normalized Axes, and Half Size, what's the fastest way to get the 6 plane equations of the OBB from that (or, if a cheaper/faster clipping approach is available, is there info on that?) Doug Chism |
From: Pai-Hung C. <pa...@ac...> - 2000-08-14 19:46:23
|
Hi, How can your bilinear filtering method (as depicted below) be used with the terrain layout (as depicted at bottom, the same as mine)? Thank you, Pai-Hung Chen > I use a form of bilinear filtering to calculate my vertex normals, like so: > > +-----+ > |\ 1 /| > | \ / | > |4 X 2| > | / \ | > |/ 3 \| > +-----+ > > Where the normal of vertex X is the average of the normals of triangles > 1...4. Note that the above does _not_ represent the actual layout of my > terrain mesh, which has the following (probably quite familiar) layout: > > +-----+ > | /| /| > |/ |/ | > +--+--+ > | /| /| > |/ |/ | > +-----+ > > This seems to work when it comes to avoiding the gray bands you are > referring to. > > Hope this helps, > > Jim Offerman > > Innovade > - designing the designer > > ----- Original Message ----- > From: "Pai-Hung Chen" <pa...@ac...> > To: <gda...@li...> > Sent: Monday, August 14, 2000 8:22 AM > Subject: [Algorithms] Terrain Normals > > > > Hi, > > > > I've written a routine to read a bitmap heightfield and construct a > terrain > > mesh from it. I partition each set of four pixels into two triangles > along > > the upper-right-to-lower-left diagonal line. Therefore, for example, a > 128 > > x 128 bitmap will produce (128-1) x (128-1) x 2 = 32258 triangles using my > > method. For each triangle I caculate its unit face normal. For each > vertex > > of a triangle, I calculate its vertex normal by adding and averaging the > > face normals of all triangles that share the vertex (in my case a vertex > can > > be shared by at most six triangles and at least one triganle) and then > > normalize the averaged normal, which is used in redering. My problem is > > that there are some *horizontal dimmed banding* effects along the triangle > > edges of the mesh when rendering in D3D with light cast on it. Also, > there > > are very visible dimmed seams between triangles when light is cast on it. 
> > All these artifacts seem to manifest only when light is cast in large > angles > > (i.e. not directly). Is it because the arrangement of triangles in my > mesh > > is too *uniform* (actually they are extremely uniform)? Or is my > > calculation of vertex normals incorrect? > > > > Thanks in advance, > > > > Pai-Hung Chen > > > > > > > > _______________________________________________ > > GDAlgorithms-list mailing list > > GDA...@li... > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Pallister, K. <kim...@in...> - 2000-08-14 19:42:59
|
I had similar problems that I assumed were just the gouraud shading, until I drew the normals, and then could see the problem clearly (a code bug). Draw the normals. Not hard, and it will help you. Kim Pallister We will find a way or we will make one. - Hannibal > -----Original Message----- > From: Pai-Hung Chen [mailto:pa...@ac...] > Sent: Sunday, August 13, 2000 11:23 PM > To: gda...@li... > Subject: [Algorithms] Terrain Normals > > > Hi, > > I've written a routine to read a bitmap heightfield and > construct a terrain > mesh from it. I partition each set of four pixels into two > triangles along > the upper-right-to-lower-left diagonal line. Therefore, for > example, a 128 > x 128 bitmap will produce (128-1) x (128-1) x 2 = 32258 > triangles using my > method. For each triangle I caculate its unit face normal. > For each vertex > of a triangle, I calculate its vertex normal by adding and > averaging the > face normals of all triangles that share the vertex (in my > case a vertex can > be shared by at most six triangles and at least one triganle) and then > normalize the averaged normal, which is used in redering. My > problem is > that there are some *horizontal dimmed banding* effects along > the triangle > edges of the mesh when rendering in D3D with light cast on > it. Also, there > are very visible dimmed seams between triangles when light is > cast on it. > All these artifacts seem to manifest only when light is cast > in large angles > (i.e. not directly). Is it because the arrangement of > triangles in my mesh > is too *uniform* (actually they are extremely uniform)? Or is my > calculation of vertex normals incorrect? > > Thanks in advance, > > Pai-Hung Chen > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Jim O. <j.o...@in...> - 2000-08-14 18:10:35
|
I use a form of bilinear filtering to calculate my vertex normals, like so: +-----+ |\ 1 /| | \ / | |4 X 2| | / \ | |/ 3 \| +-----+ Where the normal of vertex X is the average of the normals of triangles 1...4. Note that the above does _not_ represent the actual layout of my terrain mesh, which has the following (probably quite familiar) layout: +-----+ | /| /| |/ |/ | +--+--+ | /| /| |/ |/ | +-----+ This seems to work when it comes to avoiding the gray bands you are referring to. Hope this helps, Jim Offerman Innovade - designing the designer ----- Original Message ----- From: "Pai-Hung Chen" <pa...@ac...> To: <gda...@li...> Sent: Monday, August 14, 2000 8:22 AM Subject: [Algorithms] Terrain Normals > Hi, > > I've written a routine to read a bitmap heightfield and construct a terrain > mesh from it. I partition each set of four pixels into two triangles along > the upper-right-to-lower-left diagonal line. Therefore, for example, a 128 > x 128 bitmap will produce (128-1) x (128-1) x 2 = 32258 triangles using my > method. For each triangle I caculate its unit face normal. For each vertex > of a triangle, I calculate its vertex normal by adding and averaging the > face normals of all triangles that share the vertex (in my case a vertex can > be shared by at most six triangles and at least one triganle) and then > normalize the averaged normal, which is used in redering. My problem is > that there are some *horizontal dimmed banding* effects along the triangle > edges of the mesh when rendering in D3D with light cast on it. Also, there > are very visible dimmed seams between triangles when light is cast on it. > All these artifacts seem to manifest only when light is cast in large angles > (i.e. not directly). Is it because the arrangement of triangles in my mesh > is too *uniform* (actually they are extremely uniform)? Or is my > calculation of vertex normals incorrect? 
> > Thanks in advance, > > Pai-Hung Chen > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
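Jim's averaging step can be sketched as follows: each surrounding triangle's face normal comes from a cross product of two edges, and the vertex normal is the normalized sum. All names below are illustrative:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Unnormalized face normal of triangle (a, b, c), counter-clockwise winding.
Vec3 FaceNormal(const Vec3& a, const Vec3& b, const Vec3& c)
{
    return Cross(Sub(b, a), Sub(c, a));
}

// Normalized average of the face normals around a vertex. Summing the
// unnormalized cross products and normalizing once is equivalent to
// averaging, and implicitly weights larger triangles more heavily.
Vec3 AverageNormal(const Vec3* faceNormals, int count)
{
    Vec3 n = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < count; ++i) {
        n.x += faceNormals[i].x;
        n.y += faceNormals[i].y;
        n.z += faceNormals[i].z;
    }
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}
```

Whether you feed in the 4 triangles of Jim's diamond or the 6 triangles of the diagonal-split grid, the code is the same; only the set of faces gathered around the vertex changes.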
From: Will P. <wi...@cs...> - 2000-08-14 18:06:05
|
> Actually for *that* one, I really would like to understand the whole theory > before coding anything. I can't imagine how I would be able to fix a bug in > the middle of this distance subalgorithm for example! :) ...brrr... Now to > tell the truth, I already coded the search for supporting vertices with > hill-climbing, but I don't think that's the difficult part ! I agree... I'd definitely need to understand an algorithm before I can debug it, but precision in algorithms is important (of course), and I find it easier to be precise in C++ than in the universal language of mathematics (obscure movie reference). > All in all, the whole system seems very easy to implement once you get the > ideas. If you look at the existing implementations, the code is always very > small, even with all the optimizations and caches. I got 3 of them: (do you > know some others?) > > - the enhanced GJK version 2.4 by Stephen Cameron. Without doubt the ugliest > code I've ever seen :) But it was made on purpose.... I think even with comments and indenting kept in there it wouldn't be that readable. I'm a big fan of variables that mean something. > - the SOLID 2.0 version. Not very much better, ahah! Comments are almost > inexistant, and the use of caches and bit-arrays make the whole thing quite > unclear. Yeah, the paper is the best description of the algorithm I've seen, but the set operations could have been encapsulated for easier reading with no loss in efficiency. > - the Graphics Gem IV version by Rich Rabbitz. Surprisingly the clearest > version among those three - also the slowest since this is the basic GJK > without any optimizations. I wish I had the book anyway (I just have the > code) because it may contains some clear explanations about that famous > Delta equation. I guess some of you guys have that book, hmmm? Anything > useful there? 
I'll send you the article offline (it's like I walked to the library to make a copy for you, and then walked over to France to hand it to you, right?) > > I haven't yet because the formula isn't really very clear to me. For > > example, it says that delta_i (y_i) is 1, but it never says what > > delta_(not i) (y_i) is. Perhaps it's zero, but it's certainly not written > > there. > > Do we even need those values? You're right. I think they might not be needed. I just like to see all cases covered in a recursion. :) > - in Van Den Bergen's paper about GJK, page 5: > "This subset X is the largest of all nonempty Z....". Largest ? I would have > said the smallest...? Typo ? It would have to be... otherwise the set could grow very large or even worse: there would be no termination to the algorithm because it could cycle easier. > - The delta formula is something like: > > Delta(j)(X+yj) = Sigma(i in IX) (Delta(i)(X) * (yi | yk - yi | yj)) > > ...where | is a dot product. > > Since k is said to be fixed, why don't they rewrite it that way : > Delta(j)(X+yj) = (yk - yj) | Sigma(i in IX) (Delta(i)(X) * yi) > > ...which saves a lot of dot products, and maybe make all those caches > useless ? using your notation: Delta(j)(X + yj) = Sigma(i in IX) (Delta(i)(X) * ((yi | yk) - (yi | yj))) Delta(j)(X + yj) = Sigma(i in IX) (Delta(i)(X) * (yi | (yk - yj))) I don't think you can do that last step. Will ---- Will Portnoy http://www.cs.washington.edu/homes/will |
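For what it's worth, the factoring under discussion is just bilinearity of the dot product: with k and j fixed inside the sum, (yi | yk) - (yi | yj) = yi | (yk - yj), so the whole sum can be pulled together as (yk - yj) | Sigma(Delta(i)(X) * yi). A numeric spot check with arbitrary made-up values (nothing here comes from an actual GJK run):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Per-term form:  Sigma_i  d_i * ((y_i | y_k) - (y_i | y_j))
float DeltaPerTerm(const Vec3* y, const float* d, int n, int k, int j)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += d[i] * (Dot(y[i], y[k]) - Dot(y[i], y[j]));
    return sum;
}

// Factored form:  (y_k - y_j) | Sigma_i d_i * y_i
float DeltaFactored(const Vec3* y, const float* d, int n, int k, int j)
{
    Vec3 acc = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < n; ++i) {
        acc.x += d[i] * y[i].x;
        acc.y += d[i] * y[i].y;
        acc.z += d[i] * y[i].z;
    }
    Vec3 diff = { y[k].x - y[j].x, y[k].y - y[j].y, y[k].z - y[j].z };
    return Dot(diff, acc);
}
```

Whether dropping the per-term dot products actually saves work in practice is a separate question: the cached dot products are reused across many sub-simplices, which is presumably why the published implementations keep them.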
From: Pai-Hung C. <pa...@ac...> - 2000-08-14 16:58:49
|
Hi, Thanks for the help! > Personally, I use pre-lit textures or lightmaps (using Phong shading) to > eliminate this effect, but the following algorithm to compute the vertex > normals should already reduce (or maybe even eliminate) those seams (Thanks > to Peter Dimov for the algorithm :). Wouldn't the same artifacts still be visible in the pre-lit lightmaps since the lightmaps' color/intensity are still affected by the _faulty_ normals when Gouraud shading is used (at least in D3D's case)? Also, could you briefly explain the reasoning of the normal-generating algorithm you suggested as follows? A high-level, general idea is fine. :-) Thank you. Pai-Hung Chen > void VertexNormal( > Vector3& normal, > long col, > long row, > float gridSpacing) > { > float nx, nz, denom; > > if (col > 0 && col < m_fieldSize - 1) > { > nx = GetElev(col - 1, row) - GetElev(col + 1, row); > } > else if (col > 0) > { > nx = 2.0f * (GetElev(col - 1, row) - GetElev(col, row)); > } > else nx = 2.0f * (GetElev(col, row) - GetElev(col + 1, row)); > > if (row > 0 && row < m_fieldSize - 1) > { > nz = GetElev(col, row - 1) - GetElev(col, row + 1); > } > else if (row > 0) > { > nz = 2.0f * (GetElev(col, row - 1) - GetElev(col, row)); > } > else nz = 2.0f * (GetElev(col, row) - GetElev(col, row + 1)); > > gridSpacing *= 2.0f; > > denom = 1.0f / sqrt(nx * nx + gridSpacing * gridSpacing + nz * nz); > > normal.x = nx * denom; > normal.y = gridSpacing * denom; > normal.z = nz * denom; > } > > "normal" is the unit vertex normal that is returned by the function. "(col, > row)" represents the data point in the height field for which you want to > compute the vertex normal, and "gridSpacing" is the spacing between two > adjacent vertices (normally, gridSpacing is 1, unless you decide to scale > the height field). Finally, GetElev() returns the elevation of a particular > data point in the height field. 
> > HTH, > Niki > > ----- Original Message ----- > From: Pai-Hung Chen <pa...@ac...> > To: <gda...@li...> > Sent: Monday, August 14, 2000 8:22 AM > Subject: [Algorithms] Terrain Normals > > > > Hi, > > > > I've written a routine to read a bitmap heightfield and construct a > terrain > > mesh from it. I partition each set of four pixels into two triangles > along > > the upper-right-to-lower-left diagonal line. Therefore, for example, a > 128 > > x 128 bitmap will produce (128-1) x (128-1) x 2 = 32258 triangles using my > > method. For each triangle I caculate its unit face normal. For each > vertex > > of a triangle, I calculate its vertex normal by adding and averaging the > > face normals of all triangles that share the vertex (in my case a vertex > can > > be shared by at most six triangles and at least one triganle) and then > > normalize the averaged normal, which is used in redering. My problem is > > that there are some *horizontal dimmed banding* effects along the triangle > > edges of the mesh when rendering in D3D with light cast on it. Also, > there > > are very visible dimmed seams between triangles when light is cast on it. > > All these artifacts seem to manifest only when light is cast in large > angles > > (i.e. not directly). Is it because the arrangement of triangles in my > mesh > > is too *uniform* (actually they are extremely uniform)? Or is my > > calculation of vertex normals incorrect? > > > > Thanks in advance, > > > > Pai-Hung Chen > > > > > > > > _______________________________________________ > > GDAlgorithms-list mailing list > > GDA...@li... > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > |
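On the "reasoning" question above: the quoted function is a central-difference gradient estimate. For a height field y = h(x, z), an (unnormalized) surface normal is (-dh/dx, 1, -dh/dz); sampling the neighbors gives dh/dx roughly (h(col+1) - h(col-1)) / (2 * gridSpacing), and multiplying the whole vector through by 2 * gridSpacing yields exactly the (nx, 2 * gridSpacing, nz) vector the code normalizes. A self-contained sketch of the interior case (names are illustrative, not from Niki's class):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Central-difference vertex normal for an interior height-field point.
// heights is row-major, fieldSize * fieldSize entries.
Vec3 HeightFieldNormal(const float* heights, int fieldSize,
                       int col, int row, float gridSpacing)
{
    // Scaled gradient: proportional to (-dh/dx, 1, -dh/dz).
    float nx = heights[row * fieldSize + (col - 1)]
             - heights[row * fieldSize + (col + 1)];
    float nz = heights[(row - 1) * fieldSize + col]
             - heights[(row + 1) * fieldSize + col];
    float ny = 2.0f * gridSpacing;

    float invLen = 1.0f / std::sqrt(nx * nx + ny * ny + nz * nz);
    return { nx * invLen, ny * invLen, nz * invLen };
}
```

For the plane h(x, z) = 0.5 * x this returns (-0.5, 1, 0) normalized, which matches the analytic plane normal, a quick way to convince yourself the formula is right. Because each normal is derived directly from the smooth height data rather than from per-face normals, it sidesteps the faceting that face-normal averaging can leave behind.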