gdalgorithms-list Mailing List for Game Dev Algorithms (Page 1413)
From: Bass, G. T. <gt...@ut...> - 2000-08-14 16:17:53
|
> You could always avoid using normals. Often you can use various tricks to > get nice lighting without them. Conor, I'd like to know some of these tricks. Where can I find more info? Regards, Garett Bass gt...@ut... |
From: Vladimir K. <vka...@si...> - 2000-08-14 15:37:51
|
Conor, > You could always avoid using normals. Often you can use various tricks to > get nice lighting without them. Can you explain it a bit? In any case, I will need normals for cube- and bump-mapping. Thanks, vlad |
From: John S. <jse...@ho...> - 2000-08-14 13:44:57
|
OK, here's an update. I commented out the culling routines I had in the recursive portion of my patch splitter and added some simple culling to the patch setup routine and got a 40% drop in frame rate! Granted, the culling routines I had weren't filling the bill and would have to be slower to be correct, and my test data is optimal, consisting of spheres built from six patches (like a cube). Anyway, the problem with this seems to be that I've written my routine specifically to build up the edges of objects to reduce faceting, so I end up with a lot of "borderline" subpatches. So I either eat the loss in frame rate, or I'm back to square one. :-( John Sensebe jse...@ho... Quantum mechanics is God's way of ensuring that we never really know what's going on. Check out http://members.home.com/jsensebe to see prophecies for the coming Millennium! ----- Original Message ----- From: "Tom Forsyth" <to...@mu...> To: <gda...@li...> Sent: Saturday, August 12, 2000 2:42 PM Subject: RE: [Algorithms] Bicubic normals for a bicubic world > Hmmmm... OK. Tricky. > > Incidentally, I would question whether you want to bother with BFC of > subpatches. I would go with the philosophy that since most patches will be > wholly rejected or wholly accepted, the borderline cases are few in number. > However, if you're doing a BFC calculation + test per sub-patch, that seems > like a fair amount of work for _all_ visible patches, just to save a bit on > borderline patches. > > My instincts would be to BFC a top-level patch, then set up your Difference > Engine to the required tesselation level and just draw the whole thing, > letting the hardware do tri-level BFC. > > The edges where different levels of tesselation meet require a bit of > thought. "Cheating" by simply stitching them together with slim tris after > drawing each patch at different levels is a possible option, and very quick > (it's a second forward-difference engine, but only in one direction, along > the crease). 
But you tend to get lighting differences between the two > patches, which can be visible. > > You can also special-case the edges of the patch drawer so that it fans the > edge tris to match adjacent patches, which looks quite good. And it's pretty > good for speed, since you can do this by shrinking the full speed > tesselation by one on each edge that needs higher tesselation (fewer than > half the edges of the scene will need extra fanning(*)), then doing the > fanned edges with special case (but exceeding similar-looking) code. > > Anyway, there's probably some very good reason why you're doing it > recursively, so I've probably just been ranting. Sorry! > > Tom Forsyth - Muckyfoot bloke. > Whizzing and pasting and pooting through the day. > > (*) Ah - almost true. True if you don't BFC the patches :-) Close enough for > performance considerations though. > |
From: Klaus H. <k_h...@os...> - 2000-08-14 12:06:17
|
Not sure what you mean by "horizontally dimmed banding", but I guess you are talking about the typical Gouraud shading artifacts. These can happen if the light intensity at adjacent vertices differs very much (e.g. the two shared vertices of two triangles are white, and the other two non-shared vertices of the two triangles are black). Personally, I use pre-lit textures or lightmaps (using Phong shading) to eliminate this effect, but the following algorithm to compute the vertex normals should already reduce (or maybe even eliminate) those seams (thanks to Peter Dimov for the algorithm :).

void VertexNormal(Vector3& normal, long col, long row, float gridSpacing)
{
    float nx, nz, denom;

    if (col > 0 && col < m_fieldSize - 1) {
        nx = GetElev(col - 1, row) - GetElev(col + 1, row);
    } else if (col > 0) {
        nx = 2.0f * (GetElev(col - 1, row) - GetElev(col, row));
    } else {
        nx = 2.0f * (GetElev(col, row) - GetElev(col + 1, row));
    }

    if (row > 0 && row < m_fieldSize - 1) {
        nz = GetElev(col, row - 1) - GetElev(col, row + 1);
    } else if (row > 0) {
        nz = 2.0f * (GetElev(col, row - 1) - GetElev(col, row));
    } else {
        nz = 2.0f * (GetElev(col, row) - GetElev(col, row + 1));
    }

    gridSpacing *= 2.0f;
    denom = 1.0f / sqrt(nx * nx + gridSpacing * gridSpacing + nz * nz);
    normal.x = nx * denom;
    normal.y = gridSpacing * denom;
    normal.z = nz * denom;
}

"normal" is the unit vertex normal that is returned by the function. "(col, row)" represents the data point in the height field for which you want to compute the vertex normal, and "gridSpacing" is the spacing between two adjacent vertices (normally, gridSpacing is 1, unless you decide to scale the height field). Finally, GetElev() returns the elevation of a particular data point in the height field. 
HTH, Niki ----- Original Message ----- From: Pai-Hung Chen <pa...@ac...> To: <gda...@li...> Sent: Monday, August 14, 2000 8:22 AM Subject: [Algorithms] Terrain Normals > Hi, > > I've written a routine to read a bitmap heightfield and construct a terrain > mesh from it. I partition each set of four pixels into two triangles along > the upper-right-to-lower-left diagonal line. Therefore, for example, a 128 > x 128 bitmap will produce (128-1) x (128-1) x 2 = 32258 triangles using my > method. For each triangle I caculate its unit face normal. For each vertex > of a triangle, I calculate its vertex normal by adding and averaging the > face normals of all triangles that share the vertex (in my case a vertex can > be shared by at most six triangles and at least one triganle) and then > normalize the averaged normal, which is used in redering. My problem is > that there are some *horizontal dimmed banding* effects along the triangle > edges of the mesh when rendering in D3D with light cast on it. Also, there > are very visible dimmed seams between triangles when light is cast on it. > All these artifacts seem to manifest only when light is cast in large angles > (i.e. not directly). Is it because the arrangement of triangles in my mesh > is too *uniform* (actually they are extremely uniform)? Or is my > calculation of vertex normals incorrect? > > Thanks in advance, > > Pai-Hung Chen > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Pai-Hung C. <pa...@ac...> - 2000-08-14 06:28:31
|
Hi, I've written a routine to read a bitmap heightfield and construct a terrain mesh from it. I partition each set of four pixels into two triangles along the upper-right-to-lower-left diagonal line. Therefore, for example, a 128 x 128 bitmap will produce (128-1) x (128-1) x 2 = 32258 triangles using my method. For each triangle I calculate its unit face normal. For each vertex of a triangle, I calculate its vertex normal by adding and averaging the face normals of all triangles that share the vertex (in my case a vertex can be shared by at most six triangles and at least one triangle) and then normalize the averaged normal, which is used in rendering. My problem is that there are some *horizontal dimmed banding* effects along the triangle edges of the mesh when rendering in D3D with light cast on it. Also, there are very visible dimmed seams between triangles when light is cast on it. All these artifacts seem to manifest only when light is cast at large angles (i.e. not directly). Is it because the arrangement of triangles in my mesh is too *uniform* (actually they are extremely uniform)? Or is my calculation of vertex normals incorrect? Thanks in advance, Pai-Hung Chen |
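For reference, the accumulate-then-normalize scheme Pai-Hung describes can be sketched as below. This is a minimal illustration with made-up Vec3 helpers, not his actual routine; summing the unnormalized cross products weights each face by its area, which is one common variant of the averaging he describes.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Accumulate each face normal into its three vertices, then normalize once
// per vertex. "tris" holds vertex indices, three per triangle.
std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<unsigned>& tris)
{
    std::vector<Vec3> n(verts.size(), Vec3{0, 0, 0});
    for (std::size_t t = 0; t + 2 < tris.size(); t += 3) {
        const Vec3& a = verts[tris[t]];
        const Vec3& b = verts[tris[t + 1]];
        const Vec3& c = verts[tris[t + 2]];
        // Unnormalized face normal: its length is twice the triangle's area.
        Vec3 fn = cross(Vec3{b.x - a.x, b.y - a.y, b.z - a.z},
                        Vec3{c.x - a.x, c.y - a.y, c.z - a.z});
        for (int k = 0; k < 3; ++k) {
            Vec3& dst = n[tris[t + k]];
            dst.x += fn.x; dst.y += fn.y; dst.z += fn.z;
        }
    }
    for (Vec3& v : n) v = normalize(v);
    return n;
}
```
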
From: Pierre T. <p.t...@wa...> - 2000-08-14 04:24:19
|
> I'm in the middle of writing the code; I find writing code to implement an > algorithm helps me understand it. Actually for *that* one, I really would like to understand the whole theory before coding anything. I can't imagine how I would be able to fix a bug in the middle of this distance subalgorithm for example! :) ...brrr... Now to tell the truth, I already coded the search for supporting vertices with hill-climbing, but I don't think that's the difficult part! All in all, the whole system seems very easy to implement once you get the ideas. If you look at the existing implementations, the code is always very small, even with all the optimizations and caches. I got 3 of them: (do you know some others?) - the enhanced GJK version 2.4 by Stephen Cameron. Without doubt the ugliest code I've ever seen :) But it was made on purpose.... - the SOLID 2.0 version. Not very much better, ahah! Comments are almost nonexistent, and the use of caches and bit-arrays makes the whole thing quite unclear. - the Graphics Gems IV version by Rich Rabbitz. Surprisingly the clearest version among those three - also the slowest since this is the basic GJK without any optimizations. I wish I had the book anyway (I just have the code) because it may contain some clear explanations about that famous Delta equation. I guess some of you guys have that book, hmmm? Anything useful there? > I haven't yet because the formula isn't really very clear to me. For > example, it says that delta_i (y_i) is 1, but it never says what > delta_(not i) (y_i) is. Perhaps it's zero, but it's certainly not written > there. Do we even need those values? > It's pretty easy to use cramer's rule; I think I'll just rederive it to > see where the formula comes from. It makes some sense that the cofactors > can be written in terms of the cofactors from a matrix representing > smaller simplices. 
> I think it's written to take advantage of the fact > that you want to test all subsets of Y and save computations if possible, > which unfortunately makes it less opaque. :) Agreed. But when I try to rederive it using Cramer's rule and all the determinant properties I can think of, there's always something wrong in the end anyway. I guess I'm missing something important, but I don't know what. I suspect it has something to do with that "k=min(Ix)". For the moment I don't see why Delta(j) doesn't depend on the value of k, and maybe you have to use that fact to derive the formula. Rough guesses. There are two other things bugging me: - in Van Den Bergen's paper about GJK, page 5: "This subset X is the largest of all nonempty Z....". Largest ? I would have said the smallest...? Typo ? - The delta formula is something like: Delta(j)(X+yj) = Sigma(i in IX) (Delta(i)(X) * (yi | yk - yi | yj)) ...where | is a dot product. Since k is said to be fixed, why don't they rewrite it that way : Delta(j)(X+yj) = (yk - yj) | Sigma(i in IX) (Delta(i)(X) * yi) ...which saves a lot of dot products, and maybe makes all those caches useless? Pierre |
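For anyone following along, the recursion under discussion is usually stated as follows (notation after Gilbert, Johnson, and Keerthi; a transcription of the standard formula, not a derivation). The base case also answers the delta_(not i)(y_i) question: for the singleton {y_i}, only Delta_i is defined, and it equals 1.

```latex
\Delta_i(\{y_i\}) = 1,
\qquad
\Delta_j\bigl(X \cup \{y_j\}\bigr)
  = \sum_{i \in I_X} \Delta_i(X)\,\bigl(y_i \cdot y_k - y_i \cdot y_j\bigr),
\qquad k = \min(I_X),\ j \notin I_X.
```

Pierre's factored form does follow from the linearity of the dot product, since the sum equals (yk - yj) | Sigma(i in IX) (Delta(i)(X) * yi); a plausible reason implementations cache the pairwise dot products yi | yj instead is that those are shared across all the subsets X the subalgorithm enumerates.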
From: Conor S. <cs...@tp...> - 2000-08-14 01:39:24
|
You could always avoid using normals. Often you can use various tricks to get nice lighting without them. Conor Stokes > Sounds like you have it right to me. But normals on a deformable mesh are tricky. > > There is one problem I know of with the technique. It only works for rotational joints. If you are using the bones for animation by translating them, the normals will not be correct since a translation can change local surface without a way to fix the normals. > > For instance think of using bones in a grid across a planar surface that you want to use to animate it like water. If you translate up one bone, the local surface will change. But nothing in the rotational part of the bone matrix has changed. You would need to fix up the normals by recomputing them. > > I have not seen any real serious problems with normals when they are only used with a hierarchical skeleton where the bones only rotate. That doesn't mean there are no problems. Linear interpolation of normals and then normalizing the result is probably not perfect. It would be interesting to test the accuracy by generating average normals then deforming a mesh and normals. Then you could compare those normals with ones that you regenerate. > > I also thought once about converting the normal to a quaternion, rotating, and then slerping between them. That may be more accurate, however, it really has never looked bad enough for me to investigate further. > > -Jeff > > At 06:22 PM 8/13/2000 +0300, you wrote: > >Hi, > > > >How to handle normals in skeletal animation? > > > >Currently I use this way : > >After loading model I take first frame and calculate all normals. > >Then I use inverted bone matrices to transform normals into > >some 'pretransformed' state and store it same as offset vectors > >(every vertex has list of pre normals for affected bones) > > > >At rendering time for every affected bone I transform this > >normals using current bone matrix, multiply to the weight, sum it > >and normalize. 
> > > >In result I get something hm.. :) > >In most cases it looks ok but sometimes it looks like some normals > >was calculated using incorrect bones. > > > >In any case I think should be much better way for this. > >Can anybody recommend something? > > > >Thanks > >vlad > > > > > > > >_______________________________________________ > >GDAlgorithms-list mailing list > >GDA...@li... > >http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Jeff L. <je...@di...> - 2000-08-13 23:54:33
|
Sounds like you have it right to me. But normals on a deformable mesh are tricky. There is one problem I know of with the technique. It only works for rotational joints. If you are using the bones for animation by translating them, the normals will not be correct since a translation can change local surface without a way to fix the normals. For instance think of using bones in a grid across a planar surface that you want to use to animate it like water. If you translate up one bone, the local surface will change. But nothing in the rotational part of the bone matrix has changed. You would need to fix up the normals by recomputing them. I have not seen any real serious problems with normals when they are only used with a hierarchical skeleton where the bones only rotate. That doesn't mean there are no problems. Linear interpolation of normals and then normalizing the result is probably not perfect. It would be interesting to test the accuracy by generating average normals then deforming a mesh and normals. Then you could compare those normals with ones that you regenerate. I also thought once about converting the normal to a quaternion, rotating, and then slerping between them. That may be more accurate, however, it really has never looked bad enough for me to investigate further. -Jeff At 06:22 PM 8/13/2000 +0300, you wrote: >Hi, > >How to handle normals in skeletal animation? > >Currently I use this way : >After loading model I take first frame and calculate all normals. >Then I use inverted bone matrices to transform normals into >some 'pretransformed' state and store it same as offset vectors >(every vertex has list of pre normals for affected bones) > >At rendering time for every affected bone I transform this >normals using current bone matrix, multiply to the weight, sum it >and normalize. > >In result I get something hm.. :) >In most cases it looks ok but sometimes it looks like some normals >was calculated using incorrect bones. 
> >In any case I think should be much better way for this. >Can anybody recommend something? > >Thanks >vlad > > > >_______________________________________________ >GDAlgorithms-list mailing list >GDA...@li... >http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
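The blend vlad and Jeff are describing (rotate each pre-transformed normal by its bone's current matrix, weight, sum, normalize) can be sketched as follows. Mat3, Vec3, and blendNormal are hypothetical names for illustration, not anyone's actual engine code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };  // rotation part of a bone matrix

static Vec3 rotate(const Mat3& r, const Vec3& v)
{
    return { r.m[0][0] * v.x + r.m[0][1] * v.y + r.m[0][2] * v.z,
             r.m[1][0] * v.x + r.m[1][1] * v.y + r.m[1][2] * v.z,
             r.m[2][0] * v.x + r.m[2][1] * v.y + r.m[2][2] * v.z };
}

// Blend one vertex normal across its bones: rotate the pre-transformed
// normal by each bone's current rotation, weight it, sum, and renormalize.
Vec3 blendNormal(const Mat3* bones, const Vec3* preNormals,
                 const float* weights, int count)
{
    Vec3 sum{0, 0, 0};
    for (int i = 0; i < count; ++i) {
        Vec3 n = rotate(bones[i], preNormals[i]);
        sum.x += weights[i] * n.x;
        sum.y += weights[i] * n.y;
        sum.z += weights[i] * n.z;
    }
    float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    return { sum.x / len, sum.y / len, sum.z / len };
}
```

Only the rotation part of each bone matrix touches the normal here, which is exactly why, as Jeff notes, purely translational bone motion leaves the normals unchanged even when the surface shape changes.
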
From: Will P. <wi...@cs...> - 2000-08-13 21:03:40
|
> > I definitely agree that it is worth saving the dot products when running > > the sub algorithm once (because of the combinatorial nature), but I'm not > > sure that it's worth doing between calls to the subagorithm (although most > > of the points in the simplex do not change). > > Oh... I mixed things up, right. I forgot there were some caches inside *and* > outside the subalgorithm. Well. I'll save that for later, for the moment I > didn't write a single line of code. I'm in the middle of writing the code; I find writing code to implement an algorithm helps me understand it. > > That makes sense... I think I saw something similar in the gdc hardcore > > proceedings. > > Is there a GDC paper about GJK ? No, it was from http://www.gdconf.com/hardcore/physics.htm. I didn't attend, but my mentor from my summer position was kind enough to purchase the proceedings for me. > Gilbert claims it's "geometrically obvious".... :) But I'm not that good, I > can't even visualize the Minkowski difference of the two polytopes. Everything is obvious after it's understood. I wish scientific papers were written with that in mind. :) > I have some troubles with the recursive formula used to compute the deltas. > Where does it come from? I followed Gilbert's steps, in vain. There's always > something wrong in the end. Did you try to re-derive it ? I haven't yet because the formula isn't really very clear to me. For example, it says that delta_i (y_i) is 1, but it never says what delta_(not i) (y_i) is. Perhaps it's zero, but it's certainly not written there. It's pretty easy to use cramer's rule; I think I'll just rederive it to see where the formula comes from. It makes some sense that the cofactors can be written in terms of the cofactors from a matrix representing smaller simplices. I think it's written to take advantage of the fact that you want to test all subsets of Y and save computations if possible, which unfortunately makes it less opaque. 
:) Will ---- Will Portnoy http://www.cs.washington.edu/homes/will |
From: Klaus H. <k_h...@os...> - 2000-08-13 20:16:58
|
Well, this is where it gets very much off-topic :) So I'll try to answer your question with a single mail. If you still don't get it, then please mail me off-line. First off, let's define: bpr = bytes_per_pixel * pixels_per_row. Now assume that you want to round bpr *down* to the nearest multiple of 4. This can be done by setting the two least-significant bits of bpr to 0. Assuming that you use a positive 32-bit integer, you can achieve this as follows: bpr_rounded_down = bpr & (~3); <=> bpr_rounded_down = bpr & 0xFFFFFFFC; However, in order to compute the pitch, you need to round *up* to the nearest multiple of 4. In order to do this, we first add 3 to bpr, and call the result bpr'. Examples:

bpr = 16 => bpr' = bpr + 3 = 19
bpr = 17 => bpr' = bpr + 3 = 20
bpr = 18 => bpr' = bpr + 3 = 21
bpr = 19 => bpr' = bpr + 3 = 22
bpr = 20 => bpr' = bpr + 3 = 23

Now you use the above approach and round bpr' *down* to the nearest multiple of 4 (which results in the pitch you are looking for):

bpr = 16 => pitch = bpr' & (~3) = 16
bpr = 17 => pitch = bpr' & (~3) = 20
bpr = 18 => pitch = bpr' & (~3) = 20
bpr = 19 => pitch = bpr' & (~3) = 20
bpr = 20 => pitch = bpr' & (~3) = 20

So the formula for the pitch is: pitch = (bpr + 3) & (~3); Another way to implement this would be: pitch = (bpr + 3) - ((bpr + 3) % 4); <=> pitch = (bpr + 3) - ((bpr + 3) & 3); HTH, Niki PS: I'm not going to answer the question of why you need to add 3 instead of 4 :) PPS: Binary operations, like AND, OR, XOR, NOT, et cetera, are very useful, and it makes a lot of sense to learn more about them. ----- Original Message ----- From: Pai-Hung Chen <pa...@ac...> To: <gda...@li...> Sent: Sunday, August 13, 2000 8:31 PM Subject: Re: [Algorithms] OT: How to read a bitmap of arbitrary size? > Hi, > > That explains everything! :-) Thanks a lot! 
But could you explain what's > the meaning of ((bytes_per_pixel * pixels_per_row) + 3) & (~3), specifically > the ...+ 3) & (~3) part? > > Thank you, > > Pai-Hung Chen |
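Niki's formula drops straight into a tiny helper (hypothetical name; a sketch, not anyone's actual loader):

```cpp
// Round a row's byte count up to the next multiple of 4, as the Windows
// BMP format requires: add 3, then clear the two least-significant bits.
unsigned pitchOf(unsigned bytesPerRow)
{
    return (bytesPerRow + 3) & ~3u;
}
```

For a 5x5 24-bit image, pitchOf(15) == 16, so biSizeImage = 5 * 16 = 80, matching the numbers in this thread.
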
From: John S. <jse...@ho...> - 2000-08-13 18:44:54
|
That part is simply rounding up to the nearest 4. John Sensebe jse...@ho... Quantum mechanics is God's way of ensuring that we never really know what's going on. Check out http://members.home.com/jsensebe to see prophecies for the coming Millennium! ----- Original Message ----- From: "Pai-Hung Chen" <pa...@ac...> To: <gda...@li...> Sent: Sunday, August 13, 2000 1:31 PM Subject: Re: [Algorithms] OT: How to read a bitmap of arbitrary size? > Hi, > > That explains everything! :-) Thanks a lot! But could you explain what's > the meaning of ((bytes_per_pixel * pixels_per_row) + 3) & (~3), specifically > the ...+ 3) & (~3) part? > > Thank you, > > Pai-Hung Chen > > > > For Windows BMP files, the pitch must be a multiple of 4 bytes. 5 pixels * > 3 > > bytes = 15 bytes is not a multiple of 4! In such cases, the BMP format > uses > > the next higher value (that is multiple 4) as the pitch. In this specific > > case, the next higher value that is a multiple of 4, is 16. Thus the > Windows > > BMP format uses a pitch of 16 bytes. > > So for a 5x5 BMP you have 5 * (((3 * 5) + 3) & (~3)) = 5 * 16 = 80 bytes. > > (Note, that the additional byte is unused). > > > > In order to compute the pitch for a bit-count >= 8, you can use the > > following formula: > > pitch = ((bytes_per_pixel * pixels_per_row) + 3) & (~3) > > > > biSizeImage then is: biSizeImage = pitch * num_pixel_rows > > > > HTH, > > Niki > > > > ----- Original Message ----- > > From: Pai-Hung Chen <pa...@ac...> > > To: <gda...@li...> > > Sent: Sunday, August 13, 2000 6:57 PM > > Subject: [Algorithms] OT: How to read a bitmap of arbitrary size? > > > > > > > Hi, > > > > > > I have been frustratingly bugged for hours and I decide to resort to the > > > list. I want to read a 24-bit bitmap heightfield. I can correctly read > > it > > > as long as the dimension of the bitmap file is in 2's power such as 4x4, > > > 8x8, 16x16, etc. (But 2x2 doesn't work?!) 
In these _normal_ cases, the > > > *biSizeImage* field of the BITMAPINFOHEADER structure gives me the > correct > > > size of image in bytes. However, if I create a bitmap of non-2's-power > > > dimension such as 5x5, 6x6, 7x7, etc., the *biSizeImage* field gives me > > some > > > bogus number. For example, 5x5 yields 80 bytes (5x5x3 = 75 bytes); 6x6 > > > yields 120 bytes (6x6x3 = 108 bytes); 7x7 yields 168 bytes (7x7x3 = 147 > > > bytes). Could somebody tell me what happened? > > > > > > Thanks in advance, > > > > > > Pai-Hung Chen > > > > > > > > > > > > _______________________________________________ > > > GDAlgorithms-list mailing list > > > GDA...@li... > > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > > > > > _______________________________________________ > > GDAlgorithms-list mailing list > > GDA...@li... > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Pai-Hung C. <pa...@ac...> - 2000-08-13 18:36:34
|
Hi, That explains everything! :-) Thanks a lot! But could you explain what's the meaning of ((bytes_per_pixel * pixels_per_row) + 3) & (~3), specifically the ...+ 3) & (~3) part? Thank you, Pai-Hung Chen > For Windows BMP files, the pitch must be a multiple of 4 bytes. 5 pixels * 3 > bytes = 15 bytes is not a multiple of 4! In such cases, the BMP format uses > the next higher value (that is multiple 4) as the pitch. In this specific > case, the next higher value that is a multiple of 4, is 16. Thus the Windows > BMP format uses a pitch of 16 bytes. > So for a 5x5 BMP you have 5 * (((3 * 5) + 3) & (~3)) = 5 * 16 = 80 bytes. > (Note, that the additional byte is unused). > > In order to compute the pitch for a bit-count >= 8, you can use the > following formula: > pitch = ((bytes_per_pixel * pixels_per_row) + 3) & (~3) > > biSizeImage then is: biSizeImage = pitch * num_pixel_rows > > HTH, > Niki > > ----- Original Message ----- > From: Pai-Hung Chen <pa...@ac...> > To: <gda...@li...> > Sent: Sunday, August 13, 2000 6:57 PM > Subject: [Algorithms] OT: How to read a bitmap of arbitrary size? > > > > Hi, > > > > I have been frustratingly bugged for hours and I decide to resort to the > > list. I want to read a 24-bit bitmap heightfield. I can correctly read > it > > as long as the dimension of the bitmap file is in 2's power such as 4x4, > > 8x8, 16x16, etc. (But 2x2 doesn't work?!) In these _normal_ cases, the > > *biSizeImage* field of the BITMAPINFOHEADER structure gives me the correct > > size of image in bytes. However, if I create a bitmap of non-2's-power > > dimension such as 5x5, 6x6, 7x7, etc., the *biSizeImage* field gives me > some > > bogus number. For example, 5x5 yields 80 bytes (5x5x3 = 75 bytes); 6x6 > > yields 120 bytes (6x6x3 = 108 bytes); 7x7 yields 168 bytes (7x7x3 = 147 > > bytes). Could somebody tell me what happened? 
> > > > Thanks in advance, > > > > Pai-Hung Chen > > > > > > > > _______________________________________________ > > GDAlgorithms-list mailing list > > GDA...@li... > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > |
From: Charles B. <cb...@cb...> - 2000-08-13 18:12:03
|
Let me just point out how a metric can detect these case automatically. The basic error metric is something like "deviation from linear interpolation". Hence, if you have a linear case of (a,b,c) and you're thinking of removing b, then the error is (b - (a+c)/2). This error is related to the second derivative of whatever parameter you're measuring (eg. the curvature). Hence, if you have a triangle which is textured normally but highly tesselated, you will get zero error, because even though the UV's are different, they differ in a linear way, eg. there's no curvature in UV space. However, where you have a reflection, the curvature is quite large. For example, if u = abs(x), then u'' = 0 for x != 0, and u'' = -1 right at x=0. Thus, any collapse away from x=0 will be free, and any collapse across x=0 will be highly penalized. Of course you're right that artist intervention will always be more precise (if you have good artists!!), I just wanted everyone to be sure and understand that a good collapse error metric is very important, and will give you much higher quality meshes, by leaving the discontinuities in place where they're needed. At 10:01 AM 8/9/00 -0700, you wrote: >Now the problem as you would guess is you have the face with coordinates like (1,0) ----- (0,0) ---- (1,0). Obviously more granular then that. There are no texture coordinate discontinuities so the algorithm says it is ok. The surface of the face is roughly flat so the internal vertex and edges are good candidates for collapse. However, you then get the nice situation of (1,0) ------ (1,0). Doesn't look so hot. I have the artists tag reflected seam vertices as uncollapsable until very late in the order. ------------------------------------------------------- Charles Bloom cb...@cb... http://www.cbloom.com |
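For a single scalar attribute, such as one texture coordinate sampled along a row of vertices, Charles's metric reduces to a one-liner (a sketch with a hypothetical name; a real simplifier would evaluate this per attribute and per collapse candidate):

```cpp
#include <cmath>

// Error of removing the middle sample b of the run (a, b, c): how far b
// deviates from the linear interpolation of its neighbours. Zero where the
// attribute varies linearly (no curvature), large across a mirrored seam.
float collapseError(float a, float b, float c)
{
    return std::fabs(b - (a + c) * 0.5f);
}
```

For u = abs(x) sampled at x = -1, 0, 1 the error of removing the seam vertex is 1, while removing a vertex from a linear run costs exactly 0, so the seam survives until late in the collapse order without artist tagging.
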
From: Klaus H. <k_h...@os...> - 2000-08-13 18:10:44
|
For Windows BMP files, the pitch must be a multiple of 4 bytes. 5 pixels * 3 bytes = 15 bytes is not a multiple of 4! In such cases, the BMP format uses the next higher value that is a multiple of 4 as the pitch. In this specific case, the next higher value that is a multiple of 4 is 16, so the Windows BMP format uses a pitch of 16 bytes. So for a 5x5 BMP you have 5 * (((3 * 5) + 3) & (~3)) = 5 * 16 = 80 bytes. (Note that the additional byte is unused.) In order to compute the pitch for a bit-count >= 8, you can use the following formula: pitch = ((bytes_per_pixel * pixels_per_row) + 3) & (~3) biSizeImage then is: biSizeImage = pitch * num_pixel_rows HTH, Niki ----- Original Message ----- From: Pai-Hung Chen <pa...@ac...> To: <gda...@li...> Sent: Sunday, August 13, 2000 6:57 PM Subject: [Algorithms] OT: How to read a bitmap of arbitrary size? > Hi, > > I have been frustratingly bugged for hours and I decide to resort to the > list. I want to read a 24-bit bitmap heightfield. I can correctly read > it > as long as the dimension of the bitmap file is in 2's power such as 4x4, > 8x8, 16x16, etc. (But 2x2 doesn't work?!) In these _normal_ cases, the > *biSizeImage* field of the BITMAPINFOHEADER structure gives me the correct > size of image in bytes. However, if I create a bitmap of non-2's-power > dimension such as 5x5, 6x6, 7x7, etc., the *biSizeImage* field gives me some > bogus number. For example, 5x5 yields 80 bytes (5x5x3 = 75 bytes); 6x6 > yields 120 bytes (6x6x3 = 108 bytes); 7x7 yields 168 bytes (7x7x3 = 147 > bytes). Could somebody tell me what happened? > > Thanks in advance, > > Pai-Hung Chen > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Ray G. <ra...@da...> - 2000-08-13 17:28:54
|
Sounds like you need to take the rowbytes-per-scanline into consideration. Ray Gardener Daylon Graphics Ltd. http://www.daylongraphics.com/products/leveller "heightfield modeling perfected" -----Original Message----- From: Pai-Hung Chen <pa...@ac...> To: gda...@li... <gda...@li...> Date: Sunday, August 13, 2000 10:10 AM Subject: [Algorithms] OT: How to read a bitmap of arbitrary size? >Hi, > >I have been frustratingly bugged for hours and I decide to resort to the >list. I want to read a 24-bit bitmap heightfield. I can correctly read it >as long as the dimension of the bitmap file is in 2's power such as 4x4, >8x8, 16x16, etc. (But 2x2 doesn't work?!) In these _normal_ cases, the >*biSizeImage* field of the BITMAPINFOHEADER structure gives me the correct >size of image in bytes. However, if I create a bitmap of non-2's-power >dimension such as 5x5, 6x6, 7x7, etc., the *biSizeImage* field gives me some >bogus number. For example, 5x5 yields 80 bytes (5x5x3 = 75 bytes); 6x6 >yields 120 bytes (6x6x3 = 108 bytes); 7x7 yields 168 bytes (7x7x3 = 147 >bytes). Could somebody tell me what happened? > >Thanks in advance, > >Pai-Hung Chen > > > >_______________________________________________ >GDAlgorithms-list mailing list >GDA...@li... >http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Pai-Hung C. <pa...@ac...> - 2000-08-13 17:03:14
|
Hi,

I have been frustratingly stuck for hours and have decided to resort to the list. I want to read a 24-bit bitmap heightfield. I can correctly read it as long as the dimension of the bitmap file is a power of two, such as 4x4, 8x8, 16x16, etc. (But 2x2 doesn't work?!) In these _normal_ cases, the *biSizeImage* field of the BITMAPINFOHEADER structure gives me the correct size of the image in bytes. However, if I create a bitmap of non-power-of-two dimension such as 5x5, 6x6, 7x7, etc., the *biSizeImage* field gives me some bogus number. For example, 5x5 yields 80 bytes (5x5x3 = 75 bytes); 6x6 yields 120 bytes (6x6x3 = 108 bytes); 7x7 yields 168 bytes (7x7x3 = 147 bytes). Could somebody tell me what happened?

Thanks in advance,

Pai-Hung Chen |
From: Vladimir K. <vka...@si...> - 2000-08-13 16:24:28
|
Hi,

How do you handle normals in skeletal animation? Currently I do it this way: after loading the model I take the first frame and calculate all normals. Then I use the inverted bone matrices to transform the normals into a 'pretransformed' state and store them the same way as the offset vectors (every vertex has a list of pre-normals for its affected bones). At rendering time, for every affected bone I transform these normals using the current bone matrix, multiply by the weight, sum, and normalize.

The result is something, hm.. :) In most cases it looks OK, but sometimes it looks as if some normals were calculated using the wrong bones. In any case, I think there should be a much better way to do this. Can anybody recommend something?

Thanks
vlad |
From: Pierre T. <p.t...@wa...> - 2000-08-13 05:22:32
|
> I think it just fits the definition of convex, like the ones given in
> appendix A of Van Der Bergen's paper.

Hmm, to me the definition of a convex function (not of a polytope) is that the second derivative has a constant sign. That way, to minimize the function we just have to ensure the first derivative is zero. For example, how would we *maximize* the function? ...in the exact same way, yep... The difference comes from the second derivative. There must be a very good reason in our case to assume there's no need to check for it, but I don't find it obvious. You must be right anyway: it must be related to the original definition of the function, but ... well, I just don't see why it is obvious :) Anyway this is a detail, really.

> I definitely agree that it is worth saving the dot products when running
> the sub algorithm once (because of the combinatorial nature), but I'm not
> sure that it's worth doing between calls to the subagorithm (although most
> of the points in the simplex do not change).

Oh... I mixed things up, right. I forgot there were some caches inside *and* outside the subalgorithm. Well, I'll save that for later; for the moment I haven't written a single line of code.

> That makes sense... I think I saw something similar in the gdc hardcore
> proceedings.

Is there a GDC paper about GJK?

> Mathematically, it make sense. It makes me wonder if there is any
> intuition about it, or they just make that observation about
> perpendicularity (is that a word?) for no particular reason.

Gilbert claims it's "geometrically obvious".... :) But I'm not that good, I can't even visualize the Minkowski difference of the two polytopes.

I have some trouble with the recursive formula used to compute the deltas. Where does it come from? I followed Gilbert's steps, in vain. There's always something wrong in the end. Did you try to re-derive it?

Pierre |
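For what it's worth, here is one way (my phrasing, not Gilbert's) to see the convexity being discussed. The function minimized in the subalgorithm is the squared distance from the origin to a point of the simplex, a squared norm of an affine combination, hence a quadratic form in the Gram matrix of the simplex points:

```latex
% y_i are the simplex points, \lambda_i barycentric coordinates with
% \sum_i \lambda_i = 1:
f(\lambda) \;=\; \Bigl\lVert \textstyle\sum_i \lambda_i \, y_i \Bigr\rVert^{2}
          \;=\; \lambda^{\mathsf{T}} G \lambda ,
\qquad G_{ij} \;=\; y_i \cdot y_j .
% G is a Gram matrix and therefore positive semidefinite, so f is convex:
% its Hessian 2G cannot change sign, every stationary point is a global
% minimum, and the first-order (orthogonality) conditions suffice --
% no second-derivative check is needed.
```

This also explains why the same recipe would *not* maximize f: the maximum of a convex function over the simplex sits at a vertex, not at a stationary point.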
From: Akbar A. <sye...@ea...> - 2000-08-13 04:15:06
|
>please do not send messages to me anymore ? jeeeeeeeesh, for me, reading steve's messages is like a break from all the hard coding and reading i have to deal with. hehe check the bottom of the list message if you want off the list. peace. akbar A. "We want technology for the sake of the story, not for its own sake. When you look back, say 10 years from now, current technology will seem quaint" Pixars' Edwin Catmull. -----Original Message----- From: gda...@li... [mailto:gda...@li...]On Behalf Of Vianet Sent: Thursday, August 10, 2000 11:00 AM To: gda...@li... Subject: RE: [Algorithms] FPS Questions please do not send messages to me anymore -----Original Message----- From: gda...@li... [mailto:gda...@li...]On Behalf Of Andrew Howe Sent: August 10, 2000 11:25 AM To: gda...@li... Subject: Re: [Algorithms] FPS Questions > staring at a bright white ZX Spectrum(*) screen while coding in basic that > (*) Home computer, contemporary (and clear superior, obviously) of the C64. > I think it was called a TRS80 on the other side of the pond? Fantastically > popular here in Britland. I think you might mean Timex Sinclair. The ZX-81 was a TS 1000 I think, but I don't know the number for the Spectrum. Andrew. _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Zhang Z. S. <Zha...@ub...> - 2000-08-13 03:34:23
|
I think this method will be very slow unless you have some good technique to determine which particular color has been drawn into the frame buffer. Have you tried the classical back-face removal algorithm? Can't this just remove some of the invisible faces?

For the papers, I think progressive meshes are good for network transmission. A progressive mesh makes a sequence of meshes from the original one: you start the transmission with the simplest version of the mesh, then send a little more data (some kind of forward-differential method) in each following frame to make it smoother and smoother. Any other good method in this field? I am a newbie too. :-)

-----Original Message-----
From: lor...@bb... [mailto:lor...@bb...]
Sent: Saturday, August 12, 2000 1:05 AM
To: gda...@li...
Subject: [Algorithms] Question about 3D mesh transmission

Sender: lordlee (Lord Lee)
Date: Sat Aug 12 01:04:45 2000
Subject: Question about 3D mesh transmission

Hello:

I have a question about 3D mesh transmission across the network. A 3D mesh will be transmitted from a server to a client for display. If the position of the camera is fixed for that mesh, we can first transmit the visible polygons to shorten the latency of network transmission. The method to determine those visible polygons is to pre-render the 3D mesh and assign each polygon a different color with flat shading. Then we have the index of visible polygons in the frame buffer. This can be done off line. After transmitting the visible polygons, we can go on to transmit the rest of the polygons to prevent cracks when moving the camera.

Does the method have any problem? Has any paper discussed this before?

_______________________________________________
GDAlgorithms-list mailing list
GDA...@li...
http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
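The ID-rendering trick in the quoted message depends on packing each polygon index into a unique flat colour. A sketch, assuming a 24-bit framebuffer with flat shading and no blending, dithering, or anti-aliasing (so the written colour reads back exactly):

```c
/* Pack a polygon index into a 24-bit RGB colour for the ID pre-render
   pass. Supports up to 2^24 polygons. */
static void index_to_rgb(unsigned idx, unsigned char rgb[3])
{
    rgb[0] = (unsigned char)( idx        & 0xFF);
    rgb[1] = (unsigned char)((idx >>  8) & 0xFF);
    rgb[2] = (unsigned char)((idx >> 16) & 0xFF);
}

/* Recover the polygon index from a framebuffer pixel. */
static unsigned rgb_to_index(const unsigned char rgb[3])
{
    return (unsigned)rgb[0]
         | ((unsigned)rgb[1] << 8)
         | ((unsigned)rgb[2] << 16);
}
```

After the off-line ID render, scanning the framebuffer and decoding each pixel yields exactly the set of visible polygon indices to send first.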
From: gl <gl...@nt...> - 2000-08-13 03:06:28
|
Hey - leave the Spec alone! Pick on a computer your own size. (i dunno, maybe a Cray or something)... I liked the loading feedback - always nice to see a good real-time visualisation of what goes on inside these strange boxes (you wouldn't believe how hard I tried to figure out how to change the two colours for my loaders ;) Of course nothing ever loaded when you wanted it to, usually failing at the last second... and the quickloaders, well, let's not talk about those (remember adjusting your tape head alignment endlessly)? Ah, memories... -- gl ----- Original Message ----- From: "Steve Baker" <st...@li...> To: <gda...@li...> Sent: Thursday, August 10, 2000 5:15 PM Subject: RE: [Algorithms] FPS Questions > On Thu, 10 Aug 2000, Tom Forsyth wrote: > > > Kim, you'll find that if you spend enough time in the pubs, the movies won't > > bother you any more. :-) > > It'll bother you - you just won't *remember* that it bothered you :-) > > > And yes, I think they can be trained. Maybe it's from years of sitting > > staring at a bright white ZX Spectrum(*) screen while coding in basic that > > I've become allergic to low frame rates. > > Your memory is leaky. The Spectrum was BLACK. You're thinking of the > ZX80 and ZX81. > > > (*) Home computer, contemporary (and clear superior, obviously) of the C64. > > I think it was called a TRS80 on the other side of the pond? Fantastically > > popular here in Britland. > > No! TRS-80 (Pronounced 'Trash-Eighty') was a Tandy/Radio-Shack machine > with a *real* keyboard, *real* display, etc, etc. V.Superior to anything > Sinclair produced. I owned three of them at one time! > > In the US, the ZX-80/81 was called "Timex 1000" or something like that, > I didn't think the Spectrubbish was ever marketted in the US. > > Hmmm - Sinclair computers - the memories... > > The wobbly RAM pack - the BlueTac to stop it sliding around > the desk - the 'dead flesh' feel to the so-called "keyboard" (on the > Spectrum). 
That the display was generated in software (on the ZX80) > - so the screen went blank whenever you ran a program or hit a key > or something. > > The adverts that said that you could control a nuclear power plant > with it. (That same advert showed the Spectrum displaying a Union > Jack - which it cannot do because it's graphics 'cells' could only > contain two colours per cell). > > The migrane-simulation it displayed as it loaded stuff from tape. > > The *requirement* that you use short-cut keys for all BASIC reserved > words - so their interpreter wouldn't have to actually parse the > source code! Having to use THREE shift keys to get to some of the > more "unusual" BASIC operators...unusual like '<' and '>' :-) > > <shudder> > > I remember the ZX80/ZX81 and Spectrum *very* well. Each was > truly a complete piece of shit - even by the low standards of > the time! > > You thought it was better than C64?!? I *don't* think so. > > Still - you could always upgrade to the Sinclair QL (Quick-Lashup) > with it's infamous 'microdrives'. (If you had two microdrives, > drive A couldn't read tapes written by drive B and vice-versa!) > > Clive Sinclair made an "Electric Car" too...can you say "Death Trap" ? > > :-) > > > Steve Baker (817)619-2657 (Vox/Vox-Mail) > L3Com/Link Simulation & Training (817)619-2466 (Fax) > Work: sj...@li... http://www.link.com > Home: sjb...@ai... http://web2.airmail.net/sjbaker1 > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: gl <gl...@nt...> - 2000-08-12 22:04:20
|
> P.S. John - we need to talk about that home page - that's going to get a > nomination from me on http://www.uglyinternet.com/ I'm afraid. Gotta be > cruel to be kind. Funky. Hey Tom, maybe you could do a VIPM 3D dancing mesh for yours... who cares about the art? Feel the technology! ;) -- gl |
From: Will P. <wi...@cs...> - 2000-08-12 21:42:29
|
> > provided have been simplified already from cramer's rule. I'm not seeing
> > why the new "closest point in simplex to origin" is perpendicular to the
> > affine combination of the points in simplex, so I've stopped at that point
>
> Maybe this can help:
> http://www.codercorner.com/gjk00.jpg
> http://www.codercorner.com/gjk01.jpg

You have nice handwriting, and it's much better than mine. :)

Mathematically, it makes sense. It makes me wonder if there is any intuition about it, or whether they just make that observation about perpendicularity (is that a word?) for no particular reason.

Will

----
Will Portnoy
http://www.cs.washington.edu/homes/will |
From: Will P. <wi...@cs...> - 2000-08-12 21:06:58
|
> paper. Take the expression of f (lamda^2, lamda^3, ..., lamda^r). Then, if
> you compute df / d^i you just find the expected orthogonality between the
> expression of the closest point and any (Pi - P0), assuming Pi are the
> original points. Well, I admit this is not a difficult one, but it sure is
> somewhat delicate.

I can see that derivation; it's somewhat like finding a Jacobian matrix to do unconstrained optimization. I haven't yet gotten to where they got the original expression f, though I understand their method for minimizing it.

> For example, Van Der Bergen in his GJK paper takes this
> orthogonality as granted, and at first it's very shocking.

That's the one I was thinking of: it came out of nowhere.

> While I'm at it : Gilbert wrote "Since f is convex...". How do we know that
> function is convex ?

I think it just fits the definition of convex, like the ones given in appendix A of Van Der Bergen's paper.

> Well, I suppose you could probably invert the (41) matrix with the standard
> inversion code of your matrix class without even using those nasty Delta
> notations ! But since the algorithm takes a combinatoric approach, it would
> need to invert up to 15 4x4 matrices (correct me if I'm wrong), most of the
> involved terms beeing recomputed many times. I suppose those optimisations
> are worth the pain - but once everything else works, of course.

I definitely agree that it is worth saving the dot products when running the subalgorithm once (because of the combinatorial nature), but I'm not sure that it's worth doing between calls to the subalgorithm (although most of the points in the simplex do not change).

> BTW I think the intuitive reason for limiting the number of vertices to 4,
> is that the closest point figure we're looking for can be something like:
> - point vs point (2 vertices)
> - edge vs point (3 vertices)
> - edge vs edge (4 vertices)
> - face vs point (4 vertices)

That makes sense... I think I saw something similar in the GDC hardcore proceedings.

Will |
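To ground the two-vertex case from the list above: the closest point to the origin on a segment falls out of zeroing the derivative of a convex quadratic and clamping, and the minimiser is perpendicular to the segment direction, which is exactly the orthogonality discussed in this thread. A sketch with my own names:

```c
typedef struct { double x, y, z; } V3;

static double dot(V3 u, V3 v) { return u.x*v.x + u.y*v.y + u.z*v.z; }

/* Closest point to the origin on the segment [a, b]: minimize
   |a + t(b - a)|^2 over t in [0, 1]. Because the objective is a convex
   quadratic in t, setting the derivative to zero (and clamping to the
   simplex) is sufficient -- no second-derivative check needed. */
static V3 closest_on_segment(V3 a, V3 b)
{
    V3 ab = { b.x - a.x, b.y - a.y, b.z - a.z };
    double denom = dot(ab, ab);
    double t = denom > 0.0 ? -dot(a, ab) / denom : 0.0;
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    V3 p = { a.x + t*ab.x, a.y + t*ab.y, a.z + t*ab.z };
    return p;
}
```

When the unclamped t is interior, dot(p, ab) = 0: the vector to the closest point is perpendicular to the segment, which is the "geometrically obvious" fact taken for granted in the papers.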
From: Tom F. <to...@mu...> - 2000-08-12 20:30:09
|
I wasn't thinking of artifacts from any holes - they'll be fine - but of the abrupt changes in lighting. I too started off cheating this way, but you have a T-junction in terms of lighting shades, even if the actual few tiny pixels in the T-junction are filled, and if your tesselation is not fine enough, the eye can spot these changes in lighting. Which is why I moved to fanning the edge tris - the lighting change is spread over the width of the fan (which is usually a decent number of pixels, i.e. not 1), and it is far less noticeable. Tom Forsyth - Muckyfoot bloke. Whizzing and pasting and pooting through the day. P.S. John - we need to talk about that home page - that's going to get a nomination from me on http://www.uglyinternet.com/ I'm afraid. Gotta be cruel to be kind. > -----Original Message----- > From: John Sensebe [mailto:jse...@ho...] > Sent: 12 August 2000 21:05 > To: gda...@li... > Subject: Re: [Algorithms] Bicubic normals for a bicubic world > > > Actually, I hadn't thought of simply rejecting whole patches that are > backfacing, and leaving the rest up to the driver... That may > be the wisest > solution. Thanks. > > I am "cheating" for different tesselation amounts right now, > and the results > are very good. With the algorithm I've written, the edge > match up almost > exactly, so the slim triangles are very slim indeed. They > simply cover tiny > holes that would show up due to floating-point inaccuracies. > > And hey, if your ranting gives me more insight into what I'm > doing, go right > ahead and rant! ;-) > > John Sensebe > jse...@ho... > Quantum mechanics is God's way of ensuring that we never > really know what's > going on. > > Check out http://members.home.com/jsensebe to see prophecies > for the coming > Millennium! > > > ----- Original Message ----- > From: "Tom Forsyth" <to...@mu...> > To: <gda...@li...> > Sent: Saturday, August 12, 2000 2:42 PM > Subject: RE: [Algorithms] Bicubic normals for a bicubic world > > > > Hmmmm... OK. 
Tricky. > > > > Incidentally, I would question whether you want to bother > with BFC of > > subpatches. I would go with the philosophy that since most > patches will be > > wholly rejected or wholly accepted, the borderline cases are few in > number. > > However, if you're doing a BFC calculation + test per > sub-patch, that > seems > > like a fair amount of work for _all_ visible patches, just > to save a bit > on > > borderline patches. > > > > My instincts would be to BFC a top-level patch, then set up your > Difference > > Engine to the required tesselation level and just draw the > whole thing, > > letting the hardware do tri-level BFC. > > > > The edges where different levels of tesselation meet > require a bit of > > thought. "Cheating" by simply stitching them together with > slim tris after > > drawing each patch at different levels is a possible > option, and very > quick > > (it's a second forward-difference engine, but only in one > direction, along > > the crease). But you tend to get lighting differences > between the two > > patches, which can be visible. > > > > You can also special-case the edges of the patch drawer so > that it fans > the > > edge tris to match adjacent patches, which looks quite > good. And it's > pretty > > good for speed, since you can do this by shrinking the full speed > > tesselation by one on each edge that needs higher > tesselation (fewer than > > half the edges of the scene will need extra fanning(*)), > then doing the > > fanned edges with special case (but exceeding similar-looking) code. > > > > Anyway, there's probably some very good reason why you're doing it > > recursively, so I've probably just been ranting. Sorry! > > > > Tom Forsyth - Muckyfoot bloke. > > Whizzing and pasting and pooting through the day. > > > > (*) Ah - almost true. True if you don't BFC the patches :-) > Close enough > for > > performance considerations though. 
|
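The "Difference Engine" in the quoted text refers to forward differencing. Here is a one-dimensional sketch for a single cubic (a bicubic patch evaluator runs one of these per row of the tessellation grid); the coefficient derivation and names are mine:

```c
/* Evaluate f(t) = a t^3 + b t^2 + c t + d at n+1 uniform steps over
   [0, 1] by forward differencing: three adds per sample after setup.
   For a cubic the third forward difference is constant. */
static void forward_diff_cubic(double a, double b, double c, double d,
                               int n, double *out)
{
    double h  = 1.0 / n;
    double f  = d;                              /* f(0)                    */
    double df = a*h*h*h + b*h*h + c*h;          /* first difference at 0   */
    double d2 = 6.0*a*h*h*h + 2.0*b*h*h;        /* second difference at 0  */
    double d3 = 6.0*a*h*h*h;                    /* third difference        */
    for (int i = 0; i <= n; ++i) {
        out[i] = f;
        f  += df;
        df += d2;
        d2 += d3;
    }
}
```

The per-step cost is what makes it attractive for tessellating whole patches at a fixed level, as opposed to recursive subdivision; the trade-off is that changing the tessellation level means redoing the setup.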