
From: James Johnson <James.Johnson@si...>  20020213 23:46:09

IIRC Numerical Recipes has funky licensing issues.

-----Original Message-----
From: Patrick M Lahey [mailto:Patrick.M.Lahey@...]
Sent: Wednesday, February 13, 2002 6:33 AM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Inaccuracies in solving the quadratic formula on computers...

Maybe I missed it but I'm surprised nobody has referred you here:

http://www.ulib.org/webRoot/Books/Numerical_Recipes/bookcpdf/c5-6.pdf

It explains the numerically stable way to solve a quadratic equation. In practice, it only really matters in extreme circumstances but maybe that is your case?

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188 
From: Chris Butcher (BUNGIE) <cbutcher@mi...>  20020213 22:25:34

> -----Original Message-----
> From: Tatsujin [mailto:tatsujin@...]
>
> I merely meant that e.g. the bezier tesselation isn't possible to do before hand, e.g. a preprocessing step, but must be done as the terrain data is loaded from disk and subsequently rendered. Because of the world being "huge" it isn't possible to fit in memory at once.
>
> I'm going with the tile approach, paging in/out terrain along the edges of the view as the viewpoint moves around.
>
> Although, I didn't quite grasp what you meant by that downsampling business, I've made some checking with the wavelet things. However, I know far too little about wavelets to understand if it was possible to use. Is it possible to predetermine the size of a wavelet compressed tile? Meaning by setting the required minimum frequency content to a certain level and you get X bytes as a result?

I see what you mean about preprocessing now... you want to minimize the footprint in stored format, so any preprocess which increases data size should be avoided.

Regarding downsampling the heightfield, I simply meant that you can analyze the wavelet compression of a given region to determine what resolution it needs to be rendered at (for example, you might detect that a 1024x1024 tile has very rolling hills, thus very small contributions from the high-frequency wavelets, and based on your error criterion could be represented well by 128x128 data). Then you can either downsample the high-resolution data using the filter of your choice and recompress, or obtain a fairly good approximation by just discarding the n least significant terms of the wavelet compressed data. Wavelet decompression algorithms are usually pretty fast as well.

--
Chris Butcher
AI Engineer - Halo
Bungie Studios
butcher@... 
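[The coefficient analysis Chris describes can be sketched with a single level of a 1D Haar transform. This is a minimal illustration under my own assumptions, not Bungie's code; real tiles are 2D and multi-level, but the energy test is the same idea.]

```cpp
#include <cstddef>
#include <vector>

// One level of a 1D Haar transform: averages land in the front half,
// detail (high-frequency) coefficients land in the back half.
static void haar_step(std::vector<float>& v)
{
    const std::size_t half = v.size() / 2;
    std::vector<float> out(v.size());
    for (std::size_t i = 0; i < half; ++i) {
        out[i]        = 0.5f * (v[2 * i] + v[2 * i + 1]); // average
        out[half + i] = 0.5f * (v[2 * i] - v[2 * i + 1]); // detail
    }
    v = out;
}

// Fraction of the signal energy carried by the detail coefficients.
// A row of "very rolling hills" scores near zero here, which is the
// cue that the tile could be stored or rendered at half resolution.
static float detail_energy_fraction(std::vector<float> row)
{
    haar_step(row);
    const std::size_t half = row.size() / 2;
    float total = 0.0f, detail = 0.0f;
    for (std::size_t i = 0; i < row.size(); ++i)    total  += row[i] * row[i];
    for (std::size_t i = half; i < row.size(); ++i) detail += row[i] * row[i];
    return total > 0.0f ? detail / total : 0.0f;
}
```

[Discarding the detail half outright is the "drop the n least significant terms" approximation: what remains is exactly the half-resolution tile.]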
From: <castanyo@ya...>  20020213 22:15:38

I actually use your second option. I think that's called a Sobel filter and is mainly used for edge detection; anyway, for bumps it looks way better.

Ignacio Castaño
castano@...

Evan Hart wrote:
> Actually, this brings up another good question of what is the best way to compute the normals from the bumpmap? I notice that you are only using 3 height map texels to compute a normal. I typically use 4 or 8. What are the tradeoffs of the different filters, and do they affect how it mipmaps?
>
> Here are your filter kernels:
>
> X:
> 0 -1 0
> 0  1 0
> 0  0 0
>
> Y:
> 0 0  0
> 0 1 -1
> 0 0  0
>
> The ones I typically use are:
>
> X:
> 0  1 0
> 0  0 0
> 0 -1 0
> or
>  1  2  1
>  0  0  0
> -1 -2 -1
>
> Y:
> 0 0  0
> 1 0 -1
> 0 0  0
> or
> 1 0 -1
> 2 0 -2
> 1 0 -1

-----Original Message-----
From: Tom Forsyth [mailto:tomf@...]
Sent: Wednesday, February 13, 2002 1:09 PM
To: Ignacio Castaño; Algorithms
Subject: RE: [Algorithms] normal map mipmaps

There is no "right" way as far as I know. It's a big research topic. If you find a solution, could you let the DX9 chaps know - it's incredibly important for displacement mapping :)

You're calculating the normal mipmaps wrong. Remember that lower levels not only have fewer pixels, but they are _further apart_. So on the top level you would do this:

Vec3 norm;
norm.x = height[x][y] - height[x][y+1];
norm.y = height[x][y] - height[x+1][y];
norm.z = z_scale;
norm = norm.Normalise();

But on the next level down, you need to do:

Vec3 norm;
norm.x = height[x][y] - height[x][y+1];
norm.y = height[x][y] - height[x+1][y];
norm.z = z_scale * 2.0f;
norm = norm.Normalise();

Otherwise you will get images that are far too crisp.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Ignacio Castaño [mailto:castanyo@...]
> Sent: 13 February 2002 17:57
> To: Algorithms
> Subject: [Algorithms] normal map mipmaps
>
> Hi,
> which is the way of correctly calculating normal map mipmaps? I've been doing that by turning the bumpmap into a normalmap, calculating the mipmaps and renormalizing them. That worked pretty well, but someone mentioned that instead of that you should calculate the mipmaps of the bumpmap and then turn them into normalmaps, and that doesn't look very good, the normal maps of the mipmap chain are very different, smaller mipmaps look sharper and that's very noticeable. Does somebody have any suggestion?
>
> Thanks in advance,
>
> Ignacio Castaño
> castano@... 
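[A concrete sketch of the Sobel option, written for this edit rather than taken from the thread; the archive stripped the minus signs from the posted kernels, so the sign convention below is an assumption.]

```cpp
#include <cmath>
#include <vector>

struct Vec3f { float x, y, z; };

// Build a tangent-space normal at texel (x, y) of a row-major height
// grid h[y][x] using 3x3 Sobel kernels, then normalise. z_scale plays
// the same role as in Tom's snippet: it controls apparent bump depth.
static Vec3f sobel_normal(const std::vector<std::vector<float>>& h,
                          int x, int y, float z_scale)
{
    // Horizontal gradient (Sobel X):   Vertical gradient (Sobel Y):
    //   -1 0 1                           -1 -2 -1
    //   -2 0 2                            0  0  0
    //   -1 0 1                            1  2  1
    float dx = -h[y-1][x-1] + h[y-1][x+1]
             - 2.0f * h[y][x-1] + 2.0f * h[y][x+1]
             - h[y+1][x-1] + h[y+1][x+1];
    float dy = -h[y-1][x-1] - 2.0f * h[y-1][x] - h[y-1][x+1]
             + h[y+1][x-1] + 2.0f * h[y+1][x] + h[y+1][x+1];
    Vec3f n { -dx, -dy, z_scale };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}
```

[Compared with a two-texel difference, the eight-texel taps average out the one-bit quantisation steps of an 8-bit bumpmap, which is plausibly why the Sobel version "looks way better" for bumps.]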
From: <castanyo@ya...>  20020213 22:12:30

hmm... yes, that makes a lot of sense.

Ignacio Castaño
castano@...

Tom Forsyth wrote:
> There is no "right" way as far as I know. It's a big research topic. If you find a solution, could you let the DX9 chaps know - it's incredibly important for displacement mapping :)
>
> You're calculating the normal mipmaps wrong. Remember that lower levels not only have fewer pixels, but they are _further apart_. So on the top level you would do this:
>
> Vec3 norm;
> norm.x = height[x][y] - height[x][y+1];
> norm.y = height[x][y] - height[x+1][y];
> norm.z = z_scale;
> norm = norm.Normalise();
>
> But on the next level down, you need to do:
>
> Vec3 norm;
> norm.x = height[x][y] - height[x][y+1];
> norm.y = height[x][y] - height[x+1][y];
> norm.z = z_scale * 2.0f;
> norm = norm.Normalise();
>
> Otherwise you will get images that are far too crisp. 
From: Joe Ante <joeante@li...>  20020213 20:24:33

> Just to make sure I'm not doing anything daft here's the code I'm using to do moving sphere vertex collision. Well it's one variation of it anyway - the dumb naive way that produces the most errors that is:

Isn't a vertex vs. moving sphere intersection the same as a line-sphere intersection? Where the sphere is centred on the vertex and the line is the one along which the swept sphere is moving. You can find robust code for line-sphere intersection all over the place (e.g. http://www.magicsoftware.com/)

Joe Ante 
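[Joe's reduction can be written out directly. This is a hypothetical sketch, not the robust Magic Software code he links to, and the names are mine: substituting the sweep p(t) = p0 + t*d into |p(t) - v|^2 = r^2 gives a quadratic in t.]

```cpp
#include <cmath>

struct V3 { float x, y, z; };

static float dot(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// A sphere of radius r sweeping from p0 along d hits vertex v exactly
// when the segment p0 -> p0 + d hits a static sphere of radius r
// centred on v. Returns the earliest hit time in [0,1], or -1 on miss.
static float sweep_sphere_vs_vertex(V3 p0, V3 d, float r, V3 v)
{
    V3 m { p0.x - v.x, p0.y - v.y, p0.z - v.z };   // start relative to v
    float a = dot(d, d);
    float b = 2.0f * dot(m, d);
    float c = dot(m, m) - r * r;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f || a <= 0.0f) return -1.0f;    // misses, or no motion
    float t = (-b - std::sqrt(disc)) / (2.0f * a); // earlier of the two roots
    return (t >= 0.0f && t <= 1.0f) ? t : -1.0f;
}
```

[The subtraction in (-b - sqrt(disc)) is exactly where the precision complaints elsewhere in this thread come from; the numerically stable Numerical Recipes formulation applies here unchanged when a or c can get small.]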
From: Jamie Fowlston <jamief@qu...>  20020213 20:23:02

that's not an epsilon you can get away with in accurate simulation physics, in my experience (non-FPS)

Jamie

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Gribb, Gil
Sent: 13 February 2002 19:08
To: 'christer_ericson@...'; gdalgorithms-list@...
Subject: RE: [Algorithms] Inaccuracies in solving the quadratic formula on computers...

Interesting.

> I think the distinction isn't really between floats and fixed point numbers per se, but between working with tolerances and working with exact arithmetic.
>
> While it is possible to work exactly with floats, it arguably comes rather natural to instead work inexactly, adding epsilons as you go. With fixed-point or integer-based approaches the opposite is arguably true: it is easier to make sure your arithmetic is exact and that you're not losing any bits at any time.

But you are losing bits with fixed point. Does someone have a non-trivial collision example that uses fixed point complete with proof of the guarantees?

> If you're using exact arithmetic you can be certain that, say, a polyline representing the movement of your object will, as a collision detection test against your polygonal world, _guarantee_ that the object can never fall out of the world. If you're working with tolerances, you have no such guarantee unless you've very carefully chosen your epsilons with respect to your domain.

You don't have to choose them that carefully. Domain = first person shooter, epsilon on the order of a quarter inch.

Gil

> This is why I find fixed-point is advantageous over floats in some situations, i.e. it better facilitates implementing exact arithmetic. But you're of course entitled to disagree.
>
> Christer Ericson
> Sony Computer Entertainment, Santa Monica 
From: Tom Forsyth <tomf@mu...>  20020213 20:18:28

Oh, yes - bigger kernels are better. I was just throwing that one into code because it's quick to type :) It's not actually very good because it gives an off-by-half error. I usually use this one:

> 0  1 0
> 0  0 0
> 0 -1 0

...and the corresponding one. I have thought about using other kernels, because this one is a bit harsh, and is very sensitive to the fact that there are usually only 8 bits of precision in the bumpmap, but never got around to experimenting with any. I might give your other two a go - they look promising.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Evan Hart [mailto:ehart@...]
> Sent: 13 February 2002 20:07
> To: Algorithms
> Subject: RE: [Algorithms] normal map mipmaps
>
> Actually, this brings up another good question of what is the best way to compute the normals from the bumpmap?
>
> I notice that you are only using 3 height map texels to compute a normal. I typically use 4 or 8. What are the tradeoffs of the different filters, and do they affect how it mipmaps?
>
> Here are your filter kernels:
>
> X:
> 0 -1 0
> 0  1 0
> 0  0 0
>
> Y:
> 0 0  0
> 0 1 -1
> 0 0  0
>
> The ones I typically use are:
>
> X:
> 0  1 0
> 0  0 0
> 0 -1 0
> or
>  1  2  1
>  0  0  0
> -1 -2 -1
>
> Y:
> 0 0  0
> 1 0 -1
> 0 0  0
> or
> 1 0 -1
> 2 0 -2
> 1 0 -1
>
> -----Original Message-----
> From: Tom Forsyth [mailto:tomf@...]
> Sent: Wednesday, February 13, 2002 1:09 PM
> To: Ignacio Castaño; Algorithms
> Subject: RE: [Algorithms] normal map mipmaps
>
> There is no "right" way as far as I know. It's a big research topic. If you find a solution, could you let the DX9 chaps know - it's incredibly important for displacement mapping :)
>
> You're calculating the normal mipmaps wrong. Remember that lower levels not only have fewer pixels, but they are _further apart_. So on the top level you would do this:
>
> Vec3 norm;
> norm.x = height[x][y] - height[x][y+1];
> norm.y = height[x][y] - height[x+1][y];
> norm.z = z_scale;
> norm = norm.Normalise();
>
> But on the next level down, you need to do:
>
> Vec3 norm;
> norm.x = height[x][y] - height[x][y+1];
> norm.y = height[x][y] - height[x+1][y];
> norm.z = z_scale * 2.0f;
> norm = norm.Normalise();
>
> Otherwise you will get images that are far too crisp.
>
> > -----Original Message-----
> > From: Ignacio Castaño [mailto:castanyo@...]
> > Sent: 13 February 2002 17:57
> > To: Algorithms
> > Subject: [Algorithms] normal map mipmaps
> >
> > Hi,
> > which is the way of correctly calculating normal map mipmaps? I've been doing that by turning the bumpmap into a normalmap, calculating the mipmaps and renormalizing them. That worked pretty well, but someone mentioned that instead of that you should calculate the mipmaps of the bumpmap and then turn them into normalmaps, and that doesn't look very good, the normal maps of the mipmap chain are very different, smaller mipmaps look sharper and that's very noticeable. Does somebody have any suggestion?
> >
> > Thanks in advance,
> >
> > Ignacio Castaño
> > castano@... 
From: Evan Hart <ehart@at...>  20020213 20:07:46

Actually, this brings up another good question of what is the best way to compute the normals from the bumpmap?

I notice that you are only using 3 height map texels to compute a normal. I typically use 4 or 8. What are the tradeoffs of the different filters, and do they affect how it mipmaps?

Here are your filter kernels:

X:
0 -1 0
0  1 0
0  0 0

Y:
0 0  0
0 1 -1
0 0  0

The ones I typically use are:

X:
0  1 0
0  0 0
0 -1 0
or
 1  2  1
 0  0  0
-1 -2 -1

Y:
0 0  0
1 0 -1
0 0  0
or
1 0 -1
2 0 -2
1 0 -1

-----Original Message-----
From: Tom Forsyth [mailto:tomf@...]
Sent: Wednesday, February 13, 2002 1:09 PM
To: Ignacio Castaño; Algorithms
Subject: RE: [Algorithms] normal map mipmaps

There is no "right" way as far as I know. It's a big research topic. If you find a solution, could you let the DX9 chaps know - it's incredibly important for displacement mapping :)

You're calculating the normal mipmaps wrong. Remember that lower levels not only have fewer pixels, but they are _further apart_. So on the top level you would do this:

Vec3 norm;
norm.x = height[x][y] - height[x][y+1];
norm.y = height[x][y] - height[x+1][y];
norm.z = z_scale;
norm = norm.Normalise();

But on the next level down, you need to do:

Vec3 norm;
norm.x = height[x][y] - height[x][y+1];
norm.y = height[x][y] - height[x+1][y];
norm.z = z_scale * 2.0f;
norm = norm.Normalise();

Otherwise you will get images that are far too crisp.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Ignacio Castaño [mailto:castanyo@...]
> Sent: 13 February 2002 17:57
> To: Algorithms
> Subject: [Algorithms] normal map mipmaps
>
> Hi,
> which is the way of correctly calculating normal map mipmaps? I've been doing that by turning the bumpmap into a normalmap, calculating the mipmaps and renormalizing them. That worked pretty well, but someone mentioned that instead of that you should calculate the mipmaps of the bumpmap and then turn them into normalmaps, and that doesn't look very good, the normal maps of the mipmap chain are very different, smaller mipmaps look sharper and that's very noticeable. Does somebody have any suggestion?
>
> Thanks in advance,
>
> Ignacio Castaño
> castano@... 
From: Mike Shaver <shaver@mo...>  20020213 19:29:16

Duncan Hewat wrote: > Someone should write a *good* > comprehensive, 'whole' book on the subject that fully explores different > methods and the various fudges/fixes and trade offs that can be applied > (and not skimp on the details or implementations and include a detailed > error/precision analysis...)... I'd buy it :) ... Have you looked at "Physics for Game Developers"? http://www.amazon.com/exec/obidos/ASIN/0596000065/ I've got a copy, but haven't got very far into it yet. So far, I'm pretty happy, though I'm no Chris Hecker. Mike 
From: Tatsujin <tatsujin@de...>  20020213 19:25:24

Chris Butcher (BUNGIE) wrote: >> Because the world is supposed to be as large as possible, any precomputations >> cannot be used, like tesselation of bezier patches. It has to be >> done in realtime. >> > I didn't really understand this caveat that you placed on possible > algorithms. But I've had success in the past [in a realtime app that > wasn't a game] with downsampling adaptively into a quadtree of blocks, > and then storing those blocks on disk in a lossy waveletcompressed > format. This can give you a multipleresolution heightmap very easily > but doesn't handle dynamic LOD at all. I merely meant that e.g. the bezier tesselation isn't possible to do before hand, e.g. a preprocessing step, but must be done as the terrain data is loaded from disk and subsequently rendered. Because of the world being "huge" it isn't possible to fit in memory at once. I'm going with the tile approach, paging in/out terrain along the edges of the view as the viewpoint moves around. Although, I didn't quite grasp what you meant by that downsampling business, I've made some checking with the wavelet things. However, I know far too little about wavelets to understand if it was possible to use. Is it possible to predetermine the size of a wavelet compressed tile? Meaning by setting the required minimum frequency content to a certain level and you get X bytes as a result? /Andre J 
From: Willmott, Andrew <AWillmott@ma...>  20020213 19:12:18

> you've missed my point. the absolute accuracy of floats _changes_. the absolute accuracy of fixed point does _not_. as such, the problems with fixed points are universal, and things don't change as you move around the world.

Isn't the main advantage of fixed point then simply that it *forces* you to keep your numbers in a "good" range? =) You might as well use asserts...

One other drawback of fixed point for those of us on PCs: MSVCrap's buggy-as-hell codegen for their 64-bit int type. (Turning on the optimizer only leads to grief.)

Cheers,
Andrew 
From: Tom Forsyth <tomf@mu...>  20020213 19:08:05

In both OpenGL and D3D, I would simply use the card's own texture management. The card's driver knows far more about the hardware than you do, what is fast and what is not, and it will usually beat you quite easily. Plus, it's by far the simplest and easiest option.

Even in D3D on really old cards that don't have driver-based management, the D3D layer will do management for you, and it's pretty good. It also generally knows more about the hardware than you do.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Odin Jensen [mailto:odin@...]
> Sent: 13 February 2002 18:24
> To: 'Algorithms'
> Subject: RE: [Algorithms] Good 'ole texture managent. Still a valid option? pros/cons wanted
>
> Ok. So, the old style of creating texture pages of say, 256x256 until running out of memory, and then uploading smaller rects from sysmem when the need arises is slow? I thought it would be faster uploading a smaller rect to a texture. (This is the good old style) It also seems cool that you can have a fixed amount of video memory in use, and it's scalable if people have more video memory.
>
> Packing smaller textures into larger, is always a good call but it requires some knowledge of the scene. You have to group textures for surfaces close to each other. A bit more difficult, but could lead to some good speed improvements.
>
> I just have to let go of the "my engine should be general and able to handle all cases" thought I guess :)
>
> Odin 
From: Gribb, Gil <ggribb@ra...>  20020213 19:08:02

Interesting.

> I think the distinction isn't really between floats and fixed point numbers per se, but between working with tolerances and working with exact arithmetic.
>
> While it is possible to work exactly with floats, it arguably comes rather natural to instead work inexactly, adding epsilons as you go. With fixed-point or integer-based approaches the opposite is arguably true: it is easier to make sure your arithmetic is exact and that you're not losing any bits at any time.

But you are losing bits with fixed point. Does someone have a non-trivial collision example that uses fixed point complete with proof of the guarantees?

> If you're using exact arithmetic you can be certain that, say, a polyline representing the movement of your object will, as a collision detection test against your polygonal world, _guarantee_ that the object can never fall out of the world. If you're working with tolerances, you have no such guarantee unless you've very carefully chosen your epsilons with respect to your domain.

You don't have to choose them that carefully. Domain = first person shooter, epsilon on the order of a quarter inch.

Gil

> This is why I find fixed-point is advantageous over floats in some situations, i.e. it better facilitates implementing exact arithmetic. But you're of course entitled to disagree.
>
> Christer Ericson
> Sony Computer Entertainment, Santa Monica 
From: Bretton Wade <brettonw@mi...>  20020213 19:03:41

Duncan posted his code for solving the quadratic equation. This looks like he is trying to solve the form of the equation (-b + Sqrt(b^2 - 4ac))/2a, in other words the classic form. This method is well known for having numerical precision problems when a and/or c are small. I refer you to the section on solving Quadratic and Cubic equations in Numerical Recipes for the proper way to work around this difficulty.

http://libwww.lanl.gov/numerical/bookcpdf/c5-6.pdf

Bretton Wade (aka Noz Moe King) in Bellevue, WA 
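[The workaround that Numerical Recipes section describes can be sketched as follows; this is my sketch, not Duncan's code, and it assumes a != 0 and real roots. Compute q with the sign of b, then take the roots as q/a and c/q, so two nearly equal quantities are never subtracted.]

```cpp
#include <cmath>

// Stable solve of a*x^2 + b*x + c = 0 (assumes a != 0, real roots, and
// q != 0, i.e. b and c not both zero). The classic (-b ± sqrt(disc))/2a
// form loses nearly all digits of the small root when b^2 >> 4ac;
// forming q with the sign of b sidesteps the catastrophic cancellation.
static bool solve_quadratic(double a, double b, double c,
                            double& x1, double& x2)
{
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return false;   // complex roots
    double q = -0.5 * (b + (b >= 0.0 ? 1.0 : -1.0) * std::sqrt(disc));
    x1 = q / a;   // the large-magnitude root
    x2 = c / q;   // the small-magnitude root, recovered via Vieta's formula
    return true;
}
```

[With a = 1, b = 1e8, c = 1 the classic form returns the small root as exactly 0; the form above returns it accurate to full double precision.]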
From: <christer_ericson@pl...>  20020213 18:56:33

Gil Gribb wrote:
> If an algorithm is unstable with floats, it is probably unstable with fixed point or doubles too, but just harder to reproduce the bugs. With a little care floats are sufficient for all game collision needs*.
>
> So, I think by talking about fixed-point, the real issues are being missed.

I think the distinction isn't really between floats and fixed point numbers per se, but between working with tolerances and working with exact arithmetic.

While it is possible to work exactly with floats, it arguably comes rather natural to instead work inexactly, adding epsilons as you go. With fixed-point or integer-based approaches the opposite is arguably true: it is easier to make sure your arithmetic is exact and that you're not losing any bits at any time.

If you're using exact arithmetic you can be certain that, say, a polyline representing the movement of your object will, as a collision detection test against your polygonal world, _guarantee_ that the object can never fall out of the world. If you're working with tolerances, you have no such guarantee unless you've very carefully chosen your epsilons with respect to your domain.

This is why I find fixed-point is advantageous over floats in some situations, i.e. it better facilitates implementing exact arithmetic. But you're of course entitled to disagree.

Christer Ericson
Sony Computer Entertainment, Santa Monica 
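[The uniform-absolute-precision point can be seen in a few lines; the 16.16 encoding below is a toy of my own choosing, not anything from the post. Fixed point's resolution is the same everywhere in the world, so adding the smallest step always changes the value, while a float silently absorbs small steps once the coordinate gets large.]

```cpp
#include <cstdint>

using fixed = int32_t;  // 16.16 fixed point: 16 integer bits, 16 fractional

constexpr fixed  to_fixed(double v)  { return (fixed)(v * 65536.0); }
constexpr double to_double(fixed v)  { return v / 65536.0; }

// Exact for in-range operands: integer addition never rounds, so
// "move by delta, move back by delta" is bit-identical to the start.
constexpr fixed advance(fixed pos, fixed delta) { return pos + delta; }
```

[At a coordinate of 30000 units a float's spacing is about 0.002, so a step of 1/65536 vanishes entirely; the fixed-point position always registers it, which is what makes exactness guarantees tractable to prove.]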
From: Jon Watte <hplus@mi...>  20020213 18:26:12

But the input variables have already lost their precision before you translate them. Or, alternatively, they will lose the precision when you translate back to actually do something about it. It's very easy to run into oscillation that way. Cheers, / h+ > Shouldn't it be possible to translate all your variables near the place > where the collisions occurs, to have maximum precision on one side, and > easytouseness and performances of floats ? 
From: Odin Jensen <odin@ps...>  20020213 18:23:52

Ok. So, the old style of creating texture pages of say, 256x256 until running out of memory, and then uploading smaller rects from sysmem when the need arises is slow? I thought it would be faster uploading a smaller rect to a texture. (This is the good old style) It also seems cool that you can have a fixed amount of video memory in use, and it's scalable if people have more video memory.

Packing smaller textures into larger, is always a good call but it requires some knowledge of the scene. You have to group textures for surfaces close to each other. A bit more difficult, but could lead to some good speed improvements.

I just have to let go of the "my engine should be general and able to handle all cases" thought I guess :)

Odin 
From: Chris Butcher (BUNGIE) <cbutcher@mi...>  20020213 18:19:02

> -----Original Message-----
> From: Tatsujin [mailto:tatsujin@...]
>
> While we're on the subject of height maps, I have a little query.
>
> What is the best storage method of a terrain height map to get the largest possible world in the least disk space, in a realtime application?
> [snip]
>
> Because the world is supposed to be as large as possible, any precomputations cannot be used, like tesselation of bezier patches. It has to be done in realtime.

I didn't really understand this caveat that you placed on possible algorithms. But I've had success in the past [in a realtime app that wasn't a game] with downsampling adaptively into a quadtree of blocks, and then storing those blocks on disk in a lossy wavelet-compressed format. This can give you a multiple-resolution heightmap very easily but doesn't handle dynamic LOD at all.

--
Chris Butcher
AI Engineer - Halo
Bungie Studios
butcher@... 
From: Tom Forsyth <tomf@mu...>  20020213 18:12:34

There is no "right" way as far as I know. It's a big research topic. If you find a solution, could you let the DX9 chaps know - it's incredibly important for displacement mapping :)

You're calculating the normal mipmaps wrong. Remember that lower levels not only have fewer pixels, but they are _further apart_. So on the top level you would do this:

Vec3 norm;
norm.x = height[x][y] - height[x][y+1];
norm.y = height[x][y] - height[x+1][y];
norm.z = z_scale;
norm = norm.Normalise();

But on the next level down, you need to do:

Vec3 norm;
norm.x = height[x][y] - height[x][y+1];
norm.y = height[x][y] - height[x+1][y];
norm.z = z_scale * 2.0f;
norm = norm.Normalise();

Otherwise you will get images that are far too crisp.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination,
and does not in any way imply existence of the author.

> -----Original Message-----
> From: Ignacio Castaño [mailto:castanyo@...]
> Sent: 13 February 2002 17:57
> To: Algorithms
> Subject: [Algorithms] normal map mipmaps
>
> Hi,
> which is the way of correctly calculating normal map mipmaps? I've been
> doing that by turning the bumpmap into a normalmap, calculating the
> mipmaps and renormalizing them. That worked pretty well, but someone
> mentioned that instead of that you should calculate the mipmaps of the
> bumpmap and then turn them into normalmaps, and that doesn't look very
> good; the normal maps of the mipmap chain are very different, smaller
> mipmaps look sharper and that's very noticeable. Does somebody have any
> suggestion?
>
> Thanks in advance,
>
> Ignacio Castaño
> castano@...
>
> _________________________________________________________
> Do You Yahoo!?
> Get your free @yahoo.com address at http://mail.yahoo.com
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_id=6188
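Tom's point about scaling z_scale per level generalises: texels at mip level n are 2^n source texels apart, so z_scale should be multiplied by 2^n before normalising. A minimal sketch of that idea - the Vec3 type and function names are illustrative, not from anyone's actual code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Normalise(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Normal from a height field at a given mip level. Texels at level n are
// 2^n source texels apart, so the effective height-to-width ratio shrinks:
// scale z_scale by 2^n to compensate (z_scale, 2*z_scale, 4*z_scale, ...).
Vec3 MipNormal(const std::vector<std::vector<float>>& height,
               int x, int y, float z_scale, int level) {
    Vec3 n;
    n.x = height[x][y] - height[x][y + 1];
    n.y = height[x][y] - height[x + 1][y];
    n.z = z_scale * float(1 << level);
    return Normalise(n);
}
```

Note that in real use each level would index its own downsampled heightmap; the scaling of the z term is the part this sketch is meant to show - the coarser the level, the closer the normals sit to straight up, which is exactly the "less crisp" look Tom describes.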
From: Tom Forsyth <tomf@mu...> - 2002-02-13 17:59:34

-----Original Message-----
From: Odin Jensen [mailto:odin@...]

> After a lengthy discussion about world segment streaming,
> packing of small textures etc. for our upcoming texture
> manager(tm) I would like to ask a few questions.
>
> Does anyone still use this rather old technique?

Indeed - all the time.

> What about different formats?

Different textures, unless you're on something like a PS2 that doesn't care.

> Any good with large textures? (how large can pages be,
> before on-card cache issues occur)

If the texture is swizzled, the cache doesn't care how big the surrounding texture is, only how big the bit you are using is. However, you can get speed hits if you pack unrelated textures together in a large texture, because you upload the whole thing, then only use a portion of it.

> Are the different subrect uploads faster than the
> individual texture stage changes it would take?

Ah, if you're talking about having a few large textures on the card, and streaming lots of small ones through them, we don't use that. We pack our textures into large ones offline, and treat the large one as a homogeneous whole. Doing subrect uploads is going to start hitting slow paths in code.

[snip]

> Thanks
>
> Odin

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination,
and does not in any way imply existence of the author.
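The offline packing Tom describes implies a UV remap step at asset-build time: UVs authored against the small stand-alone texture get rebased into the sub-rect it occupies in the big page, after which the page really can be treated as one homogeneous texture. A minimal sketch - the struct layout and names are illustrative:

```cpp
#include <cassert>

// A packed sub-texture's placement inside a larger atlas page.
struct AtlasEntry {
    float u0, v0;          // top-left corner of the sub-rect, in [0,1] page space
    float uScale, vScale;  // sub-rect size relative to the page
};

// Remap a UV authored against the small stand-alone texture into the
// packed page, so no per-texture state changes are needed at draw time.
void RemapUV(const AtlasEntry& e, float u, float v, float& outU, float& outV) {
    outU = e.u0 + u * e.uScale;
    outV = e.v0 + v * e.vScale;
}
```

One design consequence worth remembering: bilinear filtering and mip levels can bleed across sub-rect borders, so packers typically pad each sub-texture with a border of duplicated edge texels.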
From: <castanyo@ya...> - 2002-02-13 17:49:42

Hi,
which is the way of correctly calculating normal map mipmaps? I've been doing that by turning the bumpmap into a normalmap, calculating the mipmaps and renormalizing them. That worked pretty well, but someone mentioned that instead of that you should calculate the mipmaps of the bumpmap and then turn them into normalmaps, and that doesn't look very good; the normal maps of the mipmap chain are very different, smaller mipmaps look sharper and that's very noticeable. Does somebody have any suggestion?

Thanks in advance,

Ignacio Castaño
castano@...

_________________________________________________________
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com
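For reference, the "filter the normals, then renormalise" approach Ignacio describes boils down to box-filtering each 2x2 block of unit normals and renormalising the sum. A sketch - the Vec3 type is illustrative:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Normalise(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// One texel of the next mip level: average a 2x2 block of unit normals,
// then renormalise so the result is a unit vector again. (Summing and
// normalising is equivalent to averaging and normalising.)
Vec3 DownsampleNormal(Vec3 a, Vec3 b, Vec3 c, Vec3 d) {
    Vec3 s { a.x + b.x + c.x + d.x,
             a.y + b.y + c.y + d.y,
             a.z + b.z + c.z + d.z };
    return Normalise(s);
}
```

Note how divergent normals partially cancel in the sum, pulling the renormalised result towards the average direction - which is why this route looks smoother than regenerating normals from a mipmapped bumpmap without rescaling z.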
From: Roman Weinberger <roman.weinberger@kf...> - 2002-02-13 17:36:00

Maybe you could create a temporary file that stores the height information and access it at runtime via memory mapping (could be Win32 only, not sure...)

----- Original Message -----
From: "Johan Hammes" <jhammes@...>
To: "Tatsujin" <tatsujin@...>; "GDAlgorithms" <gdalgorithms-list@...>
Sent: Wednesday, February 13, 2002 11:32 AM
Subject: Re: [despammed] RE: [Algorithms] Height Maps

> For one, I divide my terrain into a set of tiles, and for each only store
> to a level that is appropriate for the amount of information in that tile.
> Flat areas - low res, mountains - high res.
> I work with real data sets, currently a 200x200 km area at Cape Town at
> 64-meter resolution (40 Meg of floating point values); it easily reduces
> to about 3 Meg with almost no visible degradation. (I admit, half of the
> area is sea...)
>
> Johan
>
> > While we're on the subject of height maps, I have a little query.
> >
> > What is the best storage method for a terrain height map to get the
> > largest possible world in the least disk space, in a realtime
> > application?
> > I suppose procedural terrain (e.g. Perlin noise) takes the least space
> > of all, but then you have almost no control over how the terrain will
> > look, or?
> > Then we have bezier patches; while the storage goes down considerably
> > while still retaining some control over the terrain shape, are they
> > usable in realtime? And how do you solve the problems of continuity
> > across patches, as I can imagine arise?
> >
> > Because the world is supposed to be as large as possible, any
> > precomputations cannot be used, like tessellation of bezier patches.
> > It has to be done in realtime.
> >
> > What other storage methods are there?
> > Maybe some kind of hybrid is possible as well?
> >
> > /Andre
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_id=6188
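Roman's suggestion, sketched here with POSIX mmap rather than the Win32 CreateFileMapping/MapViewOfFile pair he alludes to (that substitution is my assumption; the function name and parameters are illustrative). The OS then pages heights in on demand instead of the game loading the whole file:

```cpp
#include <cassert>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Map a raw little-endian float heightmap file into the address space so
// individual heights can be read at runtime without loading the whole file.
// Returns nullptr on failure. Caller unmaps with munmap() when done.
const float* MapHeightFile(const char* path, size_t count) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return nullptr;
    void* p = mmap(nullptr, count * sizeof(float), PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);  // the mapping stays valid after the descriptor is closed
    return p == MAP_FAILED ? nullptr : static_cast<const float*>(p);
}
```

Combined with Johan's per-tile resolution scheme, each tile could be a separate region of the mapped file, so flat sea tiles never touch physical memory at all.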
From: <carl.adahl@ce...> - 2002-02-13 17:33:24

[Tom Forsyth]
> I wonder if there might be a way to use the ROAM sequence of triangle
> meshing and splitting, but using VIPM to render them. The slowest bit
> about the VIPM preprocessing is using something like the Quadric Error
> Metric to decide which edges to collapse in what order.

I did just that, and it worked pretty well considering the non-optimal triangulation. Each LOD step has on average a larger number of new triangles added (and fixed) than straight VIPM, but it wasn't too bloody. Never did try realtime modification because of other reasons. For next time I'll probably use the regular QEM/edge collapse method instead, if the data is static enough.

/ Carl Ådahl
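Whichever process produces the collapse ordering - offline QEM, or a ROAM-style split sequence as Carl describes - applying a VIPM LOD comes down to remapping triangle indices through the first N collapses. A deliberately unoptimised sketch of that remap (real implementations precompute the remapped index buffers per level rather than chasing chains at runtime; the names are illustrative):

```cpp
#include <cassert>
#include <vector>

// One precomputed edge collapse: vertex 'from' merges into vertex 'to'.
// A LOD is selected by applying the first N collapses in order.
struct Collapse { int from, to; };

// Remap a triangle index through the first 'applied' collapses,
// following chains (a collapsed-to vertex may itself collapse later).
int ResolveIndex(const std::vector<Collapse>& collapses, int applied, int idx) {
    for (int i = 0; i < applied; ++i)
        if (collapses[i].from == idx) idx = collapses[i].to;
    return idx;
}
```

Triangles whose three resolved indices are no longer distinct have degenerated and get dropped from that level's index list - which is where Carl's "new triangles added (and fixed)" bookkeeping comes in.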
From: Odin Jensen <odin@ps...> - 2002-02-13 17:11:47

After a lengthy discussion about world segment streaming, packing of small textures etc. for our upcoming texture manager(tm), I would like to ask a few questions.

Does anyone still use this rather old technique?
What about different formats?
Any good with large textures? (how large can pages be, before on-card cache issues occur)
Are the different subrect uploads faster than the individual texture stage changes it would take?

Well, there are probably more questions, but this is what I can remember right now :)

I can imagine your world segments could prebuild system memory texture pages, and replace entire texture pages in one pass on segment change. Textures for dynamic objects could always be kept in video memory, since you don't know when they might be used. And a lot of other smart features seem possible.

This seems like a damn good idea to me. Anyone got any input/field tests, or do you think I'm crazy for even considering this? Please write to the list.

Thanks

Odin
From: Tom Forsyth <tomf@mu...> - 2002-02-13 16:08:30

I saw it in an ATI demo way before the nVidia one. Jason - did you chaps think it up yourselves?

Coincidentally, I've just been thinking about where I first saw the vanilla implementation of VIPM. It wasn't Jan Svarovsky - he heard it from someone else. And it wasn't Charles Bloom - I didn't meet him till afterwards. Of course, Jan could have heard it from Charles :) Anyone know where that implementation came from? I don't think it was the one Hoppe used in his paper.

(attention - this thread is being hijacked - nobody move, and nobody gets hurt)

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination,
and does not in any way imply existence of the author.

> -----Original Message-----
> From: Adam Moravanszky [mailto:amoravanszky@...]
> Sent: 13 February 2002 15:56
> To: gdalgorithms-list@...
> Subject: [Algorithms] Shadow Volumes in Vertex Shader source
>
> Hi.
>
> Can anyone point out to me the clever fellow who first thought up a way
> to extrude shadow volume geometry in a vertex shader? I would like to
> credit him* in a paper of mine. I first saw the trick in an nVidia demo,
> but it didn't credit an author. I don't know if this result was ever
> published in a more wordy form than the demo, but if so, a reference to
> that wouldn't hurt either.
>
> Thanks,
>
> Adam
> --
> -- Adam Moravanszky
> http://n.ethz.ch/student/adammo/
>
> (* chauvinistic assumption)
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_id=6188
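For readers who haven't seen the trick Adam is asking about: the usual vertex shader version pushes vertices whose normals face away from the light out along the light-to-vertex direction, leaving light-facing vertices in place; degenerate geometry baked into the mesh along candidate silhouette edges then stretches to close the volume. Sketched here as per-vertex CPU code in C++ rather than shader assembly, with illustrative names throughout - not a reproduction of either the ATI or nVidia demo:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

static float Dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// What the vertex shader does per vertex: if the vertex normal faces away
// from the light, extrude the vertex a fixed distance along the normalised
// light-to-vertex direction; otherwise leave it where it is.
V3 ExtrudeVertex(V3 pos, V3 normal, V3 lightPos, float extrudeDist) {
    V3 toVert { pos.x - lightPos.x, pos.y - lightPos.y, pos.z - lightPos.z };
    if (Dot(normal, toVert) > 0.0f) {  // back-facing with respect to the light
        float len = std::sqrt(Dot(toVert, toVert));
        pos.x += toVert.x / len * extrudeDist;
        pos.y += toVert.y / len * extrudeDist;
        pos.z += toVert.z / len * extrudeDist;
    }
    return pos;
}
```

The whole point of doing this in the shader is that the same static vertex buffer serves every light position, with no CPU silhouette extraction per frame.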