From: Ivan-Assen Ivanov <ivanassen@gm...> - 2005-01-08 22:14:05

> For instance I don't see why the way DXT handle 4x4 pixel is nice
> (except that of course it's very fast)

You nailed it on the head: it's nice because it's very fast, and for no other reason. From a hardware designer's perspective things are VERY different than from the perspective of a software developer, let alone an academic researcher who thinks he's working for the benefit of software developers. If you think DXTn is primitive, illogical etc., wait until you see the Z-buffer compression schemes :)
From: Aras Pranckevicius <nearaz@gm...> - 2005-01-08 14:22:42

> Are you aware of some recent research/papers about these stuff?
> Also technical papers about DXT/3Dc doesn't give a clear analysis
> of this memory bandwidth problem, and does not give some insights
> about why what they are doing is better than previous approach, etc.
> For instance I don't see why the way DXT handle 4x4 pixel is nice
> (except that of course it's very fast) I would see many other
> way to process 4x4 pixels, including a bunch of linear transforms
> (let say a haar transform).

DXT is in fact a variant of the "block truncation coding" (BTC) family of algorithms. Take fixed-size blocks, and do lossy compression that produces a fixed amount of bits for each block. While this isn't ideally optimal, the huge benefit is the fixed compression ratio, so questions like "where is the data for texel at 374,58?" are trivial to answer.

DCT, Haar and a bunch of others operate differently - they convert to some frequency domain, and most compression comes from discarding low-amplitude frequencies. As a result, the compression ratios for various parts of the texture are different, so you'd need some sort of lookup tables to be able to "find" the data for arbitrary texels.

--
Aras 'NeARAZ' Pranckevicius
http://nesnausk.org/nearaz/
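Aras's fixed-rate point is easy to make concrete. A minimal sketch (a hypothetical helper, not code from the thread) of texel-to-block addressing for DXT1, where every 4x4 block costs exactly 8 bytes:

```cpp
#include <cstddef>
#include <cstdint>

// Byte offset of the compressed block containing texel (x, y) in a
// DXT1/BC1 texture: 4x4 texel blocks, 8 bytes per block, fixed rate.
// widthInTexels is assumed to be a multiple of 4 here for simplicity.
std::size_t dxt1BlockOffset(std::uint32_t x, std::uint32_t y,
                            std::uint32_t widthInTexels)
{
    const std::uint32_t blocksPerRow = widthInTexels / 4;
    const std::uint32_t blockX = x / 4;
    const std::uint32_t blockY = y / 4;
    return static_cast<std::size_t>(blockY * blocksPerRow + blockX) * 8;
}
```

Because the rate is fixed, the block holding texel (374,58) sits at a directly computable offset; a variable-rate coder (DCT, wavelets) would need an index structure to answer the same query, which is exactly the random-access problem discussed in this thread.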
From: <gabriel.peyre@gm...> - 2005-01-08 13:41:44

On Fri, 7 Jan 2005 22:22:55 -0800, Jay Stelly <Jay@...> wrote:
> Also, don't forget filtering.
> The pixel shader example only gets you point sampling.
> You've still got to resample that texture map to the projection under the pixel.

Are you speaking of mipmapping? If so, the wavelet transform is designed for that. If you take the wavelet partial reconstruction at a given scale you get a very nice mip map (much better than just block averaging, which is the Haar transform) at no additional cost and no additional storage (since wavelets only store the difference between scales, you don't have to store the different mip maps). And probably one could design clever algorithms to handle more complex situations (or more specific needs) in hardware. I really believe it's the perfect fit for real time rendering, but of course there is still a lot of work ...

I think it's a bit the same situation in commercial CAD software (where pointwise query is also the holy grail). Everybody uses the spline representation (with thousands of control points per patch) whereas you could simply turn this monoscale spline representation into a multiscale representation using spline wavelets. You save huge amounts of memory, and this would be simpler to connect with subdivision surfaces (some schemes just give you splines), but of course position evaluation is a bit more expensive (log(n) instead of constant). Using some locality assumption on the way the user queries the surface, you'll certainly get fast algorithms (with automatic LOD management).

Sorry for this long digression.

Best regards,
Gabriel
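Gabriel's "partial reconstruction = mip map" observation can be seen in one dimension. A sketch (hypothetical helper, using the average/difference form of the Haar transform rather than the orthonormal scaling):

```cpp
#include <cstddef>
#include <vector>

// One level of the 1D Haar transform: pairwise averages (the coarse
// approximation) and differences (the detail coefficients). The coarse
// half is exactly the next box-filtered mip level, which is Gabriel's
// point: storing only the details per level reconstructs every mip.
void haarLevel(const std::vector<float>& in,
               std::vector<float>& coarse, std::vector<float>& detail)
{
    coarse.clear(); detail.clear();
    for (std::size_t i = 0; i + 1 < in.size(); i += 2) {
        coarse.push_back(0.5f * (in[i] + in[i + 1]));
        detail.push_back(0.5f * (in[i] - in[i + 1]));
    }
}
```

Applying haarLevel repeatedly to its own coarse output yields the box-filtered mip chain, while the stored details make each level losslessly invertible (in[2i] = coarse[i] + detail[i], in[2i+1] = coarse[i] - detail[i]).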
From: <gabriel.peyre@gm...> - 2005-01-08 12:01:16

> Bear in mind that the whole point of texture compression is to reduce the
> memory bandwidth cost of a single texel read. It is NOT to reduce the amount
> of video memory taken up by a texture - that is a side benefit. And hardware
> reads one texel at a time.

Ok, thanks a lot for this clear answer. In fact decompressing a single pixel in a wavelet scheme typically costs log(n) operations (ok, with some big constant) and I believe it can be turned into a cache/memory efficient method (decompressing chunks of contiguous data etc).

Are you aware of some recent research/papers about this stuff? Also, technical papers about DXT/3Dc don't give a clear analysis of this memory bandwidth problem, and don't give much insight about why what they are doing is better than previous approaches, etc. For instance I don't see why the way DXT handles 4x4 pixels is nice (except that of course it's very fast). I would see many other ways to process 4x4 pixels, including a bunch of linear transforms (say a Haar transform).

Sorry if this discussion is OT, I would be very interested in continuing it in private if needed.

Gabriel
From: Eric Lengyel <lengyel@te...> - 2005-01-08 06:34:12

Hi -

The equations in the Gamasutra article, while correct, are more complicated than they need to be. Equation (38) can be replaced by the much simpler formula P = L - rN. If you have the second edition of Mathematics for 3D Game Programming and Computer Graphics, an updated derivation appears in Section 10.7.

-- Eric Lengyel

----- Original Message -----
From: "Ignacio Castaño" <castano@...>
To: <gdalgorithms-list@...>
Sent: Friday, January 07, 2005 10:44 AM
Subject: Re: [Algorithms] Light Scissors Rect Algorithm

> You can find a description of the algorithm at gamasutra:
>
> http://www.gamasutra.com/features/20021011/lengyel_06.htm
>
> And below is my implementation based on that description:
>
> /** Compute the screen-space bounds of the light.
>  *
>  * This is just Eric Lengyel's scissor optimization:
>  * http://www.gamasutra.com/features/20021011/lengyel_06.htm
>  **/
> bool PiLight::ComputeLightBounds( const PiViewport * viewport, BBox * box ) const {
>     piDebugCheck( viewport->cam != NULL );
>     piDebugCheck( info != NULL );
>
>     float r = info->scale;
>     Vec3 l; l.Sub( info->origin, viewport->cam->pos );
>
>     float D = sqrt( SQ(l.x) + SQ(l.y) + SQ(l.z) );
>
>     // Compute depth bounds.
>     float z = Vec3DotProduct( l, viewport->cam->dir );
>     box->mins.z = z - r;
>
>     if( info->type != PI_LTP_VOLUMETRIC ) {
>         box->maxs.z = z + r;
>     }
>
>     if( D <= r ) {
>         // camera inside light
>         return true;
>     }
>
>     // Transform l to eye space
>     Vec3 L;
>     viewport->cam->eye.TransformVec3( l, L );
>
>     // Vertical planes: T = <Nx, 0, Nz, 0>
>     D = (SQ(L.x) - SQ(r) + SQ(L.z)) * SQ(L.z);
>     if( D >= 0 ) {
>
>         float Nxa = (r*L.x - sqrt(D)) / (SQ(L.x) + SQ(L.z));
>         float Nxb = (r*L.x + sqrt(D)) / (SQ(L.x) + SQ(L.z));
>
>         float Nza = (r - Nxa*L.x) / L.z;
>         float Nzb = (r - Nxb*L.x) / L.z;
>
>         float Pza = (SQ(L.x) + SQ(L.z) - SQ(r)) / (L.z - (Nza/Nxa)*L.x);
>         float Pzb = (SQ(L.x) + SQ(L.z) - SQ(r)) / (L.z - (Nzb/Nxb)*L.x);
>
>         // Tangent a
>         if( Pza < 0 ) {
>             float xa = 2 * Nza / (Nxa * viewport->cam->width);
>             xa = viewport->window.x0 + (xa+1) * 0.5f * (viewport->window.x1 - viewport->window.x0);
>
>             float Pxa = -Pza * Nza / Nxa;
>
>             if( Pxa > L.x ) {
>                 box->maxs.x = piMax( box->mins.x, xa );
>             }
>             else {
>                 box->mins.x = piMin( box->maxs.x, xa );
>             }
>         }
>
>         // Tangent b
>         if( Pzb < 0 ) {
>             float xb = 2 * Nzb / (Nxb * viewport->cam->width);
>             xb = viewport->window.x0 + (xb+1) * 0.5f * (viewport->window.x1 - viewport->window.x0);
>
>             float Pxb = -Pzb * Nzb / Nxb;
>
>             if( Pxb > L.x ) {
>                 box->maxs.x = piMax( box->mins.x, xb );
>             }
>             else {
>                 box->mins.x = piMin( box->maxs.x, xb );
>             }
>         }
>     }
>
>     if( box->mins.x >= box->maxs.x ) {
>         return false;
>     }
>
>     // Horizontal planes: T = <0, Ny, Nz, 0>
>     D = (SQ(L.y) - SQ(r) + SQ(L.z)) * SQ(L.z);
>     if( D >= 0 ) {
>
>         float Nya = (r*L.y - sqrt(D)) / (SQ(L.y) + SQ(L.z));
>         float Nyb = (r*L.y + sqrt(D)) / (SQ(L.y) + SQ(L.z));
>
>         float Nza = (r - Nya*L.y) / L.z;
>         float Nzb = (r - Nyb*L.y) / L.z;
>
>         float Pza = (SQ(L.y) + SQ(L.z) - SQ(r)) / (L.z - (Nza/Nya)*L.y);
>         float Pzb = (SQ(L.y) + SQ(L.z) - SQ(r)) / (L.z - (Nzb/Nyb)*L.y);
>
>         // Tangent a
>         if( Pza < 0 ) {
>             float ya = 2 * Nza / (Nya * viewport->cam->height);
>             ya = viewport->window.y0 + (ya+1) * 0.5f * (viewport->window.y1 - viewport->window.y0);
>
>             float Pya = -Pza * Nza / Nya;
>
>             if( Pya > L.y ) {
>                 box->maxs.y = piMax( box->mins.y, ya );
>             }
>             else {
>                 box->mins.y = piMin( box->maxs.y, ya );
>             }
>         }
>
>         // Tangent b
>         if( Pzb < 0 ) {
>             float yb = 2 * Nzb / (Nyb * viewport->cam->height);
>             yb = viewport->window.y0 + (yb+1) * 0.5f * (viewport->window.y1 - viewport->window.y0);
>
>             float Pyb = -Pzb * Nzb / Nyb;
>
>             if( Pyb > L.y ) {
>                 box->maxs.y = piMax( box->mins.y, yb );
>             }
>             else {
>                 box->mins.y = piMin( box->maxs.y, yb );
>             }
>         }
>     }
>
>     if( box->mins.y >= box->maxs.y ) {
>         return false;
>     }
>
>     return true;
> }
>
>
> On Fri, 07 Jan 2005 12:08:18 -0600, David Whatley <david@...> wrote:
>> This may be simple, but somehow eluding us...
>>
>> What is the algorithm to compute a scissors rect for the area of
>> influence of a light? We have a light: position and radius; and the
>> camera info (with associated matrices already built). We want the
>> screen space rect to clip against. Somehow we always seem to be a
>> little off but can't figure out why.
>
> --
> Ignacio Castaño
> castano@...
>
> -------------------------------------------------------
> The SF.Net email is sponsored by: Beat the post-holiday blues
> Get a FREE limited edition SourceForge.net t-shirt from ThinkGeek.
> It's fun and FREE - well, almost....http://www.thinkgeek.com/sfshirt
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_id=6188
From: Jay Stelly <Jay@va...> - 2005-01-08 06:23:14

Also, don't forget filtering. The pixel shader example only gets you point sampling. You've still got to resample that texture map to the projection under the pixel.

Jay

> -----Original Message-----
> From: gdalgorithms-list-admin@...
> [mailto:gdalgorithms-list-admin@...] On
> Behalf Of Tom Forsyth
> Sent: Friday, January 07, 2005 10:11 PM
> To: gdalgorithms-list@...
> Subject: RE: [Algorithms] traditional compression schemes
>
> In short - random access.
> ...
>
> > From: Gabriel Peyré
> >
> > On recent hardware I believe a wavelet transform could be implemented
> > more or less easily using pixel shader, and designing new hardware to
> > perform such transform shouldn't be that hard.
From: Tom Forsyth <tom.forsyth@ee...> - 2005-01-08 06:11:08

In short - random access.

Bear in mind that the whole point of texture compression is to reduce the memory bandwidth cost of a single texel read. It is NOT to reduce the amount of video memory taken up by a texture - that is a side benefit. And hardware reads one texel at a time.

So probably the worst thing to have to do is to decompress the whole texture just to read one texel. Totally pointless - burns memory bandwidth like crazy. So you want to be able to look up texel (x,y) in a texture by reading as little memory as possible.

That's not something most texture compression schemes can do. That is why things like MPEG and wavelet stuff haven't been done in texture-mapping hardware. The only compression algorithms implemented in hardware are ones that can solve the above problem - "if I want to read texel X,Y, and probably a few of the nearby texels as well, how do I minimise the amount of memory I need to read?"

Having said that, there's a bunch of interesting ways this can be done, but so far it's the subject of a lot of research, and that means no hardware vendor is going to commit to it any time soon. It's a little bit chicken-and-egg.

TomF.

> From: Gabriel Peyré
>
> I am wondering what prevent hardware vendors
> or 3D library review boards to include classical image
> compression schemes in their product.
>
> On recent hardware I believe a wavelet transform
> could be implemented more or less easily using
> pixel shader, and designing new hardware
> to perform such transform shouldn't be that hard.
>
> The other question is : "what would be the benefit
> for game developers of such new features?".
>
> For nearly lossless compression (say 1:4)
> DXT or 3Dc is cool, but for more aggressive
> rates (say 1:20) or for functions with really
> sharp features or even discontinuities
> (e.g. normal maps), it's impossible on
> today hardware.
>
> thanks a lot for your attention
>
> Gabriel
From: Tom Forsyth <tom.forsyth@ee...> - 2005-01-08 05:10:58

Anisotropic filtering is actually (a) pretty good and (b) works with mipmaps very well. There's a bunch of different ways to do it, but here's a common way (not sure how hardware actually does it, but this is going to be fairly similar).

You take your screen pixel and you project it onto the texture. This is some sort of four-sided shape, though because of perspective and suchlike, it's basically a mess - none of the sides are equal or parallel.

First approximation is you remove the effect of different perspective on each side of the quad, so now you have a parallelogram - pairs of sides are parallel, but not equal length.

Second approximation is that you find the longest length inside this parallelogram. Most of the diagrams show a parallelogram with two long sides and two short sides, and say that you pick the long sides, but this is misleading, because there is also the case of a squashed diamond shape - all four sides the same length, but it's been squashed. Both these cases do work (I believe), but the intuition above gives you the wrong idea.

OK, so you have the longest length inside the shape, which is the axis of anisotropy. You also find the width of the shape at right angles to this axis. Now draw a rectangle around this shape, and that's a bit like the area to be sampled.

The way most hardware does this (I believe) is by selecting the mipmap level so that the width (the shorter measurement) is one texel in size. Then take multiple trilinear samples along the longer edge, each two texels apart. Take as many samples as needed to fill out the length. So obviously if the two lengths are the same, and the shape was basically a square, the hardware only needs one sample. If the shape was twice as wide as it was long, it needs two samples, and so on. Nominally, the mipmap level chosen is a fractional value, so that each sample is a trilinear sample. I suspect hardware does it slightly more efficiently here, since otherwise you're doing a lot of separate samples in the lower mipmap level that you probably don't need to do.

So it's not actually that bad. You are first approximating the real shape with a parallelogram, then approximating that by a rectangle, and then approximating that by a bunch of trilinear samples. But the end result isn't all that far off the right one. Usually :)

TomF.

> -----Original Message-----
> From: gdalgorithms-list-admin@...
> [mailto:gdalgorithms-list-admin@...] On
> Behalf Of Rowan Wyborn
> Sent: 07 January 2005 13:42
> To: gdalgorithms-list@...
> Subject: RE: [Algorithms] mip mapping is evil!
>
> > However, there are still problems. If you imagine a four
> > texel 1D texture. What you'd like
> > to have for the realtime display of that texture would be one
> > MIPmapped texel that summarises
> > the results of filtering a region that's two texels across
> > centered between texels 0 and 1 and
> > another centered between 1 and 2, another between 2 and 3 and
> > so on. However, you don't have that.
> > You only have a single filtered RGB for the region between 0
> > and 1 and another between 2 and 3.
> > So if the screen pixel covers two texels and is centered
> > between texels 1 and 2, the hardware
> > is forced to do a linear interpolation between the 0/1 texel
> > and the 2/3 texel. This does indeed
> > throw away some information that could have been preserved.
>
> Yes, this was the issue I was referring to when I said
> 'throwing away detail'... this is what makes mipmapped pixels
> look much blurrier than they should be.
>
> I think anisotropic filtering is probably the only practical
> solution, hardware is already somewhat capable of it (for a
> perf hit)... perhaps sometime in the future cards will no
> longer even support bilinear filtering, and fast anisotropic
> will become the standard :)
>
> rowan
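Tom's sample-count rule can be sketched as follows; this is a guess at the general shape of the logic, not a description of any actual chip (the clamps are my assumptions):

```cpp
#include <algorithm>
#include <cmath>

// Rough sketch of the anisotropic setup described above: pick the mip
// level so the footprint's short axis spans one texel, then take enough
// trilinear samples to cover the long axis, clamped to the hardware's
// maximum anisotropy.
struct AnisoSetup { float mipLevel; int numSamples; };

AnisoSetup anisoSetup(float majorLen, float minorLen, int maxAniso)
{
    minorLen = std::max(minorLen, 1.0f);   // never sharper than one texel (assumed clamp)
    float ratio = majorLen / minorLen;     // degree of anisotropy
    int samples = std::min(maxAniso, std::max(1, (int)std::ceil(ratio)));
    // Fractional mip level => each tap is a trilinear sample.
    return { std::log2(minorLen), samples };
}
```

A square footprint yields one sample; a 2:1 footprint yields two, and so on, up to the maxAniso clamp.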
From: Tom Forsyth <tom.forsyth@ee...> - 2005-01-08 04:33:49

You are probably computing the radius of the _circle_ at the distance of the centre of the sphere. That is not the widest part of the sphere! Because of perspective, the widest part of the sphere is slightly closer. As the sphere approaches the camera, this effect gets even more pronounced.

For example (ASCII art exxxxtreme!):

[ASCII diagram, mangled in the archive: camera at C, sphere centred at +, tangent points A on either side, and points B at the ends of the diameter at the centre's depth]

So the middle of the sphere is at the + sign, and the camera is at C. If you just find the radius at the distance of the centre of the sphere, you are finding the two points B and finding that FOV. But what you really need to find are points A - you can see that their FOV w.r.t. the camera is much wider, even though they are _physically_ closer together than B.

So anyway, what you really want in the 2D example above is to find the two lines that pass through the camera and just touch the sphere, i.e. the point of closest approach to the centre of the sphere is equal to the radius of the sphere.

In 3D, you just need to do that twice - once for the two planes z = Sx, and once for the two planes z = Ty. It's still a 2D problem in each case, you just discard a different axis (y in the first, x in the second).

TomF.

> -----Original Message-----
> From: gdalgorithms-list-admin@...
> [mailto:gdalgorithms-list-admin@...] On
> Behalf Of David Whatley
> Sent: 07 January 2005 10:08
> To: 'gdalgorithms-list@...'
> Subject: [Algorithms] Light Scissors Rect Algorithm
>
> This may be simple, but somehow eluding us...
>
> What is the algorithm to compute a scissors rect for the area of
> influence of a light? We have a light: position and radius; and the
> camera info (with associated matrices already built). We want the
> screen space rect to clip against. Somehow we always seem to be a
> little off but can't figure out why.
>
> - David
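The 2D sub-problem Tom describes (lines through the camera that just touch the circle) has a closed form. A sketch, assuming the camera at the origin and a hypothetical Vec2 type (not code from the thread):

```cpp
#include <cmath>
#include <utility>

struct Vec2 { float x, z; };

// Tangent points of a circle (centre c, radius r) as seen from the
// origin. These are Tom's points A: the visually widest points, which
// sit slightly nearer than the centre's depth. Assumes the camera is
// outside the circle (|c| > r).
std::pair<Vec2, Vec2> circleTangentPoints(Vec2 c, float r)
{
    float d2 = c.x * c.x + c.z * c.z;   // squared distance to centre
    float t2 = d2 - r * r;              // squared tangent length
    float t  = std::sqrt(t2);
    Vec2 perp{ -c.z, c.x };             // c rotated 90 degrees
    float a = t2 / d2, b = r * t / d2;
    return { Vec2{ a * c.x + b * perp.x, a * c.z + b * perp.z },
             Vec2{ a * c.x - b * perp.x, a * c.z - b * perp.z } };
}
```

For a centre at (0, 5) with r = 3, the tangent points come out at (±2.4, 3.2): physically closer together than the diameter endpoints (±3, 5), but at a wider angle from the camera, which is exactly the A-versus-B distinction above.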
From: <gabriel.peyre@gm...> - 2005-01-08 03:10:14

I am wondering what prevents hardware vendors or 3D library review boards from including classical image compression schemes in their products.

On recent hardware I believe a wavelet transform could be implemented more or less easily using pixel shaders, and designing new hardware to perform such transforms shouldn't be that hard.

The other question is: "what would be the benefit for game developers of such new features?".

For nearly lossless compression (say 1:4) DXT or 3Dc is cool, but for more aggressive rates (say 1:20), or for functions with really sharp features or even discontinuities (e.g. normal maps), it's impossible on today's hardware.

thanks a lot for your attention

Gabriel
From: Ben Garney <beng@ga...> - 2005-01-08 01:44:12

Sam McGrath wrote:
> Gamespy is just for the master server and server browsing, as far as I
> know.
>
> I would recommend rolling your own using good old sockets. Worked out
> well for me, and it didn't take as much effort as you might imagine.
> The real 'meat' of the code is going to have to be customized to your
> game's needs anyway, and is not likely something that a prepackaged
> middleware solution can provide.

We took the networking code from Torque, packaged it up, and released it as TNL (www.opentnl.org). I'm admittedly biased, but the code is basically structured as a toolkit for building real-time networking. You can send events, move streams, updates, etc... Most of the stuff that's been discussed in this thread, including a lot of the really nasty, hard-to-get-reliably-working bits.

I would suggest reading over the docs and the code (it's dual-licensed, open source and commercial, so you can get the code from the site) if you can spare the time. The code's been cleaned up a lot since then, but the techniques behind it were used in the Tribes games, as well as a more recent one, Zap! (www.zapthegame.com), as well as many other lesser-known games. So, it represents quite a few years' game development experience. If nothing else, it's a great way to see how the problem has been solved by others. :)

This is also the same tech that Tim Gift mentioned earlier in the thread.

Regards,
Ben Garney
Torque Technology Director
From: Lucas Meijer <lucas@ma...> - 2005-01-08 01:40:49

Andrey Iones wrote:
> I hope I am not terribly OT, given the amount of discussions on
> networking these days.
>
> What kind of network transport middleware would you recommend for a
> first-person shooter?
> Looks like DirectPlay is defunct and not an option anymore. Another
> option is GameSpy.

Also not sure how OT this is considered, but I quite like OpenTNL, by the GarageGames guys. You can get it under a GPL license (or pay for a commercial one), so you can look at the code to see if you like it too.

Bye, Lucas
From: Brian Osman <osman@vv...> - 2005-01-08 01:28:53

Hi everybody,

I have a problem which is probably very simple, but the math is eluding me today. And there is a good chance that I've made the problem much harder than it needs to be, due to bad design decisions made earlier.

I'm working with a software clipper that I wrote (my first, which explains some of the shoddy design). It's necessary because I'm on a platform that doesn't have any decent hardware clipping.

First, how it "works": To avoid having to change much state when drawing my clipped triangles, I decided to do all of my work *before* projection, so all transformations (Object->World->View) contain no non-uniform scale, division, etc... The idea is that I can draw my clipped triangles using my original ModelView matrix, etc...

To do this, I transform my verts into view space. From there, the camera planes form a nice frustum against which I clip. I'm doing simple plane-at-a-time clipping, testing each edge along the perimeter of the in-progress polygon. However, my "work" verts are all in view space, so the only thing I compute is a set of weights (interpolants) for my original three verts. So if I'm working on triangle ABC, my work list is initially ((1,0,0),(0,1,0),(0,0,1)) - each triple a set of weights relative to the three source verts. If I then realize that I need to clip segment AB at the halfway point, I end up with a new work vert: (0.5,0.5,0) ... and so on. When I'm done, I have a collection of these weights that define my clipped polygon, which I triangulate, and everything works great. I can use these weights (which were derived from my view-space verts) to linearly interpolate the positions and UVs of my object-space vertex data, and everything behaves like it should.

The problem is color. On my platform (like many other platforms), color is not perspective correct. So when a clipped triangle is drawn next to a "normal" triangle, obvious differences in coloring become apparent. Effectively, my clipper computes perspective-correct colors in the output, and I need to bias the results towards the verts that have larger Z values.

I've done some simple math in 2 dimensions (projecting onto 1D), using only two points, but I'm having a hard time getting the 1/Z ratios to do the right thing with my silly interpolant/weighting based clip verts. Especially because a final vertex (like a screen corner) can have non-zero weights for all three source verts - the clipper output can be something in the middle of the original triangle.

Have I shot myself in the foot? I've looked around, but it seems like most other demonstrations of clipping aren't relevant - they're tied to software rasterization where the author has total control of the pipeline. Any pointers to something that explains a better way to approach the problem, or someone who can see the math easier than I, would be greatly appreciated.

Thanks,
Brian Osman
Vicarious Visions

Confidentiality Notice: This message, including any attachments, may contain confidential information. If you have received this message by mistake, please notify the sender immediately, delete all copies, and do not distribute or use this information. Thanks!
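One way to look at the colour mismatch Brian describes: his clip weights are affine in view space (i.e. perspective-correct), while the hardware interpolates colour linearly in screen space. The standard conversion between the two weight systems scales each weight by the vertex's view-space depth and renormalises. A sketch of that conversion (my suggestion for the general relationship, not something from the thread):

```cpp
#include <cstddef>

// Convert view-space-affine barycentric weights (perspective correct)
// to screen-space-linear weights: scale each weight by that vertex's
// view-space depth (clip-space w) and renormalise. Evaluating vertex
// colours with the screen-space weights reproduces what screen-linear
// interpolation of the uncut triangle would show at the same point.
void viewWeightsToScreenWeights(const float lambda[3], const float z[3],
                                float out[3])
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < 3; ++i) { out[i] = lambda[i] * z[i]; sum += out[i]; }
    for (std::size_t i = 0; i < 3; ++i) out[i] /= sum;
}
```

Note the larger-Z verts end up with larger weights, which matches Brian's intuition that the results need biasing towards the verts with larger Z values.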
From: Sam McGrath <sammy@s2...> - 2005-01-08 00:48:20

Gamespy is just for the master server and server browsing, as far as I know.

I would recommend rolling your own using good old sockets. Worked out well for me, and it didn't take as much effort as you might imagine. The real 'meat' of the code is going to have to be customized to your game's needs anyway, and is not likely something that a prepackaged middleware solution can provide.

Sam McGrath
http://www.offsetsoftware.com

----- Original Message -----
From: "Andrey Iones" <iones@...>
To: <gdalgorithms-list@...>
Sent: Friday, January 07, 2005 4:12 PM
Subject: [Algorithms] Network transport middleware

> I hope I am not terribly OT, given the amount of discussions on networking
> these days.
>
> What kind of network transport middleware would you recommend for a
> first-person shooter?
> Looks like DirectPlay is defunct and not an option anymore. Another option
> is GameSpy.
>
> What else?
>
> I need a solution for PC and Xbox.
>
> Thanks,
>
> Andrey.
From: Reed Mideke <rfm@co...> - 2005-01-08 00:29:03

Keith Jackson wrote:
> First of all, I believe
> trying to build a reliable scheme over an unreliable protocol that is
> DESIGNED to be unreliable is foolhardy.

Erm, what exactly do you think TCP is? IP datagrams are not reliable. In other words, TCP over IP is exactly the thing you have described as foolhardy.

As it turns out, TCP is not well suited to things that require low latency in the presence of packet loss. There seem to be a number of people in this thread who have assumed that TCP is somehow special. It isn't. It is just *one way* of hiding the fact that the underlying transport is unreliable. It is unreasonable to assume it is the best way for every possible application. UDP, being essentially a slightly more user-friendly interface to the underlying datagrams, is the obvious place to start building an alternate method.

/goes back to lurking

--
Email: rfm(at)collectivecomputing.com or rfm(at)portalofevil.com
From: Andrey Iones <iones@sa...> - 2005-01-08 00:11:45

I hope I am not terribly OT, given the amount of discussions on networking these days.

What kind of network transport middleware would you recommend for a first-person shooter? Looks like DirectPlay is defunct and not an option anymore. Another option is GameSpy.

What else?

I need a solution for PC and Xbox.

Thanks,

Andrey.
From: Peter-Pike Sloan <ppsloan@wi...> - 2005-01-08 00:10:16

It's almost just the Y00 (DC) term - all of the other terms vanish when integrated over the sphere because of the orthogonality of the basis functions and the fact that one of the basis functions is a constant.

It's actually not quite the DC term:

int(Y00*Y00) = 1

We want int(DC*Y00), which is the same as

1/Y00 * int(DC*Y00*Y00) = DC/Y00,

where Y00 = 1/sqrt(4*Pi). So the answer is DC*sqrt(4*Pi).

Peter-Pike Sloan

-----Original Message-----
From: gdalgorithms-list-admin@...
[mailto:gdalgorithms-list-admin@...] On Behalf Of Rowan Wyborn
Sent: Friday, January 07, 2005 3:53 PM
To: Gdalgorithms (Email)
Subject: [Algorithms] integrating a single function projected into SH

hullo,

Given that the integration of the product of 2 functions is the dot product of their SH coefficients, is the integration of a single function simply the sum of its coefficients? or does it need to be dotted with some kind of normalisation vector?

thanks,
rowan
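Peter-Pike's result checks out on the simplest case, a constant function f = c (a numeric sanity check, not code from the thread):

```cpp
#include <cmath>

// For a constant function f = c, the DC (l=0) coefficient is
// int(f * Y00) = c * Y00 * 4*Pi, and the claimed integral
// DC * sqrt(4*Pi) should recover the true value 4*Pi*c.
const float kPi  = 3.14159265358979f;
const float kY00 = 1.0f / std::sqrt(4.0f * kPi);   // the constant basis function

float shIntegralOfConstant(float c)
{
    float dc = c * kY00 * 4.0f * kPi;   // projection onto Y00
    return dc * std::sqrt(4.0f * kPi);  // integral = DC * sqrt(4*Pi)
}
```

So the answer to Rowan's question is: not the plain sum of coefficients, but the dot product with the vector (sqrt(4*Pi), 0, 0, ...), since only the DC band survives the integration.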