From: Nick Carter <ncarter@nv...> - 2003-07-15 20:37:14

Doug,

Circle-fitting in the quantized domain you describe (that is, points on a sphere, quantized to 8 bits per component) can perhaps be done by brute force. A great circle of the sphere can be defined by a normal vector perpendicular to the circle's plane; that's only 2 degrees of freedom, yielding on the order of 256x256 distinct circles worth considering. Certainly not efficient, but enough for prototyping, and maybe even good enough for offline texture compression. All you would need to find a best-fit circle would be a direct way to go from a given circle to a palette (which you've already suggested; see my reservations below), and an appropriate error metric given a particular palette.

For the error metric, I'd suggest a summation of the inverse cosine of the dot products between each of the original and compressed normals; each term of the sum would then correspond to a distance along the surface of the unit sphere. Sum-of-squared-distances might also give good results.

I'm not certain I agree with the construction you describe for finding the best sub-arc of a given great circle. Suppose that there's one extremal normal, with the rest of the normals clustered nicely some distance away on another part of the unit sphere. Just taking the minimum arc that encompasses all the points would result in bias in the compression. Is there a reason to guarantee perfect compression of these extremal points? You may want to consider taking an 'average' point and using that as the center of your arc.

Also, the greatest arc you can possibly describe with this scheme is an arc of 180 degrees; the renormalization of the interpolants is like projecting a chord (of the unit sphere) onto the unit sphere from the origin. How will you handle the case where the maximum arc-distance between two adjacent points on the circle is less than 180 degrees?

For tangent-space bump mapping, you usually don't see z components less than zero, so you might be working in a hemispherical domain to begin with; but you may very well have to deal with this if the bump maps are intended for object-space bump mapping.

- nick carter

-----Original Message-----
From: Doug Rogers
Sent: Monday, July 14, 2003 4:29 PM
To: gdalgorithmslist@...
Subject: [Algorithms] Normal map compression and fitting a circle to a set of 3d points

I am exploring an implementation for compressing normal maps using DXT/S3TC. Given hardware decode, there are 3 or 4 palette entries for each 16-pixel block.

Assumptions: the pixel shader will renormalize texels, so the hardware reconstructs an arc on a circle during decompression (because the texels are renormalized).

So I wanted an algorithm that fits a circle centered on the origin with radius = 1 to a set of 3D points on a sphere (normals). Any ideas how to do this? After that, I could choose the palette endpoints (the most extreme normals on the arc) and find the closest palette entry for each normal.

Doug
From: Miles Macklin <miles@co...> - 2003-07-15 20:26:33

We set a maximum number of simultaneous lights per object, then use a modified inverse-distance weighting on each light so that the combined weights always sum to 1. We find the closest n lights and use that weight as the light's attenuation. I know it's cheap and nasty, but it gives nice smooth transitions (we use it for our shadow volumes as well) and looks pretty acceptable.

Miles

----- Original Message -----
From: "Guillaume Provost" <Guillaume@...>
To: <gdalgorithmslist@...>
Sent: Wednesday, July 16, 2003 3:13 AM
Subject: [Algorithms] [Rendering] Merging, approximating several dynamic lights

> Greetings,
>
> I am, as I'm sure you all are, constrained in the number of dynamic
> lights I can light a mesh with. I've implemented a 'degrading scheme' where
> I degrade a light from per-pixel to per-vertex to a simple ambient cue in
> order to manage performance (these are also scaled with respect to distance
> of the light source and camera), but these transitions are not always
> seamless. I was wondering whether some of you have tried merging
> light sources as an alternative to this, and how much success you've had at
> getting rid of discontinuities in the lighting conditions. I currently deal
> mostly with 'skewed' spotlights (point-ellipsoid lights) which I use to
> roughly approximate rectangular area lights, and omnidirectional lights.
>
> Guillaume.
> http://www.celdamage.com
>
> -------------------------------------------------------
> This SF.Net email sponsored by: Parasoft
> Error proof Web apps, automate testing & more.
> Download & eval WebKing and get a free book.
> http://www.parasoft.com/bulletproofapps1
> _______________________________________________
> GDAlgorithmslist mailing list
> GDAlgorithmslist@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithmslist
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_id=6188
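The renormalized weighting Miles describes might look like the minimal sketch below. The exact modification his engine applies isn't given in the post, so a plain inverse-distance form is assumed here; the key property is only that the weights over the n closest lights always sum to 1.

```python
def light_weights(distances, eps=1e-6):
    """Inverse-distance weights over the n closest lights, renormalized
    so they always sum to 1 (assumed form of the scheme Miles describes).
    Closer lights receive larger weights; eps guards against a light
    sitting exactly at the receiver (distance 0)."""
    inv = [1.0 / (d + eps) for d in distances]
    total = sum(inv)
    return [w / total for w in inv]
```

Because the weights are renormalized, the combined contribution never over- or under-shoots as lights move, which is where the smooth transitions come from.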
From: Bob Dowland <Bob.Dowland@bl...> - 2003-07-15 16:24:29

Thanks a lot John, just what the doctor ordered! The book looks like the definitive answer, but the [ODE] thread was also very interesting. I found some other goodies by looking at some of the other entries in the thread list; in particular, http://w3imagis.imag.fr/Membres/Francois.Faure/faure.html could also prove to be of general interest - at least for anyone who's generally interested :)

> -----Original Message-----
> From: John Bustard [mailto:John.Bustard@...]
> Sent: 14 July 2003 17:06
> To: gdalgorithmslist@...
> Subject: Re: [Algorithms] Iterative Solver
>
> Hi Bob
>
> I think the physics engine solves the LCP directly through an iterative
> method. I haven't read enough to understand how it works, but I think this
> web book covers it:
> http://ioe.engin.umich.edu/people/fac/books/murty/linear_complementarity_webbook/
>
> Chapter 9, Iterative Methods for LCPs
>
> There's also a good discussion on the ODE mailing list about it at:
> http://www.q12.org/pipermail/ode/2002October/006147.html
>
> Hope it helps,
>
> John Bustard
>
> ----- Original Message -----
> From: "Bob Dowland" <Bob.Dowland@...>
> To: <gdalgorithmslist@...>
> Sent: Friday, July 11, 2003 12:56 PM
> Subject: RE: [Algorithms] Iterative Solver
>
> <snip>
>
> was what I was thinking too... except Barraff is solving Af + b = a, subject
> to f_i, a_i >= 0 and f_i.a_i = 0, which is not as such a linear system but an
> LCP. Involving some (M*x=c)'ing (using Barraff's method), but the "main
> event" is the balancing of a_i's against f_i's.
>
> Do we think that the "Iterator"s are reformulating the problem by somehow
> rolling in the constraints to form what can subsequently be solved as a
> linear system?
>
> <snip>
From: Guillaume Provost <Guillaume@ps...> - 2003-07-15 15:14:13

Greetings,

I am, as I'm sure you all are, constrained in the number of dynamic lights I can light a mesh with. I've implemented a 'degrading scheme' where I degrade a light from per-pixel to per-vertex to a simple ambient cue in order to manage performance (these are also scaled with respect to the distance of the light source and camera), but these transitions are not always seamless. I was wondering whether some of you have tried merging light sources as an alternative to this, and how much success you've had at getting rid of discontinuities in the lighting conditions. I currently deal mostly with 'skewed' spotlights (point-ellipsoid lights), which I use to roughly approximate rectangular area lights, and omnidirectional lights.

Guillaume.
http://www.celdamage.com
From: metanet metanet <metanet_gda@ya...> - 2003-07-15 14:48:58

hey, just wanted to point out: the Tokamak physics engine features (according to the docs) a "unique iterative method for solving constraints". One of the Tokamak guys is on this list... maybe they could shed some light?

raigan

John Bustard <John.Bustard@...> wrote:

Hi Bob

I think the physics engine solves the LCP directly through an iterative method. I haven't read enough to understand how it works, but I think this web book covers it:
http://ioe.engin.umich.edu/people/fac/books/murty/linear_complementarity_webbook/

Chapter 9, Iterative Methods for LCPs

There's also a good discussion on the ODE mailing list about it at:
http://www.q12.org/pipermail/ode/2002October/006147.html

Hope it helps,

John Bustard

----- Original Message -----
From: "Bob Dowland"
To: <gdalgorithmslist@...>
Sent: Friday, July 11, 2003 12:56 PM
Subject: RE: [Algorithms] Iterative Solver

was what I was thinking too... except Barraff is solving Af + b = a, subject to f_i, a_i >= 0 and f_i.a_i = 0, which is not as such a linear system but an LCP. Involving some (M*x=c)'ing (using Barraff's method), but the "main event" is the balancing of a_i's against f_i's.

Do we think that the "Iterator"s are reformulating the problem by somehow rolling in the constraints to form what can subsequently be solved as a linear system?
From: Ian McMeans <imcmeans@te...> - 2003-07-15 06:49:31

If you only tested to see if there was a solution to the LP problem, it wouldn't tell you if the intersection plane sliced through the brush, which is what we want to know. You can get solutions to the linear programming problem when the plane doesn't even come close to the brush: as long as at least some of the brush is on the front side of the plane, there is a solution to the LP problem.

But what we want is intersection, so we do two LP tests, one with the intersection plane's normal flipped. We then have two LP problems; if each of them has a solution, it means both planes have a chunk of the brush on their side of the half-space, which means the plane intersects the brush. 8)

Aaron Drew said:
> My apologies if this is trivial but how exactly can two back-to-back planes be
> used to solve this with linear programming? The post doesn't seem to go into
> any detail.
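Ian's two-LP test can be sketched as follows for a brush given as an intersection of half-spaces {x : Ax <= b}. Using scipy.optimize.linprog as the LP solver is my choice (the post names no solver), and `eps`, which makes the "strictly on this side" constraints numerically strict, is an assumed tolerance.

```python
import numpy as np
from scipy.optimize import linprog

def halfspaces_feasible(A, b):
    """Is {x : A x <= b} non-empty? A zero objective turns linprog into
    a pure feasibility test: status 0 means a feasible point was found."""
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    return res.status == 0

def plane_slices_brush(A, b, n, d, eps=1e-6):
    """The plane n.x = d slices the brush {x : A x <= b} iff the brush
    has points strictly on BOTH sides of the plane - i.e. both LPs
    (one with the plane's normal flipped) are feasible."""
    # front side: n.x >= d + eps, written as -n.x <= -(d + eps)
    front = halfspaces_feasible(np.vstack([A, -n]), np.append(b, -(d + eps)))
    # back side (flipped normal): n.x <= d - eps
    back = halfspaces_feasible(np.vstack([A, n]), np.append(b, d - eps))
    return front and back
```

With the unit cube as the brush, the plane x = 0.5 slices it while the plane x = 2 does not, matching the point of Ian's counterexample: the x = 2 plane still has the whole brush on one side, so a single LP would report a solution.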
From: Aaron Drew <ripper@in...> - 2003-07-15 02:54:40

My apologies if this is trivial, but how exactly can two back-to-back planes be used to solve this with linear programming? The post doesn't seem to go into any detail.

On Sun, 13 Jul 2003 05:24 pm, Ian McMeans wrote:
> I think a pretty good solution was posted by Per Vognsen here:
> http://www.flipcode.com/cgibin/msg.cgi?showThread=00008625&forum=3dtheory&id=1
>
> You started that thread, in fact. Is there a reason you don't want to use
> his linear programming method, or his make-a-polygon method? Maybe you
> could elaborate on what you're looking for...