From: Andrey Iones <iones@sa...> - 2004-01-28 21:37:41

Gentlemen,

Can anyone share their experience with grass field animation using vertex shaders?

Thanks,
Andrey.

--
Andrey Iones
Project Leader, PhD in Computer Science
http://www.saber3d.com
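The usual starting point for this is a per-vertex sway: bend each blade in the wind direction by an amount that grows with height above the root, with a phase offset per blade so the field doesn't move in lockstep. The sketch below (my own, not from the thread) writes that arithmetic as plain C++ for clarity; in practice the same few instructions live in the vertex shader, and the names are assumptions.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical sketch of the classic per-vertex grass sway. The blade bends
// in the wind direction by an amount that grows with the vertex's height
// above the root, so the root stays planted.
Vec3 SwayGrassVertex(Vec3 p, Vec3 root, Vec3 windDir, float windStrength,
                     float time, float frequency)
{
    float h = p.y - root.y;                 // height above the blade's root
    float bend = h * h * windStrength;      // quadratic falloff: tips move most
    // Phase varies with root position so neighbouring blades desynchronise.
    float phase = time * frequency + root.x * 0.7f + root.z * 1.3f;
    float s = std::sin(phase);
    p.x += windDir.x * bend * s;
    p.z += windDir.z * bend * s;
    return p;
}
```

A per-blade random phase texture or noise lookup is a common refinement, but the core is just this one sine term.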
From: Greg Seegert <greg@st...> - 2004-01-28 20:42:13

I know you asked for an algorithm to merge cones, but for what you're trying to do, do you have access to any other information besides the cone for each object? I imagine it would be pretty easy to merge the bounding boxes of your objects, and form a cone from the light to the center of the bbox, with a theta of arcsin(bounding_radius / distance). I'm sure I'm missing something here though - maybe this isn't a tight enough fit.

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Tom Forsyth
Sent: Tuesday, January 27, 2004 2:38 PM
To: gdalgorithms-list@...
Subject: [Algorithms] Bounding cones.

I have a bunch of circular-cross-section cones in 3D space, all originating from a single point (assume it's the origin). They are defined by a unit-length vector V (the central axis) and a cos(theta) term, where theta is half the base angle. All points P on the cone obey the equation (P.V)/|P| = cos(theta). I've worked out the maths to take two cones and find the cone that tightly bounds both of them. So I can merge a bunch of cones by calling this on pairs of cones, but depending on the order you combine them you get a very loose-fitting result, and the more cones you want to merge, the looser the result fits.

So I'm wondering if anyone knows a good way to find the bounding cone of a whole bunch of cones. If the cone angle goes over 180 then it's fine to return "don't know", so it doesn't need to handle crazy cases.

I'm using this to find frustums for shadow buffers - each object is represented by a cone from the (point) light that bounds the mesh, and I want to find the frustum that bounds a collection of objects as tightly as possible.

I have a feeling this is a similar problem to finding good bounding spheres, but I don't know how to solve that particularly well either. It's also almost identical to the 2D version - finding bounding circles - but not quite, because the circles are the projections of the cones on the surface of the unit sphere. You can't simply project them to a flat plane and solve the problem there, because (a) which plane do you use? and (b) they're not circles when you project them, they're ellipses.

TomF.

-------------------------------------------------------
The SF.Net email is sponsored by EclipseCon 2004, Premiere Conference on Open Tools Development and Integration. See the breadth of Eclipse activity, February 3-5 in Anaheim, CA. http://www.eclipsecon.org/osdn
_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
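The pairwise merge Tom mentions having worked out can be sketched as follows (my own formulation, not Tom's code; half-angles here are in radians rather than stored as cosines): the merged half-angle is half of (angle between axes + both half-angles), with the axis slerped from one input towards the other so both just fit.

```cpp
#include <cmath>

// A cone with apex at the origin: a unit axis and a half-angle in radians.
struct Cone { float x, y, z; float halfAngle; };

static float Dot(const Cone& a, const Cone& b)
{ return a.x * b.x + a.y * b.y + a.z * b.z; }

// Sketch of merging two cones sharing an apex. If one already contains the
// other, return it; otherwise slerp the axis and widen the half-angle.
Cone MergeCones(const Cone& a, const Cone& b)
{
    float gap = std::acos(std::fmax(-1.0f, std::fmin(1.0f, Dot(a, b))));
    if (gap + a.halfAngle <= b.halfAngle) return b;  // b already contains a
    if (gap + b.halfAngle <= a.halfAngle) return a;  // a already contains b

    float t = 0.5f * (gap + a.halfAngle + b.halfAngle);
    float u = (t - a.halfAngle) / gap;               // slerp fraction
    float s = std::sin(gap);
    float wa = std::sin((1.0f - u) * gap) / s;
    float wb = std::sin(u * gap) / s;
    Cone c = { wa * a.x + wb * b.x, wa * a.y + wb * b.y,
               wa * a.z + wb * b.z, t };
    return c;
}
```

This is exactly the operation whose order-dependence the thread is complaining about: folding cones in one at a time with it gives a valid but loose bound.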
From: Andrew Willmott <awillmott@ma...> - 2004-01-28 18:54:46

Just a thought:

* Sum the Vs weighted by solid angle to give V'; take that as your projective plane direction.
* Find the farthest point of each projected cone from the origin. Ostensibly you could work out the projected ellipse and work from there, but I bet there's a faster way.
* Fit a circle to the points; your enclosing spotlight is in direction V', with theta coming from that circle.

Projecting to a plane only starts to fail as the enclosing angle gets up towards 180, which is an acceptable failure case anyway. This isn't going to be optimal, and maybe the results won't be good enough, but I can imagine it being reasonably fast, and it doesn't suffer from hysteresis. For better results you could fit a bounding ellipse and work backwards to a spotlight that's not aligned with V'.

I wonder if it wouldn't be faster to drop down a level and do direct frustum plane fitting to the bounding spheres of the objects?

A.

Tom Forsyth wrote:
> I have a bunch of circular-cross-section cones in 3D space, all originating
> from a single point (assume it's the origin). [...]
> So I'm wondering if anyone knows a good way to find the bounding cone of a
> whole bunch of cones. If the cone angle goes over 180 then it's fine to
> return "don't know", so it doesn't need to handle crazy cases.
From: Jon Watte <hplus@mi...> - 2004-01-28 18:04:35

You absolutely, positively have to use a profiler at this point. I've found crappy implementations of STL that sometimes suck up amazing percentages of CPU. Seeing as you don't run on a cell phone, another thing to post to make the question more answerable is: what CPU, memory speed, memory amount, and graphics card? (Actually, you took out rendering, so that's not as important.) How big is each node?

I recall that you tried CodeAnalyst, which means you're probably on an Athlon. However, I recommend still using VTune (you can download the free trial) in timer-based sampling mode; it can sample 1000 times a second, and show you where the time is going on a rough scale. Or try running the same app on an Intel CPU, and use real event-based sampling. Then look at the places where you spend the time, and figure out whether you need to call those places at all. If you do, then figure out whether the problem is pipeline stalls, cache misses, page faults, ...

Cheers,
/ h+

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Robert Cannell
Sent: Wednesday, January 28, 2004 1:41 AM
To: gdalgorithms-list@...
Subject: RE: [Algorithms] Quadtree speed ups

> Thank you all for your help so far. I feel silly for having such a problem
> with what I thought was a simple thing. Learn something new every day :P. [...]
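Before reaching for a real sampling profiler, a zero-dependency scoped timer can at least confirm which bracketed region (the quadtree traversal, say) the milliseconds are going to. A minimal sketch, assuming C++11 `<chrono>`:

```cpp
#include <chrono>

// RAII timer: measures the lifetime of the object in milliseconds and writes
// the result through 'out' when the scope closes. Not a substitute for
// VTune's per-instruction sampling, but enough to bracket a suspect region.
struct ScopedTimerMs {
    std::chrono::steady_clock::time_point start;
    double* out;
    explicit ScopedTimerMs(double* result)
        : start(std::chrono::steady_clock::now()), out(result) {}
    ~ScopedTimerMs() {
        *out = std::chrono::duration<double, std::milli>(
                   std::chrono::steady_clock::now() - start).count();
    }
};
```

Wrap the traversal in a scope containing one of these, print the result per frame, and you know within a frame or two whether the tree walk or the render path is the hot spot.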
From: <castanyo@ya...> - 2004-01-28 15:40:39

Megan Fox wrote:
> It does look a bit better, but as I thought I said (or at least meant to) in
> the original email, it really does seem like we should be getting MUCH
> better results. I agree the painted normals aren't very good... I'm not
> really sure it's possible to paint good normals anyway, but they do look
> better than the soup the ATI tool is putting out.
>
> TomF, thanks for the input. If anyone happens to know what sort of data the
> ATI normal mapper tool puts out (and so how to properly handle it), or a
> better tool period, such would be appreciated.

Bear in mind that you should use the same tangent basis that the normal mapper uses in order to obtain good results. So if your application generates a different tangent basis, that may explain the bad results.

You can also try my normal-mapper tool; it's a plugin for max4 or max5.1, and you can grab it here: http://www.ludicon.com/castano/files/old/ I used the nvidia MeshMender to compute the tangent basis in that tool, so that it would be easy to reproduce them. The tool is free, but unsupported and discontinued.

--
Ignacio Castaño
castanyo@...
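The tangent-basis mismatch Ignacio describes is worth making concrete. Below is the standard per-triangle tangent derivation from positions and UVs (a generic textbook formulation, not the MeshMender code he refers to); if the baking tool and the renderer disagree on even the handedness of this basis, the normal map decodes into exactly the kind of "soup" described above.

```cpp
#include <cmath>

struct V3  { float x, y, z; };
struct UV2 { float u, v; };

// Returns the (unnormalised) tangent of one triangle from its positions and
// UVs: the direction in model space along which the texture's u axis runs.
// Assumes the UV mapping is non-degenerate (the 2x2 UV matrix is invertible).
V3 TriangleTangent(V3 p0, V3 p1, V3 p2, UV2 t0, UV2 t1, UV2 t2)
{
    V3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    V3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    float du1 = t1.u - t0.u, dv1 = t1.v - t0.v;
    float du2 = t2.u - t0.u, dv2 = t2.v - t0.v;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);
    V3 tan = { (dv2 * e1.x - dv1 * e2.x) * r,
               (dv2 * e1.y - dv1 * e2.y) * r,
               (dv2 * e1.z - dv1 * e2.z) * r };
    return tan;
}
```

Per-vertex bases are then built by averaging these across the faces sharing a vertex, and both the baker and the renderer must average (and orthogonalise) the same way.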
From: Willem de Boer <wdeboer@pl...> - 2004-01-28 12:40:54

> You can take the structure of Welzl's algorithm (who took it from
> Seidel's LP algorithm) and add the necessary operations for your case:
>
> 1. Test a cone for containment in another cone
> 2. Construct a bounding cone for a set of 1 (trivial), 2, or 3 cones.
>
> Notably the hull for 3 cones could be a challenge.

I think as a reasonable approximation, you could take the average of the three Vi's that make up the three cones {Ci}, and use that as the V for your containing cone. Then compute H (i.e., the cosine of the half-angle of V) from that..?

--
Willem
From: Gino van den Bergen <gvandenbergen@pl...> - 2004-01-28 11:57:00

> -----Original Message-----
> From: Tom Forsyth [mailto:tom.forsyth@...]
> Sent: Wednesday, January 28, 2004 12:07
> To: gdalgorithms-list@...
> Subject: RE: [Algorithms] Bounding cones.
>
> [Stuff deleted]
>
> Which means that although Nick is (probably) right when he says that any
> bounding cone is formed by at most three cones, adding another cone may
> produce a new bounding cone that uses a _completely different three_.
> Which is a pain.

It is not as much of a pain as it seems. This problem occurs also for bounding spheres. Welzl showed in http://citeseer.nj.nec.com/welzl91smallest.html that on average a randomized algorithm takes only linear time. That is, the number of "painful" moments is relatively small.

> However, I suspect the number you actually need to track will be reasonable
> in practice - you should be able to throw most of them away (anything that
> is inside the hull is definitely never going to be part of the hull in
> future). So this is certainly a plausible avenue of attack.

You can take the structure of Welzl's algorithm (who took it from Seidel's LP algorithm) and add the necessary operations for your case:

1. Test a cone for containment in another cone
2. Construct a bounding cone for a set of 1 (trivial), 2, or 3 cones.

Notably, the hull for 3 cones could be a challenge.

Gino van den Bergen
http://www.dtecta.com
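Operation 1 on Gino's list is the easy one, and it is cleanest in angle space rather than cosine space: cone A fits inside cone B (same apex) iff the angle between the axes plus A's half-angle does not exceed B's half-angle. A minimal sketch, with half-angles in radians (my representation, not from the thread):

```cpp
#include <cmath>

// A cone with apex at the origin: unit axis (x, y, z), half-angle in radians.
struct SpotCone { float x, y, z, halfAngle; };

// True iff 'inner' lies entirely inside 'outer'. The small epsilon absorbs
// floating-point noise so a cone is reported as containing itself.
bool ConeContains(const SpotCone& outer, const SpotCone& inner)
{
    float d = outer.x * inner.x + outer.y * inner.y + outer.z * inner.z;
    float gap = std::acos(std::fmax(-1.0f, std::fmin(1.0f, d)));
    return gap + inner.halfAngle <= outer.halfAngle + 1e-6f;
}
```

Operation 2, the exact bounding cone of three cones, is the genuinely hard part the thread keeps circling back to.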
From: Stefan Boberg <stefan.boberg@di...> - 2004-01-28 11:31:05

The old scene graph discussion! I think part of the problem is that people have different ideas of what a "scene graph" really is. Many think it's the uber-structure used to represent the entire world; some (including myself) have more general definitions.

A problem with many generic scene graph implementations is that in practice, they simply perform worse than rendering systems with more hardwired behaviour. This is especially true for the PS2, where memory access is something you really want to avoid. "Pure" OO designs often perform like crud compared to more pragmatic approaches, simply because with OO you tend to group data in ways that make sense from a purely structural perspective instead of a performance perspective.

A prime example would be your reference to having a node per bone in a skeleton. This does not perform well at all and offers little, if any, benefit. You're much better off describing your skeleton hierarchy using an array of transformation matrices and an array of indices describing the relationships between these transformations.

Another really bad idea is hierarchical state propagation - for example, having some kind of node for controlling whether backface culling is on or off, which affects all nodes below it. This "feels" powerful but just ends up complicating and slowing the system down by having to maintain state stacks etc., and it's rarely useful in a real-life scenario.

Well, I could go on forever actually, but I have to go. This is more of a SWEng-type discussion anyway, really. In the end I think this is all about being pragmatic and not getting carried away over-generalizing systems when it has little benefit. The problem is that often you won't know this until you've tried both approaches ;)

/Stefan

Stefan Boberg
Chief Technical Officer
Digital Illusions CE AB
From: Emanuele Salvucci <info@fo...> - 2004-01-28 11:09:52

I have tried ORB, but honestly I prefer Microwave ( http://www.evasion3d.com ). It's everything but free... but it's far better. It bakes everything, uses projection (customizable distance and projection modes), and is generally 20 times faster than the standard Lightwave plugin.

Best,
Emanuele Salvucci.

----- Original Message -----
From: "Sebastien Kuntz" <sebastien.kuntz@...>
To: <gdalgorithms-list@...>
Sent: Wednesday, January 28, 2004 11:02 AM
Subject: RE: [Algorithms] Normal mapping error - not sure if it's a generation error, or a rendering error

> I think ORB ( http://www.soclab.bth.se/practices/orb.html ) might be doing
> what you want, but I've never tested it. [...]
From: Tom Forsyth <tom.forsyth@ee...> - 2004-01-28 11:07:25

Aha: "It can be shown that the central axis of such a cone has the following property: if one associates a point on the unit sphere with each face normal in the cluster, and computes the 3D convex hull of this point set, [the axis] must pass through the closest point from the origin to that convex hull."

I guess that makes sense - the 2D version of the same thing "obviously works", but my brain doesn't find the 3D version as obvious. Not all that sure I want to try constructing the convex hull of circles at runtime though - sounds expensive.

... Argh. I worked out a lovely O(n) algorithm where you only ever track at most three cones that make up the bounding hull feature closest to the origin (either an edge or a plane) as you go along (similar to what Nick was saying). Problem is, that's not enough. You can never guarantee to throw away information and still get an optimal hull.

Imagine a regular n-gon of cones of the same radius (Ci). Then place another cone (C0) quite a way off. The bounding cone is now formed by C0 and one or two of the Ci - the ones furthest from C0. But depending where you put C0, it's a different Ci. So until you have C0, you can never throw away any Ci, no matter how big n is.

Which means that although Nick is (probably) right when he says that any bounding cone is formed by at most three cones, adding another cone may produce a new bounding cone that uses a _completely different three_. Which is a pain.

However, I suspect the number you actually need to track will be reasonable in practice - you should be able to throw most of them away (anything that is inside the hull is definitely never going to be part of the hull in future). So this is certainly a plausible avenue of attack. Got me some maths to do.

TomF.

> -----Original Message-----
> From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On
> Behalf Of Peter-Pike Sloan
> Sent: 28 January 2004 05:50
> To: gdalgorithms-list@...
> Subject: RE: [Algorithms] Bounding cones.
>
> I haven't been following this thread that closely, but Pedro Sander did
> a very nice job of computing optimal sphere/cone hierarchies for his
> Silhouette Clipping paper:
>
> http://pedrosander.com/publications/
>
> (section 5 in the above paper) [...]
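For comparison against any exact scheme, the cheap O(n) baseline is worth having on hand (this is my sketch of the obvious approach, not Tom's algorithm): take the normalised sum of the axes as the bounding axis, then widen the half-angle until every input cone fits. It is exactly the kind of loose fit the thread is trying to beat, but it is order-independent and makes a reasonable starting guess.

```cpp
#include <cmath>
#include <vector>

// A cone with apex at the origin: unit axis (x, y, z), half-angle in radians.
struct DirCone { float x, y, z, halfAngle; };

// Loose O(n) bounding cone: axis = normalised sum of input axes, half-angle
// = the largest (axis gap + input half-angle) over all inputs.
DirCone LooseBoundingCone(const std::vector<DirCone>& cones)
{
    float sx = 0, sy = 0, sz = 0;
    for (const DirCone& c : cones) { sx += c.x; sy += c.y; sz += c.z; }
    float len = std::sqrt(sx * sx + sy * sy + sz * sz);
    DirCone out = { sx / len, sy / len, sz / len, 0.0f };
    for (const DirCone& c : cones) {
        float d = out.x * c.x + out.y * c.y + out.z * c.z;
        float need = std::acos(std::fmax(-1.0f, std::fmin(1.0f, d))) + c.halfAngle;
        if (need > out.halfAngle) out.halfAngle = need;
    }
    return out;
}
```

Weighting the sum by solid angle (as Andrew Willmott suggests upthread) is a natural refinement; the degenerate case where the axes sum to near zero corresponds to an enclosing angle near 180, which the thread already treats as "don't know".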
From: Sebastien Kuntz <sebastien.kuntz@no...> - 2004-01-28 10:28:18

From: Megan Fox <shalinor@...>
> TomF, thanks for the input - if anyone happens to know what sort of data
> the ATI normal mapper tool puts out (and so how to properly handle it), or
> a better tool period, such would be appreciated.

I think ORB ( http://www.soclab.bth.se/practices/orb.html ) might be doing what you want, but I've never tested it.

ORB (Open Render Bump) is a free tool that automatically creates these bump maps from a high-polygon and a low-polygon model. ORB also generates displacement maps and decal/diffuse maps.

Has anyone tried it?

--
Sébastien Kuntz
Virtual Reality Lab
S.N.C.F French Railways
"You're not here, this isn't happening"
From: Willem de Boer <wdeboer@pl...> - 2004-01-28 09:41:59

Actually, it might prove to be trickier than that. Hc as a function of Vc is:

  Hc(Vc) = max[ <Vi,Vc> + Hi ],  for all 1 <= i <= N

And the inequality constraints also contain Hc, which is exactly the function we are trying to minimize. So the problem is looking very problematic :) I do not (yet) know enough theory to work this all out; maybe a heuristic approach like Jonathan's is best at the moment.

--
Willem

-----Original Message-----
From: Willem de Boer
Sent: Wednesday, January 28, 2004 10:02 AM
To: 'gdalgorithms-list@...'
Subject: RE: [Algorithms] Bounding cones.

[...]
From: Robert Cannell <emailme@xt...> - 2004-01-28 09:40:56

Thank you all for your help so far. I feel silly for having such a problem with what I thought was a simple thing. Learn something new every day :P.

> That's way out of bounds, it sounds to me like you are doing something
> horribly wrong. If I were you I would log your quadtree traversal for
> one frame; I bet you have some O(N^2) (or worse) iteration/recursion
> hiding in your program.

> If at each and every one of those leaves you go off and call some deep
> render path or other routine that kicks the tree out of cache you've just
> wasted all that effort in generating a cache friendly layout.

I agree with these points also, I just need to find the exact place where it is happening :P. I will attempt to log a frame as Thatcher pointed out.

Basically this is all there is to my quadtree node recursion:

function node->GetRenderableItems()
  - Get child iterator ( stl hash_map )
  - Loop through children
    - call childnode->GetRenderableItems()
  // If leaf
  - Get geometry iterator
  - Loop through geometry and add to render queue list ( vector )

> For running on a cell phone, this is an amazing frame rate! What algorithm
> did you use to get there?

It is my own special crappy-slow algorithm :).

> If there is no change between 33x33 and 17x17, then the bottleneck isn't
> there - there's no use trying 65x65

I did realise this. I have removed all my render calls, which didn't make a difference - I was just mentioning it because Ivan had mentioned that 17x17 vertices was too few to be using :).

> -----Original Message-----
> From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On
> Behalf Of Thatcher Ulrich
> Sent: Wednesday, 28 January 2004 5:15 a.m.
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] Quadtree speed ups
>
> On Jan 27, 2004 at 10:15 +1300, Robert Cannell wrote:
> >
> > A) Does this sound right for a 16x16 quadtree (4 levels, 256 patches
> > when in full view, with 50fps)? Sometimes I think it doesn't, and
> > sometimes I think that I have done something horribly wrong ;).
>
> That's way out of bounds, it sounds to me like you are doing something
> horribly wrong. If I were you I would log your quadtree traversal for
> one frame; I bet you have some O(N^2) (or worse) iteration/recursion
> hiding in your program.
>
> --
> Thatcher Ulrich
> http://tulrich.com
From: Willem de Boer <wdeboer@pl...> - 2004-01-28 09:01:46

Hello everyone,

Very interesting problem, Tom. I think this belongs to that esoteric field called computational geometry; you should consult a book or two on the subject, it's got applications in all fields within computer graphics. Famous problems it tries to tackle include the optimal bounding volume problem.

After my first coffee this morning, I decided you could pose your problem mathematically, as follows:

Definition 1: A cone C is defined as an axis V and the cosine of its half-angle, H.

Definition (theorem really) 2: Given two cones C1 and C2, WITH COMMON ORIGINS: C1 is contained within C2 iff <V1,V2> + H1 <= H2, where <,> denotes the dot product.

Tom Forsyth's Problem: Given a collection {Ci} of cones with a common origin, axes {Vi} and cosine-half-angles {Hi}, find a cone C with axis Vc such that Hc is minimised, and such that Ci is contained within C, for each i.

So it seems to me, one could treat this as a minimisation problem: try to find a Vc so as to MINIMISE Hc, subject to N inequality constraints of the form:

  <Vc,Vi> + Hi <= Hc.

So, maybe search for minimization, Kuhn-Tucker conditions, Lagrange multipliers, that sort of stuff?

--
Willem
From: Lars Wilke <lars@ch...> - 2004-01-28 07:03:35

I'm not sure I understand why a linear quadtree is inappropriate here. Linear quadtrees are very compact, and all of the quadtree operations (neighbour finding etc.) can be done in the linear domain (i.e. you don't need to change back to a regular quadtree).

Lars Wilke
Credo Interactive Inc.

> -----Original Message-----
> From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On
> Behalf Of Robert Cannell
> Sent: Tuesday, January 27, 2004 1:15 AM
> To: gdalgorithms-list@...
> Subject: [Algorithms] Quadtree speed ups
>
> Hi all, I have a Quadtree structure that is attached to a mini scene
> graph that I have.
>
> My problem is that it is just too slow. On a terrain of 16x16 patches
> ( each patch has 17x17 vertices ), I get a frame rate of around 50fps.
>
> Now, I did some investigation into this, because at first I didn't think
> it was my QT, so I removed all my render calls, and the frame rate
> changed by about 5%. If I reduce the amount of recursion that my tree
> does, I get a noticeable increase in frame rate.
>
> I read in Game Programming Gems 2 about Linear Quadtrees by Matt
> Pritchard, and how he was having speed issues because he was receiving a
> lot of cache misses because his quadtree nodes were not adjacent in
> memory.
>
> So my questions are:
>
> A) Does this sound right for a 16x16 quadtree (4 levels, 256 patches
> when in full view, with 50fps)? Sometimes I think it doesn't, and
> sometimes I think that I have done something horribly wrong ;).
>
> B) Has anyone else had this problem and if so, how did you go about
> fixing it?
>
> Thanks,
> Robert Cannell
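The locational codes behind a linear quadtree come from bit interleaving: a Morton code merges the bits of a cell's (x, y) coordinates into one integer, so every node is identified by a single key, the whole tree lives in one flat sorted array, and same-level neighbour finding reduces to integer arithmetic on the codes. A minimal sketch of the encode step:

```cpp
#include <cstdint>

// Interleaves the bits of x and y (x in the even bit positions, y in the
// odd ones) to form a 2-D Morton code. Cells that are near each other in
// space tend to be near each other in code order, which is what makes the
// flat array cache-friendly.
uint32_t MortonEncode(uint16_t x, uint16_t y)
{
    uint32_t code = 0;
    for (int b = 0; b < 16; ++b) {
        code |= (uint32_t)((x >> b) & 1u) << (2 * b);
        code |= (uint32_t)((y >> b) & 1u) << (2 * b + 1);
    }
    return code;
}
```

Production code replaces the loop with a handful of shift-and-mask steps or a lookup table, but the mapping is the same.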
From: Paul Du Bois <dubois@do...> - 2004-01-28 06:12:17

Of the 3 algorithms mentioned in Johnstone, covariance is the 2nd one suggested. The 3rd (exact) algorithm uses Voronoi regions. It's solving for the largest empty region in S^2, but of course, because of the geometry, that's pretty much the same thing as finding the smallest enclosing region. To paraphrase for your case: find the point in S^2 farthest away from any of your cones; the opposite point is the center of your bounding cone. This farthest point will be some vertex in the Voronoi diagram. Either compute the full thing (tricky) or find all possible Voronoi verts (intersections of bisectors) and check for distance from your cones. Since you have O(n^2) bisectors, that's O(n^4) intersections and O(n^5) circle-circle distance checks. Hm... maybe it's better in practice. If it's not... computing the full diagram sounds like a fun and interesting problem.

p
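The "farthest point on the sphere" formulation is easy to sanity-check numerically before committing to a spherical Voronoi diagram. A brute-force Monte Carlo sketch (my own illustration, not Johnstone's algorithm): a bounding cone centred at -p must have half-angle pi - min_i(angle(p, Vi) - Hi), so the sample direction p that maximises that minimum gives the (approximately) smallest bounding cone.

```python
import math, random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def approx_min_bounding_cone(cones, samples=50000, rng=random):
    """Cones are (unit_axis, half_angle) pairs sharing an apex at the origin.
    Sample directions p uniformly on the sphere; the p farthest (angularly)
    from every cone has the bounding axis at its antipode.
    Cost is O(samples * len(cones)) - a check, not a production algorithm."""
    best_d, best_p = -math.inf, None
    for _ in range(samples):
        # uniform random unit vector
        z = rng.uniform(-1.0, 1.0)
        t = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        p = (r * math.cos(t), r * math.sin(t), z)
        # angular distance from p to the nearest cone boundary
        d = min(math.acos(max(-1.0, min(1.0, dot(p, v)))) - h
                for v, h in cones)
        if d > best_d:
            best_d, best_p = d, p
    axis = tuple(-x for x in best_p)
    return axis, math.pi - best_d
```

The exact algorithm would replace the sampling with the Voronoi vertex enumeration Paul describes; the objective being maximised is the same.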
From: Peter-Pike Sloan <ppsloan@wi...> - 2004-01-28 05:50:24

I haven't been following this thread that closely, but Pedro Sander did a very nice job of computing optimal sphere/cone hierarchies for his Silhouette Clipping paper: http://pedrosander.com/publications/ (section 5 in the above paper)

Peter-Pike Sloan

-----Original Message-----
From: gdalgorithms-list-admin@... On Behalf Of Jonathan Blow
Sent: Tuesday, January 27, 2004 7:27 PM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Bounding cones.

> However, you might have something there. If I simply took n points
> around the edge of each cone, where n is about 36 (could even change
> the number according to the size of the cone - typically I will have
> cones of widely varying sizes, since the objects vary widely in size,
> and are at very different distances to the light). Then cluster them,
> and use that as a starting guess. As long as the initial guess is
> fairly accurate, I think it's true to say you can then do the
> one-at-a-time merging and still get a very good bounding cone at the end.

The covariance stuff is really fast, once you have the initial values inside the covariance matrices; you can just transform them and add them after that. And actually, you can compute the closed-form covariance of a cone, and evaluating that would probably be faster than using individual points. So my original 1000 points idea was overzealous and unnecessary. By default when using such closed-form covariance, big cones would be favored, but you can compensate for this just by scaling up the density of the small cones.

I'm not saying it's the best solution for your problem, but it's worth trying if yer gonna investigate various solutions.

-------------------------------------------------------
The SF.Net email is sponsored by EclipseCon 2004 - Premiere Conference on Open Tools Development and Integration. See the breadth of Eclipse activity. February 3-5 in Anaheim, CA.
http://www.eclipsecon.org/osdn
_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
From: Megan Fox <shalinor@ci...> - 2004-01-28 05:48:32

It does look a bit better, but as I thought I said (or at least meant to) in the original email, it really does seem like we should be getting MUCH better results. I agree the painted normals aren't very good... I'm not really sure it's possible to paint good normals anyways - but they do look better than the soup the ATI tool is putting out.

TomF, thanks for the input - if anyone happens to know what sort of data the ATI normal mapper tool puts out (and so how to properly handle it), or a better tool period, such would be appreciated.

For the moment, the artist is working up a much higher-res version to see if that clears the problem up. If that doesn't solve it, I may end up rewriting the normal map shader we're using entirely (it was one of the Neb2 default ones) - maybe there's a bug in it, and it would be good experience anyways.

Thanks for your help,
- Megan Fox

> All that said, your hand-painted normal-mapped mesh doesn't actually
> look better than your vertex-lit mesh (in the still screenshot).
> Normal mapping is expensive (both in terms of GPU resources and
> developer resources), so you should only do it if you are really
> getting your effort's worth. Clearly your artist isn't drawing very
> good normal maps (I mean... I see a little bit of extra 3D-ness around
> the edges of the eyes and lips... but that's it).
From: Jonathan Blow <jon@nu...> - 2004-01-28 03:26:57

> However, you might have something there. If I simply took n points around
> the edge of each cone, where n is about 36 (could even change the number
> according to the size of the cone - typically I will have cones of widely
> varying sizes, since the objects vary widely in size, and are at very
> different distances to the light). Then cluster them, and use that as a
> starting guess. As long as the initial guess is fairly accurate, I think
> it's true to say you can then do the one-at-a-time merging and still get a
> very good bounding cone at the end.

The covariance stuff is really fast, once you have the initial values inside the covariance matrices; you can just transform them and add them after that. And actually, you can compute the closed-form covariance of a cone, and evaluating that would probably be faster than using individual points. So my original 1000 points idea was overzealous and unnecessary. By default when using such closed-form covariance, big cones would be favored, but you can compensate for this just by scaling up the density of the small cones.

I'm not saying it's the best solution for your problem, but it's worth trying if yer gonna investigate various solutions.
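For anyone wanting to try the boundary-sampling variant Tom and Jonathan are converging on, here is a hedged Python sketch (my own illustration, not either poster's code): sample each cone's boundary circle on the unit sphere, use the normalised mean direction as a cheap stand-in for the covariance major axis, then take the smallest half-angle that covers every sample. For a fixed axis, each cone's extreme directions lie on its boundary circle, so dense ring samples give a near-exact half-angle for that axis.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ring_points(axis, half_angle, n=36):
    """n points around the cone's boundary circle on the unit sphere."""
    v = normalize(axis)
    ref = (0.0, 0.0, 1.0) if abs(v[2]) < 0.9 else (1.0, 0.0, 0.0)
    u = normalize(cross(v, ref))        # orthonormal basis (u, w)
    w = cross(v, u)                     # perpendicular to the axis
    c, s = math.cos(half_angle), math.sin(half_angle)
    pts = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        ring = tuple(math.cos(t) * ui + math.sin(t) * wi
                     for ui, wi in zip(u, w))
        pts.append(tuple(c * vi + s * ri for vi, ri in zip(v, ring)))
    return pts

def fit_cone(cones, n=36):
    """Axis = normalised mean of all boundary samples (cheap stand-in for
    the covariance major axis); half-angle = smallest angle covering them."""
    pts = [p for v, h in cones for p in ring_points(v, h, n)]
    mean = tuple(sum(coord) / len(pts) for coord in zip(*pts))
    axis = normalize(mean)
    dots = (max(-1.0, min(1.0, sum(a * b for a, b in zip(axis, p))))
            for p in pts)
    return axis, max(math.acos(d) for d in dots)
```

The mean direction is only a good axis guess when the cones are roughly balanced in size, which is exactly the bias Jonathan notes; weighting samples per cone is the corresponding fix.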
From: Nick Carter <ncarter@nv...> - 2004-01-28 02:54:30

Here's another wild-ass idea, which assumes you have an efficient way to compute the minimum bounding sphere of a bunch of spheres.

Map cones to spheres in the following manner: intersect each cone in your collection with the unit sphere. This will give you a bunch of circles in R^3. Convert the circles into spheres, i.e. treat the circles as great circles of a sphere. Now compute the minimum bounding sphere, S, of these spheres. Then re-intersect S with the unit sphere; the result will be a circle, which you can then treat as a cone.

Anyone care to verify/disprove that the resulting cone will be a bounding cone? I don't think it's necessarily minimal.

- nick carter

> -----Original Message-----
> From: Jonathan Blow [mailto:jon@...]
> Sent: Tuesday, January 27, 2004 5:44 PM
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] Bounding cones.
>
> My approach to this problem would just be to throw a clustering
> algorithm at the cones and then take the cluster centroid as the
> middle of your new cone. It'll give you higher quality results than
> two-at-a-time.
>
> In unfriendly environments (non-L^n, say), the standard clustering
> algorithm becomes a sort of two-at-a-time scheme anyway (k-means
> clustering). As for what exactly to cluster, you could try to map to a
> space where each cone axis gives you a point, or you could just stay
> in R^3 and generate m random points within each cone and just throw
> those into the clustering algorithm.
>
> Such a k-means scheme will, I think, still give you better results
> than the simple two-at-a-time expando-cone solution, because with
> k-means the cluster center is allowed to adapt to all the input data
> before you commit to increases in cone radius.
>
> Actually, since there's only one cluster and everything must fit into
> the cluster, you might get away with something simpler. Let's assume
> you just randomly generate a bunch of sample points from each cone
> (like 1000 points - uhh, let's hope this is a preprocess).
> Take the mean and the covariance of all these points, then transform
> the covariance into the space of the mean [see *]. Then you take the
> ellipsoid corresponding to that covariance blob, and pick the two
> minor axes. These give you a reasonable approximation for the cross
> section of the cone. Likewise, the major axis of the ellipsoid gives
> you a good approximation for the major axis of the cone. Given that
> major axis, you can then just fit the cone from there.
>
> As with all these things, you may want to perturb the solution some in
> order to try and improve the results (uhh, let's hope this is a
> preprocess).
>
> [*: My gdmag article "My Friend, The Covariance Body" says how to do this.
> http://numbernone.com/product/My%20Friend,%20the%20Covariance%20Body/index.html
> Jim Van Verth also has an appropriate article in Game Programming Gems 4.]
>
> ----- Original Message -----
> From: "Tom Forsyth" <tom.forsyth@...>
> To: <gdalgorithms-list@...>
> Sent: Tuesday, January 27, 2004 7:05 PM
> Subject: RE: [Algorithms] Bounding cones.
>
> > Oooh - so close. Got me all excited. I eventually found a paper that
> > summarised (and even corrected!) the Sederberg & Meyers paper, but
> > it's just the same one I found - it adds one cone at a time to an
> > overall cone, and has the same properties. They admit it's
> > suboptimal, but it's still a good early-out culler. Except that's
> > fine for their case - the difference between culling perfectly and
> > what they have is an extra 1% - tiny. In my case, I have twice the
> > pixels to render :(
> >
> > Ah well - good try though. Your citeseer-fu is strong. Cheers.
> >
> > TomF.
> >
> > > -----Original Message-----
> > > From: gdalgorithms-list-admin@... On Behalf Of Mark Duchaineau
> > > Sent: 27 January 2004 23:32
> > > To: gdalgorithms-list@...
> > > Subject: Re: [Algorithms] Bounding cones.
> > > There is a nice result by Sederberg and Meyers, but it is old and
> > > thus only available in libraries:
> > > http://citeseer.nj.nec.com/context/178634/0
> >
> > This link gives a number of papers that reference it, so perhaps some
> > of them will give a useful summary.
> >
> > Cheers,
> >
> > Mark D.
> >
> > Tom Forsyth wrote:
> > > I have a bunch of circular-cross-section cones in 3D space, all
> > > originating from a single point (assume it's the origin).
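Nick's cone-to-sphere mapping (above) can be written down explicitly. A hedged sketch of the two conversions, from my own derivation and worth double-checking: a cone (V, h) meets the unit sphere in a circle with centre cos(h)*V and radius sin(h), and a sphere (c, r) that cuts the unit sphere does so in exactly the directions p with p.c = (1 + |c|^2 - r^2) / 2.

```python
import math

def cone_to_sphere(axis, half_angle):
    """Map a cone (unit axis, half-angle in radians) to the sphere whose
    great circle is the cone's intersection with the unit sphere: that
    circle has centre cos(h)*axis and radius sin(h) in R^3."""
    c = tuple(math.cos(half_angle) * a for a in axis)
    return c, math.sin(half_angle)

def sphere_to_cone(center, radius):
    """Re-intersect a sphere (center, radius) with the unit sphere.
    Points p with |p| = 1 and |p - c| = r satisfy
    p.c = (1 + |c|^2 - r^2) / 2, which after dividing by |c| gives the
    cosine of the resulting cone's half-angle. Assumes the sphere actually
    cuts the unit sphere (the acos argument is clamped otherwise)."""
    norm = math.sqrt(sum(x * x for x in center))
    d = (1.0 + norm * norm - radius * radius) / (2.0 * norm)
    axis = tuple(x / norm for x in center)
    return axis, math.acos(max(-1.0, min(1.0, d)))
```

The two maps round-trip exactly, which is what makes the bounding-sphere detour plausible; whether the result is minimal is Nick's open question.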
From: Tom Forsyth <tom.forsyth@ee...> - 2004-01-28 02:26:55

Heh - nope, not a preprocess. This is realtime. And it has to be moderately swift, coz I'm doing it around eight times for every object lit by each light every frame. Then I have to actually render some stuff :) (I'll probably have to do it less frequently than this anyway - there's other overheads killing me, but...)

However, you might have something there. If I simply took n points around the edge of each cone, where n is about 36 (could even change the number according to the size of the cone - typically I will have cones of widely varying sizes, since the objects vary widely in size, and are at very different distances to the light). Then cluster them, and use that as a starting guess. As long as the initial guess is fairly accurate, I think it's true to say you can then do the one-at-a-time merging and still get a very good bounding cone at the end.

Since I'm going to then turn this into a square frustum for rendering, there's a limit to just how good this bounding cone needs to be - I'm always going to be rendering (4-PI)/PI = 27% more pixels than the cone anyway. It was the average 2-4x number of pixels I wasn't keen on!

And Nick had a good idea. The algorithm certainly sounds plausible. I'll have to sit down with a bit of paper and scribble some geometry to see if I can figure out how to directly find the bounding cone of three cones. Two was easy because everything interesting happens along a single plane.

Cheers all - good ideas.

TomF.

> -----Original Message-----
> From: gdalgorithms-list-admin@... On Behalf Of Jonathan Blow
> Sent: 28 January 2004 01:44
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] Bounding cones.
>
> My approach to this problem would just be to throw a clustering
> algorithm at the cones and then take the cluster centroid as the
> middle of your new cone. It'll give you higher quality results than
> two-at-a-time.
>
> In unfriendly environments (non-L^n, say), the standard clustering
> algorithm becomes a sort of two-at-a-time scheme anyway (k-means
> clustering). As for what exactly to cluster, you could try to map to a
> space where each cone axis gives you a point, or you could just stay
> in R^3 and generate m random points within each cone and just throw
> those into the clustering algorithm.
>
> Such a k-means scheme will, I think, still give you better results
> than the simple two-at-a-time expando-cone solution, because with
> k-means the cluster center is allowed to adapt to all the input data
> before you commit to increases in cone radius.
>
> Actually, since there's only one cluster and everything must fit into
> the cluster, you might get away with something simpler. Let's assume
> you just randomly generate a bunch of sample points from each cone
> (like 1000 points - uhh, let's hope this is a preprocess). Take the
> mean and the covariance of all these points, then transform the
> covariance into the space of the mean [see *]. Then you take the
> ellipsoid corresponding to that covariance blob, and pick the two
> minor axes. These give you a reasonable approximation for the cross
> section of the cone. Likewise, the major axis of the ellipsoid gives
> you a good approximation for the major axis of the cone. Given that
> major axis, you can then just fit the cone from there.
>
> As with all these things, you may want to perturb the solution some in
> order to try and improve the results (uhh, let's hope this is a
> preprocess).
>
> [*: My gdmag article "My Friend, The Covariance Body" says how to do this.
> http://numbernone.com/product/My%20Friend,%20the%20Covariance%20Body/index.html
> Jim Van Verth also has an appropriate article in Game Programming Gems 4.]
>
> ----- Original Message -----
> From: "Tom Forsyth" <tom.forsyth@...>
> To: <gdalgorithms-list@...>
> Sent: Tuesday, January 27, 2004 7:05 PM
> Subject: RE: [Algorithms] Bounding cones.
>
> > Oooh - so close. Got me all excited. I eventually found a paper that
> > summarised (and even corrected!) the Sederberg & Meyers paper, but
> > it's just the same one I found - it adds one cone at a time to an
> > overall cone, and has the same properties. They admit it's
> > suboptimal, but it's still a good early-out culler. Except that's
> > fine for their case - the difference between culling perfectly and
> > what they have is an extra 1% - tiny. In my case, I have twice the
> > pixels to render :(
> >
> > Ah well - good try though. Your citeseer-fu is strong. Cheers.
> >
> > TomF.
> >
> > > -----Original Message-----
> > > From: gdalgorithms-list-admin@... On Behalf Of Mark Duchaineau
> > > Sent: 27 January 2004 23:32
> > > To: gdalgorithms-list@...
> > > Subject: Re: [Algorithms] Bounding cones.
> > >
> > > There is a nice result by Sederberg and Meyers, but it is old and
> > > thus only available in libraries:
> > > http://citeseer.nj.nec.com/context/178634/0
> >
> > This link gives a number of papers that reference it, so perhaps some
> > of them will give a useful summary.
> >
> > Cheers,
> >
> > Mark D.
> >
> > Tom Forsyth wrote:
> > > I have a bunch of circular-cross-section cones in 3D space, all
> > > originating from a single point (assume it's the origin).
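Tom's "easy" two-cone case can be written down directly. This is my reconstruction of the standard one-at-a-time merge, not his actual code (axes unit length, half-angles in radians): everything happens in the plane spanned by the two axes, the merged half-angle is half the total spread g + h1 + h2, and the axis is v1 rotated towards v2 by (h - h1).

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def merge_cones(c1, c2):
    """Smallest cone containing two cones with a common apex.
    Each cone is (unit axis, half-angle in radians); assumes the two
    axes are not exactly antipodal."""
    (v1, h1), (v2, h2) = c1, c2
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(v1, v2))))
    g = math.acos(d)                 # angle between the two axes
    if g + h2 <= h1:                 # c1 already contains c2
        return c1
    if g + h1 <= h2:                 # c2 already contains c1
        return c2
    h = 0.5 * (g + h1 + h2)          # half of the total angular spread
    t = h - h1                       # rotate v1 towards v2 by this much
    # unit vector perpendicular to v1, in the plane of the two axes
    u = normalize(tuple(b - d * a for a, b in zip(v1, v2)))
    axis = tuple(math.cos(t) * a + math.sin(t) * b for a, b in zip(v1, u))
    return axis, h
```

The three-cone case Tom wants has no such single plane, which is exactly why it needs the scribbling.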
From: Jonathan Blow <jon@nu...> - 2004-01-28 01:43:41

My approach to this problem would just be to throw a clustering algorithm at the cones and then take the cluster centroid as the middle of your new cone. It'll give you higher quality results than two-at-a-time.

In unfriendly environments (non-L^n, say), the standard clustering algorithm becomes a sort of two-at-a-time scheme anyway (k-means clustering). As for what exactly to cluster, you could try to map to a space where each cone axis gives you a point, or you could just stay in R^3 and generate m random points within each cone and just throw those into the clustering algorithm.

Such a k-means scheme will, I think, still give you better results than the simple two-at-a-time expando-cone solution, because with k-means the cluster center is allowed to adapt to all the input data before you commit to increases in cone radius.

Actually, since there's only one cluster and everything must fit into the cluster, you might get away with something simpler. Let's assume you just randomly generate a bunch of sample points from each cone (like 1000 points - uhh, let's hope this is a preprocess). Take the mean and the covariance of all these points, then transform the covariance into the space of the mean [see *]. Then you take the ellipsoid corresponding to that covariance blob, and pick the two minor axes. These give you a reasonable approximation for the cross section of the cone. Likewise, the major axis of the ellipsoid gives you a good approximation for the major axis of the cone. Given that major axis, you can then just fit the cone from there.

As with all these things, you may want to perturb the solution some in order to try and improve the results (uhh, let's hope this is a preprocess).

[*: My gdmag article "My Friend, The Covariance Body" says how to do this. http://numbernone.com/product/My%20Friend,%20the%20Covariance%20Body/index.html Jim Van Verth also has an appropriate article in Game Programming Gems 4.]
----- Original Message -----
From: "Tom Forsyth" <tom.forsyth@...>
To: <gdalgorithms-list@...>
Sent: Tuesday, January 27, 2004 7:05 PM
Subject: RE: [Algorithms] Bounding cones.

> Oooh - so close. Got me all excited. I eventually found a paper that
> summarised (and even corrected!) the Sederberg & Meyers paper, but it's
> just the same one I found - it adds one cone at a time to an overall
> cone, and has the same properties. They admit it's suboptimal, but it's
> still a good early-out culler. Except that's fine for their case - the
> difference between culling perfectly and what they have is an extra
> 1% - tiny. In my case, I have twice the pixels to render :(
>
> Ah well - good try though. Your citeseer-fu is strong. Cheers.
>
> TomF.
>
> > -----Original Message-----
> > From: gdalgorithms-list-admin@... On Behalf Of Mark Duchaineau
> > Sent: 27 January 2004 23:32
> > To: gdalgorithms-list@...
> > Subject: Re: [Algorithms] Bounding cones.
> >
> > There is a nice result by Sederberg and Meyers, but it is old and
> > thus only available in libraries:
> > http://citeseer.nj.nec.com/context/178634/0
>
> This link gives a number of papers that reference it, so perhaps some
> of them will give a useful summary.
>
> Cheers,
>
> Mark D.
>
> Tom Forsyth wrote:
> > I have a bunch of circular-cross-section cones in 3D space, all
> > originating from a single point (assume it's the origin).
From: Nick Carter <ncarter@nv...> - 2004-01-28 01:40:57

Tom,

You have a way to compute the exact, minimal bounding cone for a set of 2 cones. Can you figure out a way to compute an exact, minimal bounding cone of 3 cones?

I ask because the following seems to be true: given an arbitrary collection of 4 cones, one of the cones can be removed from the collection without changing the minimal bounding cone of the collection. Thus, (I postulate) from any collection C of cones, it suffices to select three "pivot cones" c1, c2, c3 from C, such that MinimumBoundingCone({c1,c2,c3}) is a cone that strictly contains every element of C. Or, conjectured another way: the minimum bounding cone of a cone-set C is the cone of maximum size from the collection of minimum bounding cones formed by triplets from C.

If my reasoning is correct, this should lead to a straightforward search:

init:
    int n                          // number of cones in C
    array C                        // cones we're computing the MBC of
    value T = <C[0], C[0], C[0]>   // triplet we're trying to compute
    // Contains(boundingcone, cone): boolean-valued function, true iff
    //   cone is completely within boundingcone
    // MBC3(cone, cone, cone): a fast, cone-valued function that computes
    //   a minimum bounding cone of three cones, based on some algebraic
    //   definition

loop:
    for k = 0 to n-1 {
        if (!Contains(MBC3(T[0], T[1], T[2]), C[k])) {
            // we want to insert C[k] into the triplet T
            if (Contains(MBC3(C[k], T[1], T[2]), T[0])) {
                T = <C[k], T[1], T[2]>; goto loop;   // throw out T[0]
            } else if (Contains(MBC3(C[k], T[0], T[2]), T[1])) {
                T = <C[k], T[0], T[2]>; goto loop;   // throw out T[1]
            } else if (Contains(MBC3(C[k], T[0], T[1]), T[2])) {
                T = <C[k], T[0], T[1]>; goto loop;   // throw out T[2]
            } else {
                // numerical wonkiness, or else I'm fatally wrong
                // in my conjecture
                assert(0);
            }
        }
    }
    // if we get here, MBC3(T) is an MBC of C.

Thoughts?

- nick carter

> -----Original Message-----
> From: Tom Forsyth [mailto:tom.forsyth@...]
> Sent: Tuesday, January 27, 2004 4:05 PM
> To: gdalgorithms-list@...
> Subject: RE: [Algorithms] Bounding cones.
>
> Oooh - so close. Got me all excited. I eventually found a paper that
> summarised (and even corrected!) the Sederberg & Meyers paper, but it's
> just the same one I found - it adds one cone at a time to an overall
> cone, and has the same properties. They admit it's suboptimal, but it's
> still a good early-out culler. Except that's fine for their case - the
> difference between culling perfectly and what they have is an extra
> 1% - tiny. In my case, I have twice the pixels to render :(
>
> Ah well - good try though. Your citeseer-fu is strong. Cheers.
>
> TomF.
>
> > -----Original Message-----
> > From: gdalgorithms-list-admin@... On Behalf Of Mark Duchaineau
> > Sent: 27 January 2004 23:32
> > To: gdalgorithms-list@...
> > Subject: Re: [Algorithms] Bounding cones.
> >
> > There is a nice result by Sederberg and Meyers, but it is old and
> > thus only available in libraries:
> > http://citeseer.nj.nec.com/context/178634/0
>
> This link gives a number of papers that reference it, so perhaps some
> of them will give a useful summary.
>
> Cheers,
>
> Mark D.
>
> Tom Forsyth wrote:
> > I have a bunch of circular-cross-section cones in 3D space, all
> > originating from a single point (assume it's the origin).
From: Johnson, James <James.Johnson@si...> - 2004-01-28 01:33:14

You're correct. I should have tested the obvious first.

James

-----Original Message-----
From: gdalgorithms-list-admin@... On Behalf Of Nick Carter
Sent: Tuesday, January 27, 2004 4:45 PM
To: 'gdalgorithms-list@...'
Subject: RE: [Algorithms] Bounding cones.

I don't think that this iterative technique will always generate the smallest cone - it doesn't seem to work in the case of 2D circles.

Consider three small circles of equal radius whose centers are mutually equidistant (i.e., in the pattern of an equilateral triangle). Then the minimum bounding circle of any two of the small circles will not be fully contained in the minimum bounding circle of all three of the small circles. So there's no way to "build up" a minimum bounding circle in the iterative way that James suggests.

This counterexample is easily extended from circles to cones: consider three cones, of small theta, whose centerlines, pairwise, are at equal angles to one another.

I'm very curious about bounds on optimality of like iterative techniques, in either the 2D-circle or 3D-cone (which is still 2 dof) cases. Does anyone know?

- nick carter
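Nick's counterexample is quick to confirm numerically. A throwaway check in Python, using my own numbers (unit circles, equilateral triangle of side 4) and the containment rule that circle (c1, r1) lies inside (c2, r2) iff |c1 - c2| + r1 <= r2:

```python
import math

def circle_contains(outer, inner, eps=1e-9):
    """True iff circle `inner` lies entirely inside circle `outer`."""
    (c2, r2), (c1, r1) = outer, inner
    d = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
    return d + r1 <= r2 + eps

# three unit circles, centres on an equilateral triangle of side 4
side = 4.0
centres = [(0.0, 0.0), (side, 0.0), (side / 2, side * math.sqrt(3) / 2)]
small = [(c, 1.0) for c in centres]

# minimum bounding circle of the first two: midpoint, radius side/2 + 1
mbc_two = ((side / 2, 0.0), side / 2 + 1.0)

# minimum bounding circle of all three (by symmetry): the centroid,
# radius = circumradius + 1
centroid = (side / 2, side / (2 * math.sqrt(3)))
mbc_three = (centroid, side / math.sqrt(3) + 1.0)
```

Both bounding circles contain their inputs, yet the two-circle MBC sticks out of the three-circle MBC, so the incremental build-up cannot reach the optimum.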
From: Tom Plunket <gamedev@fa...> - 2004-01-28 01:27:55

> The term "scene graph" traditionally refers to a single
> hierarchical representation of the entire world, with the
> _hierarchy_ representing all or nearly all state...

My bad - my uses have already culled out the things that I and those around me deem to be "insane," although at some level... :)

> It's all rubbish. In practice there is no single tree that is
> "the daddy". You have a multitude of structures that may be
> deep trees (e.g. BSPs or octrees), or may be very flat (e.g.
> mesh<->texture relationships). They are all independent of
> each other, and all important for different facets of
> rendering. Trying to munge all these things into one uber-tree
> called a scene graph is crazy talk.

...from an OO perspective, however, I don't see anything wrong with munging this stuff all together into one high-level system. I mean, technically it's there anyway - there is one executable, and it manages its heap, and it's all data at some level. Games are essentially databases with pretty front ends, so conceptually is there anything wrong with grouping things together logically and actually implementing it so it's efficient?

> So yes, all renderers have graphs of the scene, but many
> (probably most) do not have a single big graph.

Sure, although on the outside it almost doesn't matter - you look at the renderer and you give it what it needs. Middleware could be implemented such that you just throw "nodes" at it and it does the right thing internally. This isn't inherently good, bad, fast, or slow. It's just a conceptual issue.

> It's an idea imposed on graphics by theoreticians in an
> attempt to oversimplify a complex problem, and in doing
> so make it even more tricky.

I see the beauty in it though. What I also see is concerns in places different from where we have concerns. It doesn't necessarily make the theory wrong, just misapplied. I'm not saying that an uber scene-graph is necessarily correct; it all depends on what you're trying to do (of course).
What I am saying is that there are also conceptual or logical layers, and a high-level layer could well treat "everything as a node" while actually being an efficient pipeline.

> When all you have is Computer Science algorithms, everything
> looks like a tree :)

...and when all you love is Extreme Programming... :)

tom!