From: Jon Watte <hplus@mi...> - 2005-05-26 03:33:37

Think about how the image varies as the U/V coordinates vary outside the 0..1 range. With wrap, the signal is just a repeat of the other side, whereas with clamp, the signal "sticks" to whatever the edge-most pixel is. I'm assuming edge clamp here; with some hardware there are also fancy border texels/clamps to take into consideration. Thus, with this interpretation, this is the right thing to do.

You have to be using a filter better than the pedestrian box filter for this to matter, though, and your simple high-order filters will typically generate too much ringing to be all that useful. I've found a narrow-ish Gauss filter to be okay for MIP maps, but tastes vary. In brief: if your filter uses samples at point U,V as input, those samples should go through clamping/wrapping before looking up into the image data.

You might also want to consider what should happen if the artists use the same texture with different wrap/clamp modes; duplicating the data seems excessive. There's also the issue of neighbor/border texels if you do texture sheeting, but if you start out with unsheeted source data, that's less of an issue.

Cheers,
/ h+

Brad_Byrd@... wrote:
> My question is, is this correct? I've never heard of anyone doing this
> before, so it kind of caught me off-guard. I don't know enough about
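The rule above (run each filter tap through the texture's address mode *before* fetching the texel) can be sketched as follows. The 1-D row, the tent kernel, and the function name are all hypothetical, for illustration only, not anyone's production mipper:

```python
import numpy as np

def downsample_row(row, mode="wrap"):
    """Halve a 1-D texture row with a small tent filter, pushing each
    out-of-range tap through the address mode before the lookup:
    'wrap' picks up texels from the opposite edge, 'clamp' sticks to
    the edge texel. Illustrative sketch only."""
    n = len(row)
    taps = np.array([1.0, 2.0, 2.0, 2.0, 1.0])  # hypothetical tent kernel
    taps /= taps.sum()
    out = np.empty(n // 2)
    for i in range(n // 2):
        idx = 2 * i + np.arange(-2, 3)           # source texels around 2*i
        if mode == "wrap":
            idx %= n                             # wrap to the opposite edge
        else:
            idx = np.clip(idx, 0, n - 1)         # clamp to the edge texel
        out[i] = np.dot(taps, row[idx])
    return out
```

Near a border the two modes diverge exactly as described: for a row that is dark on the left and bright on the right, the wrapped result bleeds the right edge into the leftmost output texel, while the clamped result does not.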
From: Emanuele Salvucci <info@fo...> - 2005-05-26 02:34:24

----- Original Message -----
From: <Brad_Byrd@...>
To: <gdalgorithms-list@...>
Sent: Thursday, May 26, 2005 4:01 AM
Subject: [Algorithms] Wrap modes and MIP map generation.

> I'm working on our texture pipeline, and my lead mentioned to me that I
> should be sure and take the texture wrap mode (wrap vs. clamp) into
> account when generating MIP maps. In other words, when the mode is set to
> clamp, I should clamp my filter on the edges of the map, but when the mode
> is wrap, I should let the filter kernel wrap around and pick up pixels on
> the opposite edge of the map.
>
> My question is, is this correct? I've never heard of anyone doing this
> before, so it kind of caught me off-guard. I don't know enough about
> image/signal processing to know if it is more correct to do it this way or
> to do it the "standard" way and just always clamp the filter regardless of
> wrap mode. Is the visual difference even worth the slight increase in
> complexity?

Uhmm... this sounds like your lead wants the option for textures made up of many "charts". If you generate mips which are very small, and considering filtering, you might end up picking a black pixel (background), because not enough chart border pixels were left in the highest-res mip.

But I'm not sure this is the best solution. Probably the best solution would be to let the artist or "texture baker" software generate the extra border pixels into the original texture. Usually 6/8 border pixels for a 512x512 are enough for all the mips too.

Best,
Emanuele Salvucci
Maya-Lightwave Game Technical Artist
Lscript developer
http://www.forwardgames.com
emanueles@...

"#3. He causes things to look different so it would appear time has passed."
From: Brad_Byrd <Brad_Byrd@pl...> - 2005-05-26 01:59:52

I'm working on our texture pipeline, and my lead mentioned to me that I should be sure and take the texture wrap mode (wrap vs. clamp) into account when generating MIP maps. In other words, when the mode is set to clamp, I should clamp my filter on the edges of the map, but when the mode is wrap, I should let the filter kernel wrap around and pick up pixels on the opposite edge of the map.

My question is, is this correct? I've never heard of anyone doing this before, so it kind of caught me off-guard. I don't know enough about image/signal processing to know if it is more correct to do it this way or to do it the "standard" way and just always clamp the filter regardless of wrap mode. Is the visual difference even worth the slight increase in complexity?

Thanks,
Brad...
From: Adrian Johnston <adrian3@gm...> - 2005-05-25 21:44:37

> I'm looking for a function which provides a reasonably efficient way of
> detecting a frustum-frustum intersection. I'm not eager to use a basic
> convex collision testing function, because of all the baggage coming with
> these libraries.

You can derive a 4x4 matrix with shear from one of the frustums that would project it to being an AABB at the origin with unit extents. If you transform the other frustum by that matrix, you can then use a stripped-down version of AABB vs. frustum for your test.

Consider looking at how often trivial rejects/accepts using cones can be done.
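Adrian's reduction might be sketched roughly like this, with one simplifying assumption: instead of building the shear matrix explicitly, frustum A is represented by its combined view-projection matrix (which maps it to the unit box in its own clip space), and the test is the conservative "all corners outside one clip plane" reject rather than a full AABB-vs-frustum routine. The function name and structure are illustrative:

```python
import itertools
import numpy as np

def clip_space_reject(view_proj_a, corners_b):
    """Conservative frustum-vs-frustum rejection test.

    view_proj_a maps frustum A to the unit box in its own clip space
    (playing the role of the shear/projection matrix). corners_b holds
    frustum B's eight world-space corners, one per row. Working in
    homogeneous coordinates avoids the perspective divide: if all eight
    corners lie outside the same clip half-space, the frusta cannot
    intersect. Returns True only for a provable miss; False means
    'possibly intersecting' (the test is conservative)."""
    h = np.hstack([corners_b, np.ones((8, 1))])   # homogeneous corners
    c = h @ view_proj_a.T                         # into A's clip space
    x, y, z, w = c[:, 0], c[:, 1], c[:, 2], c[:, 3]
    return bool(np.all(x < -w) or np.all(x > w) or
                np.all(y < -w) or np.all(y > w) or
                np.all(z < -w) or np.all(z > w))

# Tiny usage example: with an identity "view-projection", frustum A is
# just the box [-1, 1]^3, so a small cube pushed out along +x is rejected.
cube = np.array(list(itertools.product([-0.5, 0.5], repeat=3)))
print(clip_space_reject(np.eye(4), cube + [5.5, 0.0, 0.0]))  # True
print(clip_space_reject(np.eye(4), cube))                    # False
```

Because it only checks the six half-spaces of A one at a time, it can return False for some separated pairs; it never falsely rejects an intersecting pair, which is the cheap early-out Adrian describes.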
From: Jon Watte <hplus@mi...> - 2005-05-25 16:13:39

What we do is make sure that we simulate all the inputs for the AI exactly the same on client and server. As long as we can make sure that the numerical environment is the same, and all the inputs are the same, then they will come to the same decisions on server and client.

There are, of course, lots of details, like how to deal with the case where packets get lost; how to deal with information that is only available on the server; how to enforce rules in code that ensure the simulations will co-evolve and not diverge; etc. We've worked pretty hard for a long time to arrive at what we feel are the most suitable techniques, for both the underlying technology and how we present this to the people actually writing the behaviors.

Cheers,
/ h+

Rowan Wyborn wrote:
> Hullo,
>
> I was wondering if anyone has any ideas or experience on replicating non-player entity (AI) movement over a network, where the entities are moving around using root motion animation. It seems to me like a hard problem to solve, because suddenly you can't do any kind of predictions based purely on a linear movement velocity.
>
> thanks,
> rowan
>
> -------------------------------------------------------
> This SF.Net email is sponsored by Yahoo.
> Introducing Yahoo! Search Developer Network - Create apps using Yahoo!
> Search APIs Find out how you can build Yahoo! directly into your own
> Applications - visit http://developer.yahoo.net/?fr=fadysdnostgq22005
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_id=6188
From: Tom Forsyth <tom.forsyth@ee...> - 2005-05-25 08:13:04

Well, there are two related problems if you mispredict. The first is that the AI is not where the client thinks they are, and the second is that they're not doing the action that it thinks they are. So if the AI is doing a commando roll, and the client doesn't get the message until quite late... you're just hosed. Even if you predicted the root motion roughly right, they're not running, they're rolling. So your "head shot" still goes into thin air.

The way I'd solve it is partly what Alen says (it's a 1D interpolation problem, just like always), and partly telling the AIs to warn all the clients ahead of time what they're going to do, and then having a very strong bias against changing their mind at the last second. Nobbling the AI so it can't react quite as fast is never a bad thing. Some would argue it's even pretty realistic :)

TomF.

> -----Original Message-----
> From: gdalgorithms-list-admin@...
> [mailto:gdalgorithms-list-admin@...] On
> Behalf Of Rowan Wyborn
> Sent: 24 May 2005 17:13
> To: Gdalgorithms (Email)
> Subject: [Algorithms] Network replication of root motion animation
>
> Hullo,
>
> I was wondering if anyone has any ideas or experience on
> replicating non-player entity (AI) movement over a network,
> where the entities are moving around using root motion
> animation. It seems to me like a hard problem to solve,
> because suddenly you can't do any kind of predictions based
> purely on a linear movement velocity.
>
> thanks,
> rowan
From: Alen Ladavac <alenlml@cr...> - 2005-05-25 05:29:11

On some similar issue, we are replicating the animation offset for keyframed movement for things like doors, platforms, etc. You can then treat the animation offset as a 1D position and do interpolation over a few frames. We have an auto-animation system, though; I reckon some people would rather use manually advancing frames, so I don't know if that's feasible in such a case.

HTH,
Alen

----- Original Message -----
From: "Rowan Wyborn" <rowan@...>
To: <gdalgorithms-list@...>
Sent: Wednesday, May 25, 2005 2:32 AM
Subject: RE: [Algorithms] Network replication of root motion animation

> yeah that's what i was thinking along the lines of, however i have one big concern. With linear motion if the server and client deviate, you can either snap or interpolate the client over a few frames to get back in sync with the server... this is generally not very noticeable to the eye even with quite large deviations. However because you are now replicating animation state, if the server/client deviate you have to snap animation state as well... and that's something which i think would be very noticeable. I have no idea how bad it would look in practice though...
>
> [...]
From: Rowan Wyborn <rowan@ir...> - 2005-05-25 02:30:08

Yeah, that's what I was thinking along the lines of, however I have one big concern. With linear motion, if the server and client deviate, you can either snap or interpolate the client over a few frames to get back in sync with the server... this is generally not very noticeable to the eye, even with quite large deviations. However, because you are now replicating animation state, if the server/client deviate you have to snap animation state as well... and that's something which I think would be very noticeable. I have no idea how bad it would look in practice though...

> -----Original Message-----
> From: Paul Du Bois [mailto:paul.dubois@...]
> Sent: Wednesday, 25 May 2005 12:14 PM
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] Network replication of root motion animation
>
> It seems so simple that perhaps I don't understand the problem you
> pose, but what's wrong with just replicating the AI's animation and
> the fact that the AI is using root motion? Surely the AI's simulation
> will evolve along the same path until it's disturbed.
>
> p
From: Paul Du Bois <paul.dubois@gm...> - 2005-05-25 02:13:59

It seems so simple that perhaps I don't understand the problem you pose, but what's wrong with just replicating the AI's animation and the fact that the AI is using root motion? Surely the AI's simulation will evolve along the same path until it's disturbed. p 
From: Rowan Wyborn <rowan@ir...> - 2005-05-25 00:10:24

Hullo,

I was wondering if anyone has any ideas or experience on replicating non-player entity (AI) movement over a network, where the entities are moving around using root motion animation. It seems to me like a hard problem to solve, because suddenly you can't do any kind of predictions based purely on a linear movement velocity.

thanks,
rowan
From: Daniel Renkel <renkel@ce...> - 2005-05-24 18:36:05

I haven't received a mail in 4 days... is this list that silent while E3 is going on? :)
From: Sindharta Tanuwijaya <zaraasran@ya...> - 2005-05-20 13:25:13

Thank you very much for the responses. The links have given me many resources to learn from, and from those links I can measure my current physics knowledge. One of the links is so great because the steps to study advanced physics are also written up. Thanks all.

Sindharta

David Black <dblack@...> wrote:
> Hi,
>
> > http://kwon3d.com/theory/jtorque/jtorque.html
> >
> > But I don't quite understand how to get the "rate of
> > change in angular momentum" as is described in the
> > last section.
>
> (hope I am not missing something and it is more complicated than it looks)
>
> You should probably look at some rigid body physics tutorial for a
> description of inertia tensors and skew symmetric matrices etc.
>
> http://www.d6.com/users/checker/dynamics.htm
>
> Has lots of info, I think his last GDMag article covers working with
> inertia tensors etc. But if not there are a number of other places on
> there which describe this.
>
> David
From: Peter-Pike Sloan <ppsloan@wi...> - 2005-05-19 16:22:28

This is explicitly addressed in the paper Alex sent out on another thread, I believe:

http://www.cs.duke.edu/researchers/artificial_intelligence/temp/eggert_rigid_body_transformations.pdf

It's not quite the covariance matrix (you have the outer product of the corresponding points, if I remember correctly; it's in the paper above.)

Peter-Pike

________________________________
From: gdalgorithms-list-admin@... On Behalf Of Christian Schüler
Sent: Thursday, May 19, 2005 2:26 AM
To: gdalgorithms-list@...
Subject: RE: [Algorithms] Finding optimal transformations

> There has been a thread on Flipcode long ago where a poster tried the covariance matrix approach, just to find out that it couldn't give him consistent orientations for symmetric objects. It was PI or -PI randomly.
>
> http://www.flipcode.com/cgi-bin/fcmsg.cgi?thread_show=11022
>
> [...]
From: <c.schueler@ph...> - 2005-05-19 09:25:49

There has been a thread on Flipcode long ago where a poster tried the covariance matrix approach, just to find out that it couldn't give him consistent orientations for symmetric objects. It was PI or -PI randomly.

http://www.flipcode.com/cgi-bin/fcmsg.cgi?thread_show=11022

-----Original Message-----
From: gdalgorithms-list-admin@... On Behalf Of Peter-Pike Sloan
Sent: Thursday, May 19, 2005 2:14 AM
To: gdalgorithms-list@...
Subject: RE: [Algorithms] Finding optimal transformations

> This is along the lines of the ideas presented in the papers earlier in this thread. In particular, if you compute the SVD of the covariance matrix (or its eigenvectors; they are the same thing in this case), you kind of ignore the diagonal terms (which is the "scaling" that exists in the optimal 3x3 transform.)
>
> Using the SVD it is easy to handle the degenerate case you mention as well...
>
> Peter-Pike Sloan
>
> (The covariance matrix turns out to be the p p^t that you use below. You never need to build the matrix p; you can build the covariance matrix directly, though: it is simply the sum of the outer products of the points minus the means...)
>
> [...]
From: Willem de Boer <wdeboer@pl...> - 2005-05-19 07:11:32

"I'm not sure what the most appropriate backup plan is for those degenerate cases"

In those cases where (p p^t) turns out to be singular (i.e., p did not have full row rank to begin with), you could then again find a least-squares solution using the pseudoinverse of (p p^t) itself. This is also exactly what the inverse of an SVD of (p p^t) would give you; the two of 'em can be shown to be equivalent.

--
Willem H. de Boer
Homepage: http://www.whdeboer.com
From: Peter-Pike Sloan <ppsloan@wi...> - 2005-05-19 00:14:16

This is along the lines of the ideas presented in the papers earlier in this thread. In particular, if you compute the SVD of the covariance matrix (or its eigenvectors; they are the same thing in this case), you kind of ignore the diagonal terms (which is the "scaling" that exists in the optimal 3x3 transform.)

Using the SVD it is easy to handle the degenerate case you mention as well...

Peter-Pike Sloan

(The covariance matrix turns out to be the p p^t that you use below. You never need to build the matrix p; you can build the covariance matrix directly, though: it is simply the sum of the outer products of the points minus the means...)

________________________________
From: gdalgorithms-list-admin@... On Behalf Of Bill Baxter
Sent: Wednesday, May 18, 2005 4:24 PM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Finding optimal transformations

> Just thought I'd throw this in the mix since no one mentioned it.
> If you just need the optimal 3x3 _transformation_, period, and you don't care
> whether it's SO(3), it's actually quite easy.
>
> [...]
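The SVD route Peter-Pike describes is essentially the orthogonal Procrustes (Kabsch) fit. A minimal sketch in NumPy, assuming 3xN point sets and a made-up function name; the sign fix on the last singular vector is what keeps the answer a proper rotation for the reflective/symmetric cases the Flipcode thread ran into:

```python
import numpy as np

def best_rotation(p, q):
    """Least-squares rotation (SO(3)) mapping point set p onto q, via
    the SVD of the 3x3 covariance matrix, as sketched in the message
    above. p and q are 3xN matrices of corresponding points; centroids
    are subtracted first. The det() correction forces a proper rotation
    rather than a reflection in degenerate/symmetric configurations."""
    p = p - p.mean(axis=1, keepdims=True)
    q = q - q.mean(axis=1, keepdims=True)
    h = p @ q.T                        # covariance: sum of outer products
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T
```

Unlike the general 3x3 fit discussed below in the thread, this one discards the scale/shear part (the "diagonal terms" Peter-Pike mentions) and returns only the rotation.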
From: Alex Mohr <amohr@cs...>  20050519 00:03:08

Also related, this paper at SIGGRAPH this year uses this sort of rigid shape matching to do some nice, stable deformation stuff: http://graphics.ethz.ch/~brunoh/s2005.html Alex >Just thought I'd throw this in the mix since no one mentioned it. >If you just need the optimal 3x3 _transformation_ period and you don't care >whether it's SO(3), it's actually quite easy. > >Put all the original points in a 3xN matrix p, and all the corresponding >target points in 3xN matrix q. And subtract the centroid off both sets of >points. > >Then you basically want to find the 3x3 matrix, T, that solves: >T p = q > >Except generally it's overconstrained, and p is nonsquare so you can't >invert it. But you can do this: >T p p^t = q p^t >T (p p^t)(p p^t)^1 = q p^t (p p^t)^1 >T = q p^t (p p^t)^1 > >In other words just use the pseudoinverse of p, and that actually gives you >the least squares solution. The nice thing is (p p^t) is just a 3x3 matrix >so it's easy to invert. Of course if (p p^t) is singular, then you need a >backup plan. That happens whenever all the p points are collinear or >coplanar, >so it's not something you can generally ignore. I'm not sure what the most >appropriate backup plan is for those degenerate cases. Have to think about >it some more. > >bb > > >On 5/17/05, Bill Baxter <wbaxter@...> wrote: >> >> Oh, ok, so it becomes a standard unconstrained nonlinear optimization >> problem then. It sounded like you were saying the objective itself was >> quadratic. I see now. >> >> So their main idea is just to take each Newton optimization step using a >> local parameterization of the rotation (like R0 * >> incrementalRotation(param[3])) rather than doing the whole optimization with >> a fixed parameterization (like R(param[3]) ), where 'param[3]' represents >> your favorite 3parameter representation of rotations. 
So you could see the
>> whole thing as not being so different from optimization on SO(3) with Euler
>> angles, except they avoid the singularities by reparameterizing locally
>> every step, and accumulating the progress made thus far into the R0 matrix.
>> Makes sense. Not as spectacularly cool as it sounded initially, though.
>>
>> Coincidentally, I took a robotics course from the second author, and just
>> about the same time he was writing that paper, it appears. Small world.
>>
>> On 5/17/05, Willem de Boer <wdeboer@...> wrote:
>> >
>> > No, you assume the objective function can be locally accurately
>> > represented by a quadratic function (i.e., the first 3 terms of its
>> > Taylor series). Then you perform some sort of Newton step to
>> > find the next best approximate point.
From: Bill Baxter <wbaxter@gm...> - 2005-05-18 23:24:40

Just thought I'd throw this in the mix since no one mentioned it.

If you just need the optimal 3x3 _transformation_ period and you don't care whether it's SO(3), it's actually quite easy.

Put all the original points in a 3xN matrix p, and all the corresponding target points in a 3xN matrix q. And subtract the centroid off both sets of points.

Then you basically want to find the 3x3 matrix, T, that solves:

T p = q

Except generally it's overconstrained, and p is non-square so you can't invert it. But you can do this:

T p p^t = q p^t
T (p p^t)(p p^t)^-1 = q p^t (p p^t)^-1
T = q p^t (p p^t)^-1

In other words, just use the pseudoinverse of p, and that actually gives you the least-squares solution. The nice thing is (p p^t) is just a 3x3 matrix, so it's easy to invert. Of course if (p p^t) is singular, then you need a backup plan. That happens whenever all the p points are collinear or coplanar, so it's not something you can generally ignore. I'm not sure what the most appropriate backup plan is for those degenerate cases. Have to think about it some more.

bb

On 5/17/05, Bill Baxter <wbaxter@...> wrote:
>
> Oh, ok, so it becomes a standard unconstrained nonlinear optimization
> problem then. It sounded like you were saying the objective itself was
> quadratic. I see now.
>
> So their main idea is just to take each Newton optimization step using a
> local parameterization of the rotation (like R0 *
> incrementalRotation(param[3])) rather than doing the whole optimization with
> a fixed parameterization (like R(param[3])), where 'param[3]' represents
> your favorite 3-parameter representation of rotations. So you could see the
> whole thing as not being so different from optimization on SO(3) with Euler
> angles, except they avoid the singularities by reparameterizing locally
> every step, and accumulating the progress made thus far into the R0 matrix.
> Makes sense.
> Not as spectacularly cool as it sounded initially, though.
>
> Coincidentally, I took a robotics course from the second author, and just
> about the same time he was writing that paper, it appears. Small world.
>
> On 5/17/05, Willem de Boer <wdeboer@...> wrote:
> >
> > No, you assume the objective function can be locally accurately
> > represented by a quadratic function (i.e., the first 3 terms of its
> > Taylor series). Then you perform some sort of Newton step to
> > find the next best approximate point.
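[Editor's note: the pseudoinverse recipe above is a few lines of numpy. This is a sketch, not code from the thread; the function name and the random test data are mine.]

```python
import numpy as np

def best_linear_map(p, q):
    """Least-squares 3x3 matrix T with T p ~= q, for 3xN source points p
    and target points q, both already centered on their centroids.
    Computes T = q p^T (p p^T)^-1; fails when p p^T is singular (all
    source points collinear or coplanar), as noted in the thread."""
    ppt = p @ p.T                        # the 3x3 normal matrix
    return q @ p.T @ np.linalg.inv(ppt)

# Tiny check: recover a known transform from noise-free data.
rng = np.random.default_rng(0)
p = rng.standard_normal((3, 10))
p -= p.mean(axis=1, keepdims=True)       # subtract the centroid
T_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 2.0]])
q = T_true @ p                           # q is automatically centered too
T = best_linear_map(p, q)
assert np.allclose(T, T_true)
```

With noisy or inconsistent q the same formula still returns the least-squares optimum; it just no longer reproduces q exactly.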
From: Paul Firth <irth@sc...> - 2005-05-18 11:08:52

gdalgorithms-list-admin@... wrote on 18/05/2005 11:51:48:

> Hi,
>
> I'm sorry if this question has already been asked before, but the archives
> seem to be offline. :(
>
> I'm looking for a function which provides a reasonably efficient way of
> detecting a frustum-frustum intersection. I'm not eager to use a basic
> convex collision testing function, because of all the baggage coming with
> these libraries.
>
> I would only be interested in a true/false collision. No contact generation
> etc. is required. Any ideas?

The totally accurate way to do it is to use either separating axes or the Minkowski difference (the maths is the same): use the axes of the frustum plane normals (pointing away from the centre of the frustum), plus the cross products of each edge of one frustum against every edge of the other... Start with a sep-axis routine for OBB vs OBB and add the extra axes (it will be optimised to assume each face has an exact opposite).

Maybe someone else will be able to give a routine if you don't want total accuracy (and the associated slowness)....

Cheers,
Paul.

Sony Computer Entertainment Europe
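[Editor's note: a minimal separating-axis sketch for general convex polyhedra, in the spirit of Paul's description. The function names and the unit-cube demo are mine; for frustum vs frustum you would feed in each frustum's distinct face normals (five, since near and far are parallel) and its distinct edge directions.]

```python
import itertools

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def project(points, axis):
    """Interval covered by the points when projected onto axis."""
    dots = [sum(p[i] * axis[i] for i in range(3)) for p in points]
    return min(dots), max(dots)

def separated_on(axis, va, vb):
    if all(abs(c) < 1e-12 for c in axis):   # zero axis from parallel edges
        return False
    amin, amax = project(va, axis)
    bmin, bmax = project(vb, axis)
    return amax < bmin or bmax < amin

def convex_overlap(va, na, ea, vb, nb, eb):
    """Separating-axis test: vertices, face normals, and edge directions
    of two convex polyhedra. True iff no candidate axis separates them."""
    axes = list(na) + list(nb) + [cross(e1, e2)
                                  for e1, e2 in itertools.product(ea, eb)]
    return not any(separated_on(ax, va, vb) for ax in axes)

# Demo: two axis-aligned unit cubes, one offset along x.
def unit_cube(x0):
    return [(x0 + dx, dy, dz) for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

AXES = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # a box's face normals == edge dirs
assert convex_overlap(unit_cube(0.0), AXES, AXES, unit_cube(0.5), AXES, AXES)
assert not convex_overlap(unit_cube(0.0), AXES, AXES, unit_cube(2.0), AXES, AXES)
```

Since only a boolean is needed, the routine can return as soon as the first separating axis is found, which `any()` already does.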
From: Erwin de Vries <erwin@vo...> - 2005-05-18 10:51:48

Hi,

I'm sorry if this question has already been asked before, but the archives seem to be offline. :(

I'm looking for a function which provides a reasonably efficient way of detecting a frustum-frustum intersection. I'm not eager to use a basic convex collision testing function, because of all the baggage coming with these libraries.

I would only be interested in a true/false collision. No contact generation etc. is required. Any ideas?

Thanks,
Erwin
From: Bill Baxter <wbaxter@gm...> - 2005-05-17 13:59:50

Oh, ok, so it becomes a standard unconstrained nonlinear optimization problem then. It sounded like you were saying the objective itself was quadratic. I see now.

So their main idea is just to take each Newton optimization step using a local parameterization of the rotation (like R0 * incrementalRotation(param[3])) rather than doing the whole optimization with a fixed parameterization (like R(param[3])), where 'param[3]' represents your favorite 3-parameter representation of rotations. So you could see the whole thing as not being so different from optimization on SO(3) with Euler angles, except they avoid the singularities by reparameterizing locally every step, and accumulating the progress made thus far into the R0 matrix. Makes sense. Not as spectacularly cool as it sounded initially, though.

Coincidentally, I took a robotics course from the second author, and just about the same time he was writing that paper, it appears. Small world.

On 5/17/05, Willem de Boer <wdeboer@...> wrote:
>
> No, you assume the objective function can be locally accurately
> represented by a quadratic function (i.e., the first 3 terms of its
> Taylor series). Then you perform some sort of Newton step to
> find the next best approximate point.
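[Editor's note: a toy sketch of the local-reparameterization idea described above. The objective, step size, and finite-difference gradient are mine (the paper under discussion uses closed-form gradients and Hessians); the local chart is the exponential map via Rodrigues' formula, one common choice of incrementalRotation.]

```python
import numpy as np

def exp_so3(w):
    """Rodrigues' formula: rotation matrix for rotation vector w."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Toy objective: align R to a fixed target rotation.
R_target = exp_so3(np.array([0.3, -0.2, 0.5]))
R0 = np.eye(3)
h = 1e-5
for _ in range(60):
    def f(w):                  # objective seen through the local chart at R0
        return np.sum((R0 @ exp_so3(w) - R_target) ** 2)
    # Finite-difference gradient at w = 0 in the 3-parameter chart.
    g = np.array([(f(h * e) - f(-h * e)) / (2.0 * h) for e in np.eye(3)])
    # Take a small step, then fold the progress into R0 and re-anchor
    # the chart there -- this is what avoids Euler-angle singularities.
    R0 = R0 @ exp_so3(-0.1 * g)
assert np.allclose(R0, R_target, atol=1e-4)
```

Because every update multiplies by an exact rotation, R0 stays on SO(3) by construction; no renormalization or constraint handling is needed.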
From: Willem de Boer <wdeboer@pl...> - 2005-05-17 13:05:03

No, you assume the objective function can be locally accurately represented by a quadratic function (i.e., the first 3 terms of its Taylor series). Then you perform some sort of Newton step to find the next best approximate point.

So, at each iteration you calculate the gradient and Hessian (which are given in closed form in the paper) at the current "best" point, then base your next iteration point on that.

_____

From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Bill Baxter
Sent: Tuesday, May 17, 2005 2:36 PM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Finding optimal transformations

On 5/17/05, Willem de Boer <wdeboer@...> wrote:

> Hi Per,
>
> "> > into one that can be solved by quadratic programming!
> Quadratic programming is pretty hardcore. Even special cases [...]"
>
> Whoops, my bad. I didn't mean quadratic programming. I meant
> the whole thing turns into optimising a quadratic function, with no
> constraints, and without having to introduce a degree of freedom
> for each added constraint.

And you can find the optimum of an unconstrained quadratic function with just a single linear system solve, no? So that does sound pretty handy.

bb
From: Bill Baxter <wbaxter@gm...> - 2005-05-17 12:36:29

On 5/17/05, Willem de Boer <wdeboer@...> wrote:
>
> Hi Per,
>
> "> > into one that can be solved by quadratic programming!
> Quadratic programming is pretty hardcore. Even special cases
> [...]"
>
> Whoops, my bad. I didn't mean quadratic programming. I meant
> the whole thing turns into optimising a quadratic function,
> with no constraints, and without having to introduce a
> degree of freedom for each added constraint.

And you can find the optimum of an unconstrained quadratic function with just a single linear system solve, no? So that does sound pretty handy.

bb
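[Editor's note: yes — for f(x) = 0.5 x^T H x + g^T x with H positive definite, setting the gradient H x + g to zero gives the minimizer in one linear solve. The matrices below are illustrative.]

```python
import numpy as np

# A small symmetric positive-definite quadratic: f(x) = 0.5 x^T H x + g^T x.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
g = np.array([1.0, -2.0, 0.5])

x_star = np.linalg.solve(H, -g)   # one linear solve gives the minimizer

# The gradient H x + g vanishes at x_star, confirming it is the optimum.
assert np.allclose(H @ x_star + g, 0.0)
```

This single solve is exactly one Newton step, which is why Newton's method finds the minimum of a truly quadratic objective in one iteration.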
From: Willem de Boer <wdeboer@pl...> - 2005-05-17 11:57:51

Hi Per,

"SO(n) is a nonsingular affine variety in R^(n^2) so you can use Lagrange multipliers to solve optimization problems on [...]"

The con of using Lagrange multipliers is that you are solving in a higher-dimensional space, because you add a dimension for each constraint. Also, you have to make sure the gradient of each of your constraints doesn't vanish at critical points. I guess this won't be such a big issue for most of the manifolds that we encounter in games.

"> > into one that can be solved by quadratic programming!
Quadratic programming is pretty hardcore. Even special cases [...]"

Whoops, my bad. I didn't mean quadratic programming. I meant the whole thing turns into optimising a quadratic function, with no constraints, and without having to introduce a degree of freedom for each added constraint.

Anyway, this technique is just another tool in your toolbox. It might fit your needs, it might not. I just thought it was a nice paper, that's all.

Cheers,
Willem
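[Editor's note: to make the "one extra dimension per constraint" point concrete, here is the classic single-constraint case — my example, not from the thread. Minimizing x^T A x subject to ||x||^2 = 1 gives the Lagrangian x^T A x - lam (x^T x - 1); its stationarity condition is A x = lam x, so the multiplier lam is an eigenvalue and the constrained minimum is the smallest-eigenvalue eigenvector.]

```python
import numpy as np

# Minimize f(x) = x^T A x subject to the single constraint ||x||^2 = 1.
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])

# Stationary points of the Lagrangian satisfy A x = lam x, so solve
# the eigenproblem; eigh returns eigenvalues in ascending order.
vals, vecs = np.linalg.eigh(A)
x_star = vecs[:, 0]                       # smallest-eigenvalue eigenvector

assert np.isclose(x_star @ x_star, 1.0)   # constraint is satisfied
# Lagrangian stationarity: gradient 2 A x - 2 lam x vanishes at x_star.
assert np.allclose(A @ x_star, vals[0] * x_star)
```

Note the nonvanishing-gradient caveat from the message: here the constraint gradient 2x never vanishes on the unit sphere, so the multiplier formulation is well behaved everywhere.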
From: Marco Hjerpe <Marco.Hjerpe@di...> - 2005-05-17 07:43:16

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of gdalgorithms-list-request@...
Sent: den 17 maj 2005 05:06
To: gdalgorithms-list@...
Subject: GDAlgorithms-list digest, Vol 1 #2062 - 2 msgs

Today's Topics:

1. Finding Joint Torque (Sindharta Tanuwijaya)
2. Re: Finding Joint Torque (David Black)

____

Message: 1
Date: Mon, 16 May 2005 06:01:43 -0700 (PDT)
From: Sindharta Tanuwijaya <zaraasran@...>
To: gdalgorithms-list@...
Subject: [Algorithms] Finding Joint Torque

Hi,

Thanks to Awen Limbourg, who helped me so much the other day. Now I've come to the next step, which is finding the torque at each joint, provided that the orientation angles are known for all frames and all joints. I've tried to search for resources on the internet, and one of the resources that I think is good is this:

http://kwon3d.com/theory/jtorque/jtorque.html

But I don't quite understand how to get the "rate of change in angular momentum" as described in the last section. Perhaps someone could help me? Thank you very much.

Sindharta T.

____

Message: 2
Date: Mon, 16 May 2005 19:29:23 +0100
From: David Black <dblack@...>
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Finding Joint Torque
Hi,

>http://kwon3d.com/theory/jtorque/jtorque.html
>
>But I don't quite understand how to get the "rate of
>change in angular momentum" as is described in the
>last section.

(hope I am not missing something and it is more complicated than it looks)

You should probably look at some rigid-body physics tutorial for a description of inertia tensors and skew-symmetric matrices etc.

http://www.d6.com/users/checker/dynamics.htm

has lots of info; I think his last GDMag article covers working with inertia tensors etc. But if not, there are a number of other places on there which describe this.

David
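[Editor's note: the standard rigid-body form of the quantity being asked about is dL/dt = I w_dot + w x (I w), with the inertia tensor I expressed in world space via I_world = R I_body R^T. This sketch and its function name are mine, not from the tutorials linked above.]

```python
import numpy as np

def angular_momentum_rate(I_body, R, w, w_dot):
    """Rate of change of angular momentum for a rigid body.
    I_body: 3x3 inertia tensor in body space; R: orientation as a
    rotation matrix; w, w_dot: world-space angular velocity and
    acceleration. Uses L = I_world w with I_world = R I_body R^T,
    so dL/dt = I_world w_dot + w x (I_world w)."""
    I_world = R @ I_body @ R.T
    return I_world @ w_dot + np.cross(w, I_world @ w)

# Sanity check: for a spherical inertia tensor the gyroscopic term
# w x (I w) vanishes, and dL/dt reduces to I w_dot.
I_body = 2.0 * np.eye(3)
R = np.eye(3)
w = np.array([0.0, 1.0, 0.0])
w_dot = np.array([0.5, 0.0, 0.0])
assert np.allclose(angular_momentum_rate(I_body, R, w, w_dot), 2.0 * w_dot)
```

The joint torque at each frame then comes from balancing this dL/dt (plus the linear terms) against the forces and torques acting on each segment, as the kwon3d page describes.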