From: Peter-Pike Sloan <ppsloan@wi...> - 2005-05-19 16:22:28

This is explicitly addressed in the paper Alex sent out on another thread, I believe:

http://www.cs.duke.edu/researchers/artificial_intelligence/temp/eggert_rigid_body_transformations.pdf

It's not quite the covariance matrix (you have the outer product of the corresponding points, if I remember correctly -- it's in the paper above.)

Peter-Pike

________________________________
From: gdalgorithms-list-admin@... On Behalf Of Christian Schüler
Sent: Thursday, May 19, 2005 2:26 AM
To: gdalgorithms-list@...
Subject: RE: [Algorithms] Finding optimal transformations

There has been a thread on Flipcode long ago where a poster tried the covariance matrix approach, just to find out that it couldn't give him consistent orientations for symmetric objects. It was PI or -PI, randomly.

http://www.flipcode.com/cgi-bin/fcmsg.cgi?thread_show=11022

-----Original Message-----
From: gdalgorithms-list-admin@... On Behalf Of Peter-Pike Sloan
Sent: Thursday, May 19, 2005 2:14 AM
To: gdalgorithms-list@...
Subject: RE: [Algorithms] Finding optimal transformations

This is along the lines of the ideas presented in the papers earlier in this thread. In particular, if you compute the SVD of the covariance matrix (or its eigenvectors -- they are the same thing in this case), you kind of ignore the diagonal terms (which are the "scaling" that exists in the optimal 3x3 transform.)

Using the SVD it is easy to handle the degenerate case you mention as well...

Peter-Pike Sloan

(The covariance matrix turns out to be the p p^t that you use below. You never need to build the matrix p; you can build the covariance matrix directly, though -- it is simply the sum of the outer products of the points minus the means...)

________________________________
From: gdalgorithms-list-admin@... On Behalf Of Bill Baxter
Sent: Wednesday, May 18, 2005 4:24 PM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Finding optimal transformations

Just thought I'd throw this in the mix since no one mentioned it. If you just need the optimal 3x3 _transformation_, period, and you don't care whether it's in SO(3), it's actually quite easy.

Put all the original points in a 3xN matrix p, and all the corresponding target points in a 3xN matrix q, and subtract the centroid off both sets of points.

Then you basically want to find the 3x3 matrix T that solves:

T p = q

Except generally it's overconstrained, and p is non-square, so you can't invert it. But you can do this:

T p p^t = q p^t
T (p p^t)(p p^t)^-1 = q p^t (p p^t)^-1
T = q p^t (p p^t)^-1

In other words, just use the pseudoinverse of p, and that actually gives you the least-squares solution. The nice thing is that (p p^t) is just a 3x3 matrix, so it's easy to invert. Of course, if (p p^t) is singular, then you need a backup plan. That happens whenever all the p points are collinear or coplanar, so it's not something you can generally ignore. I'm not sure what the most appropriate backup plan is for those degenerate cases. Have to think about it some more.

bb

On 5/17/05, Bill Baxter <wbaxter@...> wrote:

Oh, OK, so it becomes a standard unconstrained nonlinear optimization problem then. It sounded like you were saying the objective itself was quadratic. I see now.

So their main idea is just to take each Newton optimization step using a local parameterization of the rotation (like R0 * incrementalRotation(param[3])) rather than doing the whole optimization with a fixed parameterization (like R(param[3])), where 'param[3]' represents your favorite 3-parameter representation of rotations. So you could see the whole thing as not being so different from optimization on SO(3) with Euler angles, except they avoid the singularities by reparameterizing locally every step and accumulating the progress made thus far into the R0 matrix. Makes sense. Not as spectacularly cool as it sounded initially, though.

Coincidentally, I took a robotics course from the second author, at just about the same time he was writing that paper, it appears. Small world.

On 5/17/05, Willem de Boer <wdeboer@...> wrote:

No, you assume the objective function can be locally accurately represented by a quadratic function (i.e., the first three terms of its Taylor series). Then you perform some sort of Newton step to find the next best approximate point.
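Bill Baxter's normal-equations derivation above (T = q p^t (p p^t)^-1 on centroid-subtracted points) translates almost line for line into code. A minimal NumPy sketch -- the function name is my own, not from the thread:

```python
import numpy as np

def best_linear_map(p, q):
    """Least-squares 3x3 T with T p ~= q, via T = q p^t (p p^t)^-1.

    p, q: 3xN arrays of corresponding points.
    """
    pc = p - p.mean(axis=1, keepdims=True)  # subtract centroids, as in the post
    qc = q - q.mean(axis=1, keepdims=True)
    ppt = pc @ pc.T                         # (p p^t) is only 3x3, cheap to invert
    return qc @ pc.T @ np.linalg.inv(ppt)
```

As the post notes, this fails when (p p^t) is singular (all points collinear or coplanar), since `np.linalg.inv` will raise or return garbage there.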
From: Christian Schüler <c.schueler@ph...> - 2005-05-19 09:25:49

There has been a thread on Flipcode long ago where a poster tried the covariance matrix approach, just to find out that it couldn't give him consistent orientations for symmetric objects. It was PI or -PI, randomly.

http://www.flipcode.com/cgi-bin/fcmsg.cgi?thread_show=11022
From: Willem de Boer <wdeboer@pl...> - 2005-05-19 07:11:32

"I'm not sure what the most appropriate backup plan is for those degenerate=20 cases" =20 In those cases where (p p^t) turns out to be singular (ie., p did not have a full rowrank to begin with), you could then again find a leastsquares=20 solution using the pseudoinverse of (p p^t) itself. This is also exactly what=20 the inverse of a SVD of (p p^t) would give you; the two of 'em can be shown=20 to be equivalent. =20  Willem H. de Boer Homepage: http://www.whdeboer.com <http://www.whdeboer.com/>; =20 =20 
From: Peter-Pike Sloan <ppsloan@wi...> - 2005-05-19 00:14:16

This is along the lines of the ideas presented in the papers earlier in this thread. In particular, if you compute the SVD of the covariance matrix (or its eigenvectors -- they are the same thing in this case), you kind of ignore the diagonal terms (which are the "scaling" that exists in the optimal 3x3 transform.)

Using the SVD it is easy to handle the degenerate case you mention as well...

Peter-Pike Sloan

(The covariance matrix turns out to be the p p^t that you use below. You never need to build the matrix p; you can build the covariance matrix directly, though -- it is simply the sum of the outer products of the points minus the means...)
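The SVD-of-the-covariance approach being discussed here is usually written down as the Kabsch/orthogonal-Procrustes procedure: build the cross-covariance of the centered point sets, take its SVD, and recompose with unit singular values (that is the "ignore the diagonal terms" step). A sketch under that reading -- the thread does not spell out this exact recipe:

```python
import numpy as np

def best_rotation(p, q):
    """Rotation R (det(R) = +1) minimizing ||R p - q|| after centering."""
    pc = p - p.mean(axis=1, keepdims=True)
    qc = q - q.mean(axis=1, keepdims=True)
    H = qc @ pc.T                       # cross-covariance: sum of outer products
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))  # flip one axis if U Vt is a reflection
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

The determinant check is what guards against the symmetric-object ambiguity mentioned earlier in the thread: without it, U Vt can come out as a reflection rather than a rotation.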
From: Alex Mohr <amohr@cs...> - 2005-05-19 00:03:08

Also related: this paper at SIGGRAPH this year uses this sort of rigid shape matching to do some nice, stable deformation stuff:

http://graphics.ethz.ch/~brunoh/s2005.html

Alex
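Bill Baxter's summary of the local-reparameterization idea -- take each step as R0 * incrementalRotation(param[3]) and fold the increment back into R0 -- can be illustrated with a small iteration. This sketch uses plain gradient descent on an axis-angle increment rather than the full Newton step of the paper, so it is a simplified stand-in for the method being discussed, not a reproduction of it:

```python
import numpy as np

def skew(w):
    """Cross-product matrix: skew(w) @ x == np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rodrigues(w):
    """Rotation matrix exp(skew(w)) for an axis-angle vector w."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def fit_rotation(p, q, iters=300, step=0.2):
    """Descend on SO(3) by reparameterizing at the identity every step."""
    R0 = np.eye(3)
    n = p.shape[1]
    for _ in range(iters):
        x = R0 @ p                # current rotated points, 3xN
        r = x - q                 # residuals
        # gradient of (0.5/n) * sum ||exp(skew(w)) x_i - q_i||^2 at w = 0
        g = np.sum(np.cross(x.T, r.T), axis=0) / n
        R0 = rodrigues(-step * g) @ R0   # accumulate progress into R0
    return R0
```

Because the increment is always parameterized around the identity, the Euler-angle-style gimbal singularities never come into play, which is exactly the point Bill makes above.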