From: Gehua Yang <yangg2@rp...>  20041118 18:02:02

First of all, page 114, i.e. Algorithm 3.7, is the Gold Standard algorithm for estimating an AFFINE transformation, not a homography. If a homography is to be estimated, refer to Algorithm 3.3 on page 98. There are two choices: Sampson error or Gold Standard error.

For the Sampson error there are only 9 parameters (or one can choose another parameterization). For the Gold Standard error there are 2n+9 variables, but there are 4n residuals:

    sum_i  d(x_i, xhat_i)^2 + d(x'_i, xhat'_i)^2

Recall that LM takes a vector of residuals before squaring; each d(.) function contributes two residuals. Another way to look at it is that each ideal point xhat brings in two parameters but provides four constraints.

Gehua

----- Original Message -----
From: Marc Anderson
To: vxlusers@...
Sent: Tuesday, November 16, 2004 9:58 PM
Subject: [Vxlusers] Premise of vnl_levenberg_marquardt hold back Gold standard?

> Hi, all vxl guys!
> The class vnl_levenberg_marquardt checks whether the number of parameters
> is less than the number of residuals before carrying out the minimization;
> otherwise it returns false. This requirement prevents the estimation of a
> homography between two images using the Gold Standard algorithm (H&Z book,
> p. 114), in which case the number of parameters is 2n+9 while the number
> of residuals is n (where n is the number of point correspondences). Would
> someone have an idea of how to resolve this issue? Thanks!
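Gehua's counting argument can be sketched as a small standalone check. The helper names below are hypothetical (this is not vnl code); only the counts themselves come from the discussion above:

```cpp
// Parameter count for the Gold Standard homography cost described above:
// 9 entries of H plus two coordinates for each of the n ideal points xhat.
int gold_standard_params(int n) { return 2 * n + 9; }

// Each correspondence contributes d(x_i, xhat_i)^2 + d(x'_i, xhat'_i)^2,
// and each d(.) expands to two residual components for LM, so 4n in total.
int gold_standard_residuals(int n) { return 4 * n; }

// vnl_levenberg_marquardt requires #residuals >= #parameters, so this cost
// becomes admissible once 4n >= 2n + 9, i.e. n >= 5 correspondences.
bool lm_accepts(int n) {
    return gold_standard_residuals(n) >= gold_standard_params(n);
}
```

So with the correct residual count of 4n (rather than n, as in the original question), the vnl check passes for any n >= 5.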
From: Amitha Perera <perera@cs...>  20041118 15:52:05

On Wed 17 Nov 2004, Xiaowei Li wrote:

> 1. All these methods, both in the RPI libraries and the Oxford
> University libraries, are based on random sampling, right?

The rpl/rrel library implements both a random-sampling search and an iteratively reweighted least squares (IRLS) search. Some objective functions, such as MUSE and RANSAC, can only be effectively minimized using random sampling, because the objective functions are complex (MUSE) or discontinuous (RANSAC). Others, like MSAC, the Beaton-Tukey weight, and least squares, can be minimized using either approach. When both options are available, random sampling has the advantage that it does not need an initial estimate; IRLS has the advantage that the minimum need not be defined by exactly a few data points.

> 2. So, all these methods differ in residual computation and cost
> functions, including RANSAC itself, right?

Conceptually, there are two phases to finding a robust minimum. First, you compute residuals; this depends on your problem. Then you run the residuals through the robust loss function and minimize that. If the residuals and scale are estimated the same way, RANSAC is RANSAC. (I don't know the code in mvl, so I can't comment further.)

> 3. If yes, delving into the source code, the final estimate we get
> is surely a value solved from the minimal fit of a certain sample,
> right?

Yes (for random sampling).

> 4. Is this estimate reliable enough? It is an algebraic solution of
> certain equations in a minimal fit. Though this fit must be made up
> of inliers, noise still exists, and the final value differs
> depending on which minimal fit is used.

There will always be an error in your estimates; they are, after all, estimates. The more samples you have, the more likely it is that random sampling will give an estimate close to the true value. Suppose you need n points for the minimal estimate. The idea is that if you have m >> n samples, then it is likely that there are n of these m that have very little noise. If you find this set, you can get an estimate very close to the true value. As m gets larger, it becomes more and more likely that there are n samples with zero error; in that case, you can find the true solution. Of course, all this becomes more complicated because you can only evaluate the quality of your solution using the samples. However, as m gets larger, it also becomes more and more likely that only the true solution will minimize the error function. If you have enough samples, and your samples meet the expected noise models, the estimate should be quite good.

> It will result in this:
> 5. Though all the data are inliers, when we estimate certain
> parameters with them many times, the final estimate will be
> different, because we use different samples to generate it.
> Is this statement right?

Yes. For a given sample set, different runs of a random-sampling search will yield slightly different results. Different runs of an iterative search will generally yield the same result. However, this repeatability does not mean the iterative search gives you the "true" solution: take another sample set, and the estimates from both approaches will differ. Remember that the data you have is not "truth". It is a set of noisy samples from the truth that you are trying to estimate. Therefore, any estimate will not equal truth. (At least, you cannot be certain of it.)

Amitha.
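The random-sampling search described above can be illustrated with a minimal standalone sketch. This is not the rrel API; it uses 2D line fitting (minimal sample size n = 2) as the estimation problem, and a plain inlier count as the RANSAC-style objective:

```cpp
#include <cmath>
#include <cstdlib>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Fit a line y = a*x + b through exactly two points (a minimal sample).
std::pair<double, double> minimal_fit(const Pt& p, const Pt& q) {
    double a = (q.y - p.y) / (q.x - p.x);
    return std::make_pair(a, p.y - a * p.x);
}

// Count inliers: points whose vertical residual is below the tolerance.
int count_inliers(const std::vector<Pt>& pts, double a, double b, double tol) {
    int n = 0;
    for (std::size_t i = 0; i < pts.size(); ++i)
        if (std::fabs(pts[i].y - (a * pts[i].x + b)) < tol) ++n;
    return n;
}

// Bare-bones random-sampling search: draw random minimal samples and keep
// the fit with the most inliers. Different runs can return slightly
// different results, exactly as discussed above.
std::pair<double, double> ransac_line(const std::vector<Pt>& pts,
                                      int iters, double tol) {
    std::pair<double, double> best(0.0, 0.0);
    int best_inliers = -1;
    for (int i = 0; i < iters; ++i) {
        std::size_t j = std::rand() % pts.size();
        std::size_t k = std::rand() % pts.size();
        if (j == k || pts[j].x == pts[k].x) continue;  // degenerate sample
        std::pair<double, double> fit = minimal_fit(pts[j], pts[k]);
        int inl = count_inliers(pts, fit.first, fit.second, tol);
        if (inl > best_inliers) { best_inliers = inl; best = fit; }
    }
    return best;
}
```

Note that the returned estimate is always the algebraic solution of one minimal fit; an IRLS-style refinement over all inliers, as rrel's iterative search performs, would use the full inlier set rather than just two points.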
From: Toon Goedeme <Toon.Goedeme@es...>  20041118 08:51:50

Hi,

I'm trying to use the KLT tracker in contrib/gel/vgel, but it keeps creating a segmentation fault. This is how I try to use it:

    vgel_kl_params* kl_params = new vgel_kl_params();
    m_klt = new vgel_kl(*kl_params);
    vgel_multi_view_data_vertex_sptr klt_matches =
        new vgel_multi_view_data<vtol_vertex_2d_sptr>();
    m_klt->match_sequence(im1, im2, klt_matches, true);

But this gives the following output:

    Converting image to grey scale... width: 641 height: 427 pixel type: byte
    Converting image to grey scale... width: 641 height: 427 pixel type: byte
    (KLT) Selecting the 100 best features from a 641 by 427 image...
      100 features found.
    (KLT) Tracking 100 features in a 641 by 427 image...
      39 features successfully tracked.
    (KLT) Attempting to replace 61 features in a 641 by 427 image...
      61 features replaced.
    Segmentation fault

Does anyone know what I am doing wrong?

Thanks,
Toon
From: Xiaowei Li <nemesis@bi...>  20041118 03:49:34

Hi vxl guys,

I have several questions about the estimation methods in VXL.

1. All these methods, both in the RPI libraries and the Oxford University libraries, are based on random sampling, right? This can be seen in the source code: in RPIL there is a random sampling class, and in oxl's mvl there is a Monte_Carlo function for random sampling.

2. So all these methods differ in residual computation and cost functions, including RANSAC itself, right?

3. If yes, delving into the source code, the final estimate we get is surely a value solved from the minimal fit of a certain sample, right? Of course this value has the "most" inliers among all the data. But:

4. Is this estimate reliable enough? It is an algebraic solution of certain equations in a minimal fit. Though this fit must be made up of inliers, noise still exists, and the final value differs depending on which minimal fit is used. It will result in this:

5. Though all the data are inliers, when we estimate certain parameters with them many times, the final estimate will be different, because we use different samples to generate it. Is this statement right?

Thanks in advance. ;)

Regards,
Xiaowei Li