From: Christoph Bersch <usenet@be...> - 2011-05-23 12:19:08

Hi,

is there a specific reason why the 'transparent' terminal option applies
only to the pngcairo and not to the pdfcairo terminal? I removed the two
respective lines in term/cairo.trm and it worked well for me:

--- gnuplot-cvs/term/cairo.trm	2011-05-23 13:48:46.000000000 +0200
+++ gnuplot/term/cairo.trm	2011-05-23 14:00:16.000000000 +0200
@@ -299,13 +299,11 @@
 	    break;
 	case CAIROTRM_TRANSPARENT:
 	    c_token++;
-	    if (!strcmp(term->name,"pngcairo"))
-		cairo_params->transparent = TRUE;
+	    cairo_params->transparent = TRUE;
 	    break;
 	case CAIROTRM_NOTRANSPARENT:
 	    c_token++;
-	    if (!strcmp(term->name,"pngcairo"))
-		cairo_params->transparent = FALSE;
+	    cairo_params->transparent = FALSE;
 	    break;
 	case CAIROTRM_CROP:
 	    c_token++;

I only tested it with

    set terminal pdfcairo transparent
    set output 'test-transparency.pdf'
    plot sin(x)

But because there is an explicit test for the pngcairo terminal, I
suppose there was a good reason for this?

Christoph
From: <plotter@pi...> - 2011-05-23 08:39:02

On 05/23/11 02:55, Daniel J Sebald wrote:
> On 05/22/2011 01:14 PM, plotter@... wrote:
>> On 05/21/11 23:04, Daniel J Sebald wrote:
>>> On 05/21/2011 03:15 AM, plotter@... wrote:
>>>> Hi,
>>>>
>>>> 'help fit' reports that the fit command uses the Levenberg-Marquardt
>>>> algo to do the fit.
>>>>
>>>> I think this raises an important question that is very often
>>>> overlooked by many users of least-squares techniques, even at maths
>>>> PhD level.
>>>>
>>>> Such techniques often only optimise the y error rather than the
>>>> perpendicular error from the line. This implicitly assumes y
>>>> uncertainty >> x uncertainty. While this condition is often
>>>> satisfied in a controlled experiment, there are many situations
>>>> where it is not applicable and gets totally overlooked.
>>>>
>>>> A common case is scatter plots, which are frequently used to seek a
>>>> relation between two quantities, each with significant errors /
>>>> uncertainties.
>>>>
>>>> In this situation the fitted line is "wrong". In fact it's the
>>>> application that is wrong, hence the wrong result. This may or may
>>>> not be apparent to the eye.
>>>>
>>>> I have seen this happen so many times (including once in a PhD
>>>> thesis report!) that I think it needs a serious health warning in
>>>> the doc.
>>>>
>>>> "Warning: using least-squares inappropriately can seriously damage
>>>> your reputation". ;)
>>>>
>>>> Firstly, could you confirm the basis on which this algo is applied
>>>> in gnuplot? Does it only optimise vertical y residuals?
>>>
>>> Often it is an assumption that the independent variables are exact
>>> measurements. Not true, typically, but if the variance is small and
>>> homoscedastic, the two can probably be lumped together. I.e., we are
>>> searching for a relationship:
>>>
>>> Y = f(X + eps1) + eps2
>>>   ~= f(X) + C eps1 + residual + eps2
>>>   ~= f(X) + (C eps1 + eps2)
>>>
>>> where hopefully the residual due to nonlinearity of the relationship
>>> is small compared to other randomness. It's up to the user's
>>> judgment and knowledge of the application to determine that.
>>>
>>> Anyway, your point is true of most software packages: details are so
>>> often lacking. That's why it would be nice to have a set of white
>>> papers to go along with the software so that people know exactly
>>> what the algorithm is, both for the benefit of the user and other
>>> developers. Most of the time it is "here's a hunk of code, use it at
>>> your own risk".
>>>
>>> Dan
>>>
>>
>> Thanks, I've read up on that algo and clearly this is only doing NLLS
>> on y residuals.
>>
>> So my suggestion is that this is made abundantly clear in the help
>> text. I'm not suggesting your "white paper" but just some comment to
>> the effect that 'fit' will not give correct results if there are
>> non-negligible errors in x values.
>>
>> It absolutely amazes me how few people realise this, even highly
>> qualified ones, so this is not some pedantic nicety.
>>
>> Most people seem to think once they've heard of doing a least squares
>> fit that's all there is to it and it's some magical formula that
>> works for all cases.
>>
>> I suggest modifying the first paragraph of help fit with something
>> like the following:
>>
>> ----------
>> The `fit` command can fit a user-supplied expression to a set of data
>> points (x,z) or (x,y,z), using an implementation of the nonlinear
>> least-squares (NLLS) Marquardt-Levenberg algorithm. Any user-defined
>> variable occurring in the expression may serve as a fit parameter,
>> but the return type of the expression must be real.
>> ----------
>>
>> new
>>
>> ----------
>> The `fit` command can fit a user-supplied expression to a set of data
>> points (x,z) or (x,y,z), using an implementation of the nonlinear
>> least-squares (NLLS) Marquardt-Levenberg algorithm. This algorithm
>> optimises y residuals only and carries the implicit assumption that
>> errors/uncertainties in x are negligible. If that is not the case the
>> fit may succeed but will give wrong results.
>> ----------
>
> I'm OK with the change, but for a few things.
>
> First, you stated 'optimizes y' and 'uncertainties in x', but please
> check that this precisely describes the algorithm. The beginning of
> the paragraph lists (x, z) or (x, y, z). The first expression has no
> 'y', so what is optimized in that case? The second expression has
> (x, y, z), so is it only the 'x' in that case that is assumed exact?
> Or is it both 'x' and 'y'? Perhaps that sentence should be, "This
> algorithm optimises z residuals only and carries the implicit
> assumption that errors/uncertainties in x (and y) are negligible."

Very good point. I was trying to keep it brief (and perhaps dumb it
down), but it should correctly refer to dependent and independent
variables. I was concerned that might mean the warning is lost on many
users. Perhaps "independent variable (e.g. x axis)" or similar would be
better.

> Second, I would hold off using the statement "wrong results" and maybe
> use "a poor fit". Saying the result is wrong means the algorithm is
> broken, but it just provides numbers. The user is using the wrong
> tool. Also, "wrong" means a bad fit in this context, and one can get a
> bad fit and misinterpret results even if uncertainty in x is small.

No, I think wrong result is exactly the case. A wrong result does not
mean the algo is "broken"; it means the wrong tool was used and the
result obtained was wrong. The result is not "poor", it is wrong.
Watering down the language does not shift the blame. Maybe some wording
like "wrong technique" could help underline the cause of the problem,
but I think "wrong" definitely needs to be in there. There's no hedging
around the fact: it can be considerably wrong.

> Third, if there is an alternative approach for the case when x
> uncertainty can't be ignored, keep the discussion going and maybe we
> can add such a feature to the list of items we'd like to add in the
> future.
>
> Dan

Yes, I would love to propose that. Some kind of total least squares may
be an option. I don't know enough about that to make a concrete
proposal. I suspect it may be too variable to include as a turn-key
option like fit, but I would really like that option if it is possible.

Thanks for your thoughtful comments.
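For the straight-line case, the total least squares (orthogonal
regression) option mentioned above actually has a well-known closed
form. The following is a minimal sketch, pure Python with synthetic
data rather than gnuplot code (`ols_slope` and `tls_slope` are
illustrative names, not gnuplot internals), contrasting the y-residual
fit with the orthogonal-distance fit when x carries noise comparable
to y:

```python
import math
import random

def ols_slope(xs, ys):
    """Ordinary least squares: minimises vertical (y) residuals only,
    i.e. it implicitly assumes the x values are exact."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def tls_slope(xs, ys):
    """Total least squares (orthogonal regression): minimises the
    perpendicular distance to the line, so x and y errors of
    comparable size are treated symmetrically."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Closed-form minimiser of the summed squared orthogonal distances.
    return (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

# Synthetic data: true relation y = 2x, with unit-variance Gaussian
# noise on BOTH axes, so the "x is exact" assumption is badly violated.
random.seed(42)
pts = [(t + random.gauss(0, 1), 2 * t + random.gauss(0, 1))
       for t in (i / 2000 * 4 - 2 for i in range(2000))]  # t in [-2, 2)
xs, ys = zip(*pts)
b_ols = ols_slope(xs, ys)  # biased toward zero by the noise in x
b_tls = tls_slope(xs, ys)  # close to the true slope of 2
print(b_ols, b_tls)
```

Note the sketch assumes equal error variances on the two axes, which is
exactly the case where the orthogonal fit is the consistent one; with
unequal variances a Deming-style weighting of the two residuals would
be needed instead.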
From: Daniel J Sebald <daniel.sebald@ie...> - 2011-05-23 00:55:19

On 05/22/2011 01:14 PM, plotter@... wrote:
> On 05/21/11 23:04, Daniel J Sebald wrote:
>> On 05/21/2011 03:15 AM, plotter@... wrote:
>>> Hi,
>>>
>>> 'help fit' reports that the fit command uses the Levenberg-Marquardt
>>> algo to do the fit.
>>>
>>> I think this raises an important question that is very often
>>> overlooked by many users of least-squares techniques, even at maths
>>> PhD level.
>>>
>>> Such techniques often only optimise the y error rather than the
>>> perpendicular error from the line. This implicitly assumes y
>>> uncertainty >> x uncertainty. While this condition is often
>>> satisfied in a controlled experiment, there are many situations
>>> where it is not applicable and gets totally overlooked.
>>>
>>> A common case is scatter plots, which are frequently used to seek a
>>> relation between two quantities, each with significant errors /
>>> uncertainties.
>>>
>>> In this situation the fitted line is "wrong". In fact it's the
>>> application that is wrong, hence the wrong result. This may or may
>>> not be apparent to the eye.
>>>
>>> I have seen this happen so many times (including once in a PhD
>>> thesis report!) that I think it needs a serious health warning in
>>> the doc.
>>>
>>> "Warning: using least-squares inappropriately can seriously damage
>>> your reputation". ;)
>>>
>>> Firstly, could you confirm the basis on which this algo is applied
>>> in gnuplot? Does it only optimise vertical y residuals?
>>
>> Often it is an assumption that the independent variables are exact
>> measurements. Not true, typically, but if the variance is small and
>> homoscedastic, the two can probably be lumped together. I.e., we are
>> searching for a relationship:
>>
>> Y = f(X + eps1) + eps2
>>   ~= f(X) + C eps1 + residual + eps2
>>   ~= f(X) + (C eps1 + eps2)
>>
>> where hopefully the residual due to nonlinearity of the relationship
>> is small compared to other randomness. It's up to the user's judgment
>> and knowledge of the application to determine that.
>>
>> Anyway, your point is true of most software packages: details are so
>> often lacking. That's why it would be nice to have a set of white
>> papers to go along with the software so that people know exactly what
>> the algorithm is, both for the benefit of the user and other
>> developers. Most of the time it is "here's a hunk of code, use it at
>> your own risk".
>>
>> Dan
>>
>
> Thanks, I've read up on that algo and clearly this is only doing NLLS
> on y residuals.
>
> So my suggestion is that this is made abundantly clear in the help
> text. I'm not suggesting your "white paper" but just some comment to
> the effect that 'fit' will not give correct results if there are
> non-negligible errors in x values.
>
> It absolutely amazes me how few people realise this, even highly
> qualified ones, so this is not some pedantic nicety.
>
> Most people seem to think once they've heard of doing a least squares
> fit that's all there is to it and it's some magical formula that works
> for all cases.
>
> I suggest modifying the first paragraph of help fit with something
> like the following:
>
> ----------
> The `fit` command can fit a user-supplied expression to a set of data
> points (x,z) or (x,y,z), using an implementation of the nonlinear
> least-squares (NLLS) Marquardt-Levenberg algorithm. Any user-defined
> variable occurring in the expression may serve as a fit parameter, but
> the return type of the expression must be real.
> ----------
>
> new
>
> ----------
> The `fit` command can fit a user-supplied expression to a set of data
> points (x,z) or (x,y,z), using an implementation of the nonlinear
> least-squares (NLLS) Marquardt-Levenberg algorithm. This algorithm
> optimises y residuals only and carries the implicit assumption that
> errors/uncertainties in x are negligible. If that is not the case the
> fit may succeed but will give wrong results.
> ----------

I'm OK with the change, but for a few things.

First, you stated 'optimizes y' and 'uncertainties in x', but please
check that this precisely describes the algorithm. The beginning of the
paragraph lists (x, z) or (x, y, z). The first expression has no 'y',
so what is optimized in that case? The second expression has (x, y, z),
so is it only the 'x' in that case that is assumed exact? Or is it both
'x' and 'y'? Perhaps that sentence should be, "This algorithm optimises
z residuals only and carries the implicit assumption that
errors/uncertainties in x (and y) are negligible."

Second, I would hold off using the statement "wrong results" and maybe
use "a poor fit". Saying the result is wrong means the algorithm is
broken, but it just provides numbers. The user is using the wrong tool.
Also, "wrong" means a bad fit in this context, and one can get a bad
fit and misinterpret results even if uncertainty in x is small.

Third, if there is an alternative approach for the case when x
uncertainty can't be ignored, keep the discussion going and maybe we
can add such a feature to the list of items we'd like to add in the
future.

Dan
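The "lumped together" approximation in the Y = f(X + eps1) + eps2 model
above can be quantified for a linear relationship: if y depends on the
true x but is regressed against x_obs = x_true + eps1, the y-residual
least-squares slope converges not to the true slope b but to
b * Var(x_true) / (Var(x_true) + Var(eps1)), the classical
regression-dilution (attenuation) factor. A small self-contained
simulation, pure Python with synthetic data and illustrative names (not
part of gnuplot), showing the shrinkage:

```python
import random

def ols_slope(xs, ys):
    """Least-squares slope, minimising y residuals only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

random.seed(0)
b_true = 3.0
var_x, var_eps1 = 1.0, 1.0  # equal variances: a badly violated "x exact" assumption
samples = []
for _ in range(20000):
    x_true = random.gauss(0, var_x ** 0.5)
    x_obs = x_true + random.gauss(0, var_eps1 ** 0.5)  # eps1: error in x
    y_obs = b_true * x_true + random.gauss(0, 0.5)     # eps2: error in y
    samples.append((x_obs, y_obs))
xs, ys = zip(*samples)

shrink = var_x / (var_x + var_eps1)  # theoretical attenuation factor, here 0.5
b_hat = ols_slope(xs, ys)            # converges to b_true * shrink, not b_true
print(b_hat, b_true * shrink)
```

The fit "succeeds" in the sense that it returns a number with a small
formal error estimate, which is exactly why the thread argues the
documentation should warn about the assumption rather than rely on the
user noticing a visibly poor fit.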