From: Chia-liang K. <cl...@cl...> - 2010-07-27 17:28:08
|
Hi,

On 27 July 2010 23:16, Raphael Hertzog <ra...@ou...> wrote:
> Hi,
>
> On Tuesday 27 July 2010, Bja...@no... wrote:
>> Like mentioned previously, there are several choices when it comes to
>> hosting the code, here are a few:
>
> We also need to consider the website and the mailing lists at least.
>
> If we switch VCS, I have a clear preference for Git so even if I'm not
> active in the development, I'm happy to make the choice for the team if
> you can't pick up a common preferred VCS.

FYI, i actually have the repository already mirrored on github:
http://github.com/clkao/finance-geniustrader

this contains the svn mirror (trunk and cpan branch), as well as all my patches maintained in separate branches. we can start from here: either clone the current mainline branches, or simply use it as the published repository. for the latter, i'm willing to give commit bits to people and perhaps help merge patches if my time allows.

Cheers,
CLK
|
From: Raphael H. <ra...@ou...> - 2010-07-27 17:17:09
|
Hi, Le mardi 27 juillet 2010, Bja...@no... a écrit : > Like mentioned previously, there are several choices when it comes to > hosting the code, here are a few: We also need to consider the website and the mailing lists at least. If we switch VCS, I have a clear preference for Git so even if I'm not active in the development, I'm happy to make the choice for the team if you can't pick up a common preferred VCS. > I tend to prefer code.google.com's clean and simplistic layout, and I > think it has all the functionality we need. I've set up a project on > code.google.com (http://code.google.com/p/geniustrader/) (just for > testing for now, I'll take it down in case you don't like it!) - Project > members needs to have a google account. Let me know your google acc, in > case you want to take it for a spin. But given my git preference and the ML requirement (which should be hosted on googlegroups.com then), I'm not convinced it's the best choice. Sourceforge is very generic and probably the best default choice if there's no consensus on the forge to pick. > Android is using git + Gerrit for code review > (https://review.source.android.com), pretty nice combo! I'm not sure if > there are any free hosted Gerrit solutions available... Gerrit is heavy to setup AFAIK. We will also have to decide who is admin on the new forge. Does someone want to assume the leadership or shall we simply make all committers co-admins? Cheers, -- Raphaël Hertzog ◈ Writer/Consultant ◈ Debian Developer ◈ [Flattr=20693] Master Your Debian/Ubuntu System ▶ http://RaphaelHertzog.com (English) ▶ http://RaphaelHertzog.fr (Français) |
|
From: <Bja...@no...> - 2010-07-27 14:23:50
|
Hi,

I would like to volunteer as well. I work with SCM (software configuration management) and have, amongst other things, been involved in setting up the SCM solution for the Symbian Foundation (tech lead on Mercurial), so I have quite a bit of experience.

I have been thinking about suggesting switching to a DVCS (distributed version control system) for quite a while. The distributed model allows for a much more flexible workflow, e.g. better branch support. The DVCS tools, such as Git and Mercurial, can be used in much the same way as SVN, so the learning curve doesn't necessarily have to be that high. I can convert the current repo and provide all the instructions needed, in case we decide to go in this direction. I actually did convert the repo to git some time ago, because I wanted to be able to have my experimental work under version control, but never really got around to any serious work with it.

Like mentioned previously, there are several choices when it comes to hosting the code; here are a few:

  http://SourceForge.net  - supports various VCS: SVN, Git, Mercurial, and a few others
  http://Code.google.com  - supported VCS: SVN or Mercurial
  http://GitHub.com       - VCS: Git
  http://bitbucket.org/   - VCS: Mercurial

All of the above can also host a bug tracker, documentation, etc.

I tend to prefer code.google.com's clean and simplistic layout, and I think it has all the functionality we need. I've set up a project on code.google.com (http://code.google.com/p/geniustrader/) (just for testing for now, I'll take it down in case you don't like it!) - project members need to have a Google account. Let me know your Google account in case you want to take it for a spin.

Android is using git + Gerrit for code review (https://review.source.android.com), pretty nice combo! I'm not sure if there are any free hosted Gerrit solutions available...

We could of course also continue with Trac for bug tracking etc.; any strong feelings?

Ras, there is plenty of information regarding various aspects of running open source projects, just do a Google search ;-) I'll be happy to provide you with info and support when it comes to running open source projects!

Cheers,
Bjarne

-----Original Message-----
From: ext Robert A. Schmied [mailto:ra...@ac...]
Sent: 26. juli 2010 22:29
To: Raphael Hertzog
Cc: de...@ge...
Subject: [GT] Re: Hosting the project

Raphael Hertzog wrote:
> (Please cc me I don't read the list very often)
>
> Hello,
>
> I will move all the services running on my server to a new (virtualized)
> server but I don't want to continue to host everything related to
> GeniusTrader on my new host.
>
> So we have to find a new solution. I already mentionned the idea of using
> a forge (sourceforge.net or alioth.debian.org or pick your favorite) but
> no decision was taken.
>
> Now it's time to decide and to do the move. I can help but you must decide
> what you prefer.
>
> Cheers,

aloha raphael

i'm inclined to volunteer, but like to know more about the nature of the obligation etc. is there a good primer on 'how to run an opensource project' that you would recommend? in the meantime i will research the topic on my own and see if i can get find a comfort level ...

ras
|
From: Robert A. S. <ra...@ac...> - 2010-07-26 22:29:55
|
Raphael Hertzog wrote:
> (Please cc me I don't read the list very often)
>
> Hello,
>
> I will move all the services running on my server to a new (virtualized)
> server but I don't want to continue to host everything related to
> GeniusTrader on my new host.
>
> So we have to find a new solution. I already mentionned the idea of using
> a forge (sourceforge.net or alioth.debian.org or pick your favorite) but
> no decision was taken.
>
> Now it's time to decide and to do the move. I can help but you must decide
> what you prefer.
>
> Cheers,

aloha raphael

i'm inclined to volunteer, but would like to know more about the nature of the obligation etc. is there a good primer on 'how to run an opensource project' that you would recommend? in the meantime i will research the topic on my own and see if i can find a comfort level ...

ras
|
From: Raphael H. <ra...@ou...> - 2010-07-26 17:04:57
|
Hi,

On Monday 26 July 2010, Thomas Weigert wrote:
> sourceforge.net is fine. I can also host this on one of my servers (mail
> list, svn, web host) if that is preferable.

I'd rather avoid using a personal server IMO.

So who's going to create the project on sourceforge and move everything? I can provide archives of everything and a list of emails to move the mailing lists. I can also update the DNS for the domain and create email aliases to the new list once they are set up.

Cheers,
--
Raphaël Hertzog ◈ Writer/Consultant ◈ Debian Developer ◈ [Flattr=20693]
Master Your Debian/Ubuntu System
▶ http://RaphaelHertzog.com (English)
▶ http://RaphaelHertzog.fr (Français)
|
From: Thomas W. <we...@ms...> - 2010-07-26 15:43:21
|
Raphael,

sourceforge.net is fine. I can also host this on one of my servers (mail list, svn, web host) if that is preferable.

Best regards,
Th.

On 07/26/2010 03:36 PM, Raphael Hertzog wrote:
> (Please cc me I don't read the list very often)
>
> Hello,
>
> I will move all the services running on my server to a new (virtualized)
> server but I don't want to continue to host everything related to
> GeniusTrader on my new host.
>
> So we have to find a new solution. I already mentionned the idea of using
> a forge (sourceforge.net or alioth.debian.org or pick your favorite) but
> no decision was taken.
>
> Now it's time to decide and to do the move. I can help but you must decide
> what you prefer.
>
> Cheers,
|
From: Raphael H. <ra...@ge...> - 2010-07-26 15:36:38
|
(Please cc me, I don't read the list very often)

Hello,

I will move all the services running on my server to a new (virtualized) server, but I don't want to continue to host everything related to GeniusTrader on my new host.

So we have to find a new solution. I already mentioned the idea of using a forge (sourceforge.net or alioth.debian.org or pick your favorite), but no decision was taken.

Now it's time to decide and to do the move. I can help, but you must decide what you prefer.

Cheers,
--
Raphaël Hertzog ◈ Writer/Consultant ◈ Debian Developer ◈ [Flattr=20693]
Master Your Debian/Ubuntu System
▶ http://RaphaelHertzog.com (English)
▶ http://RaphaelHertzog.fr (Français)
|
From: will d. <wil...@gm...> - 2010-07-25 10:46:23
|
Thanks for this Nick. Much appreciated.

On Sat, Jul 24, 2010 at 5:18 AM, Nick Fantes Huege <nf...@gm...> wrote:
> When GT/Conf.pm reads the configuration file, it assumes that
> statements that end in a backslash continue on the next line. It also
> allows for inline comments behind a pound sign.
>
> Currently multi-line statements have to have the \ symbol as their
> very last character, otherwise they won't concatenate the next line.
> So, for example if you have a couple of spaces after the ending \, it
> won't work.
> Similar problem with the inline comments. If you have a # Comment
> right behind the \ sign, it won't work properly.
>
> Attached is a fixed version of Conf.pm, with this and other minor
> modifications. Here is also a Pastebin http://pastebin.com/XZvzWnjY
> This module is completely backwards compatible with the old one.
>
> The changes are:
> 1. Fixed multi-line statements with inline comments
> 2. Improved _get_home_path function (for Windows)
> 3. Added sub vars which returns a hash_ref to all config options.
> Sometimes it's easier to use $conf->{'db::module'} instead of
> GT::Conf::get('DB::module'), because the former can be interpolated
> in strings.
> 4. Minor code reorders and improvements.
>
> Note that I have removed all comments and pod from the module, because
> for some reason they mess up my debugger.
> If you find this patch to be useful, please feel free to add all the
> commends and pod back.
>
> Best regards,
> Nick
|
From: Nick F. H. <nf...@gm...> - 2010-07-24 23:11:06
|
ras,

Attached is the corrected version of Conf.pm. It performs better than the original one based on your test configuration file. I also put all comments and pod back.

Regards,
Nick
|
From: Robert A. S. <ra...@ac...> - 2010-07-24 06:43:10
|
Nick Fantes Huege wrote:
> When GT/Conf.pm reads the configuration file, it assumes that
> statements that end in a backslash continue on the next line. It also
> allows for inline comments behind a pound sign.
>
> Currently multi-line statements have to have the \ symbol as their
> very last character, otherwise they won't concatenate the next line.
> So, for example if you have a couple of spaces after the ending \, it
> won't work.
> Similar problem with the inline comments. If you have a # Comment
> right behind the \ sign, it won't work properly.
>
> Attached is a fixed version of Conf.pm, with this and other minor
> modifications. Here is also a Pastebin http://pastebin.com/XZvzWnjY
> This module is completely backwards compatible with the old one.
>
> The changes are:
> 1. Fixed multi-line statements with inline comments

aloha nick

i like the improved handling of continued lines with embedded comments and the handling of inadvertent whitespace following the '\' char. but there are a couple of issues to note with the filtering:

using attached new_multiline and the input file ml-options

  % new_multiline > & /tmp/new_processed_options.out

the new filter fails to terminate lines continued but 'terminated' by a trailing blank line (1 or more) (MULTILINE_TEST line 1 and MULTILINE_TEST line 1d). the blank line reset is a fallback continuation line reset i use it frequently especially when 'tweaking' on new aliases ...

the filter is generating an extraneous key named '#' with a null value (''key:#: val::'')

as an alternate test approach review perl script new_conf_dump.pl: with the new GT::Conf::load sub appended onto ../GT/Conf.pm and renamed GT::Conf::new_load (or a similar adjustment) one can do a direct comparison of differences between these load versions using the GT::Conf::conf_dump() method. this is probably a better way to verify the functionality of a GT::Conf::load replacement.

> 2. Improved _get_home_path function (for Windows)
> 3. Added sub vars which returns a hash_ref to all config options.
> Sometimes it's easier to use $conf->{'db::module'} instead of
> GT::Conf::get('DB::module'), because the former can be interpolated
> in strings.

is there an example of item 3 you want to point out?

> 4. Minor code reorders and improvements.
>
> Note that I have removed all comments and pod from the module, because
> for some reason they mess up my debugger.
> If you find this patch to be useful, please feel free to add all the
> commends and pod back.

this removal is rather annoying -- makes for a lot more work ... isn't there a way you can work around this?

>
> Best regards,
> Nick
> <snip>

ras
|
From: Thomas W. <we...@ms...> - 2010-07-24 00:58:56
|
Please find attached...

Th.

On 07/23/2010 07:57 PM, Nick Fantes Huege wrote:
> Thomas,
>
> You're right! EMA is calculated exactly as you describe. I have no
> problem with that, because it seems to be the right way to do it. The
> user should to be aware of the fact that --start, --end and --nb-item
> limit the range of the calculations.
>
> I didn't receive your modified EMA (EMA2.pm?). Did you attach it?
>
> Best regards,
> Nick
|
From: Nick F. H. <nf...@gm...> - 2010-07-23 21:19:23
|
When GT/Conf.pm reads the configuration file, it assumes that statements that end in a backslash continue on the next line. It also allows for inline comments behind a pound sign.

Currently multi-line statements have to have the \ symbol as their very last character, otherwise they won't concatenate the next line. So, for example, if you have a couple of spaces after the ending \, it won't work. Similar problem with the inline comments. If you have a # Comment right behind the \ sign, it won't work properly.

Attached is a fixed version of Conf.pm, with this and other minor modifications. Here is also a Pastebin: http://pastebin.com/XZvzWnjY
This module is completely backwards compatible with the old one.

The changes are:
1. Fixed multi-line statements with inline comments
2. Improved _get_home_path function (for Windows)
3. Added sub vars which returns a hash_ref to all config options. Sometimes it's easier to use $conf->{'db::module'} instead of GT::Conf::get('DB::module'), because the former can be interpolated in strings.
4. Minor code reorders and improvements.

Note that I have removed all comments and pod from the module, because for some reason they mess up my debugger. If you find this patch to be useful, please feel free to add all the comments and pod back.

Best regards,
Nick
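For illustration, here is a minimal, self-contained sketch of the continuation and inline-comment handling described above. It is not the attached Conf.pm patch (see the pastebin for that); the key/value split, the lowercased keys and the 'Some::Option' key are assumptions made for the example only.

#!/usr/bin/perl
# Sketch only -- not the attached Conf.pm patch. Joins backslash-continued
# lines even when trailing spaces follow the backslash, strips inline "#"
# comments, and flushes a pending continuation when a blank line is reached.
use strict;
use warnings;

sub parse_config_lines {
    my @lines   = @_;
    my %conf;
    my $pending = '';
    for my $line (@lines) {
        chomp $line;
        $line =~ s/(?<!\\)#.*$//;        # remove an unescaped inline comment
        if ($line =~ s/\\\s*$//) {       # trailing '\', possibly followed by spaces
            $pending .= $line;           # statement continues on the next line
            next;
        }
        $line    = $pending . $line;     # a blank line here still flushes $pending
        $pending = '';
        next if $line =~ /^\s*$/;        # nothing left to store
        my ($key, $value) = split /\s+/, $line, 2;
        $value = '' unless defined $value;
        $value =~ s/\s+$//;
        $conf{ lc $key } = $value;       # keys stored lowercased in this sketch
    }
    return \%conf;
}

# Trailing spaces after '\' and an inline comment no longer break the join.
my $conf = parse_config_lines(
    "DB::module genericdbi \\   # value continues below\n",
    "   \n",
    "Some::Option 42   # 'Some::Option' is a made-up key for this example\n",
);
printf "%s = %s\n", $_, $conf->{$_} for sort keys %$conf;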
|
From: Nick F. H. <nf...@gm...> - 2010-07-23 19:58:33
|
Thomas,

You're right! EMA is calculated exactly as you describe. I have no problem with that, because it seems to be the right way to do it. The user should be aware of the fact that --start, --end and --nb-item limit the range of the calculations.

I didn't receive your modified EMA (EMA2.pm?). Did you attach it?

Best regards,
Nick
|
From: Thomas W. <we...@ms...> - 2010-07-23 13:22:38
|
You might want to give an alternative value if you are using a very long
EMA, which will then require a very long SMA for the first day. In order
not to lose a lot of data due to the dependencies, one could use, say,
the closing value of the first day as an approximation to EMA (if that
day is long ago there will not be a big difference).
You would use this as follows:
./display_indicator.pl --nb-item=1 I:EMA SPY 20 {I:Prices CLOSE} {I:Prices CLOSE}
Calculating indicator EMA[20, {I:Prices CLOSE}, {I:Prices CLOSE}] ...
EMA[20, {I:Prices CLOSE}, {I:Prices CLOSE}][2010-07-16] = 106.6600
compare this with
./display_indicator.pl --nb-item=1 I:EMA SPY 20
Calculating indicator EMA[20, {I:Prices CLOSE}] ...
EMA[20, {I:Prices CLOSE}][2010-07-16] = 107.2465
For 1 day, of course, there will be a big difference.
Th.
On 07/23/2010 12:19 AM, Thomas Weigert wrote:
>
> Now say you want to get a 200 day EMA, and you decided that N is 10. To
> get the EMA for the first day you would have to have already 200 days
> before that, and in the case of EMA theoretically infinitely many, as
> the EMA does not really stop. The way GT handles this is that on the
> first day it uses the 200 day SMA (or whatever you give it as the third
> argument, see the documentation).
|
|
From: Thomas W. <we...@ms...> - 2010-07-23 13:16:04
|
As discussed, EMA approximates the starting value by using SMA (or any
other measure given as third argument).
Note that the current definition of EMA takes the first point of the
interval as the starting value, meaning that the first point in the
interval is calculated by SMA.
One could argue that for an N-day EMA, the starting value should be the
first point of the interval - N, so that the first point in the interval
would be a fully calculated EMA.
I am attaching a modification of EMA that does this.
Note that
display_indicator.pl --nb-item=1 I:EMA2 <Market> M
is the same as the value of
display_indicator.pl --nb-item=M+1 I:EMA2 <Market> M
by definition.
Maybe you will find this interpretation of EMA more intuitive?
Th.
On 07/23/2010 12:19 AM, Thomas Weigert wrote:
>
> As you can see, it settles down eventually. But the key to understand
> here is that EMA will give you a different value depending on the number
> of data items you examine. That is why I don't like to use EMA, unless I
> look at at least 5 years of data. At a minimum, you should use at least
> as much data as the period of the EMA.
>
>
|
|
From: Thomas W. <we...@ms...> - 2010-07-23 00:21:16
|
Guys, please, you are just speculating rather than studying the code. Here is how EMA works:

You decide how many days you want to calculate the EMA over. You can determine this by using --nb-item, --start, --end, --full. It does not matter. In any case you end up with a number of days, N, that you want to evaluate the EMA over.

Now say you want to get a 200 day EMA, and you decided that N is 10. To get the EMA for the first day you would have to have already 200 days before that, and in the case of EMA theoretically infinitely many, as the EMA does not really stop. The way GT handles this is that on the first day it uses the 200 day SMA (or whatever you give it as the third argument, see the documentation). The next day, it uses the EMA formula

  alpha  = 2 / ( N + 1 )
  EMA[n] = EMA[n-1] + alpha * ( INPUT - EMA[n-1] )

and so on for every day up to N. If you set --nb-item=1, this is obviously not very precise, as it gives you the same as the SMA.

As far as I know, every other tool handles the EMA the same way. The difference comes in when you compare to, say, stockcharts.com. There you have no control over the number of days you evaluate the EMA over, maybe they use 100 days or 1000 days or whatever. But the value of EMA depends critically on how many days you evaluate over, due to its definition. I am confident that EMA in GT is defined correctly, and will give you the same results as other tools, assuming you test for the same data.

Just to illustrate, if you compute the EMA(200) of SPY, you will get, for different sized intervals:

  1    = 110.7929
  200  = 108.0074
  400  = 109.0458
  600  = 108.9586
  800  = 108.9507
  1000 = 108.9509

As you can see, it settles down eventually. But the key to understand here is that EMA will give you a different value depending on the number of data items you examine. That is why I don't like to use EMA, unless I look at at least 5 years of data. At a minimum, you should use at least as much data as the period of the EMA.

Please study the definition of EMA and the code, GT::Indicators::EMA, and this will all make sense. Again, if you don't like to use SMA for the initial value of EMA, you can give it any other indicator as an argument. And, one more thing, --nb-item is no different from --start or --end. All you do is select how many data items you are interested in.

Best regards,
Th.

On 07/22/2010 09:58 PM, Nick Fantes Huege wrote:
> I did some more testing with --nb-item, but I only got more confused.
> I think we should not use it at all. It seems, as RAS said, it is only
> useful in graphic.pl, but it can be specifically added there.
> I would even go further as to suggest its complete removal from
> Tools::find_calculator, to avoid any confusion and wrong results, but
> I guess that may be a bit too harsh.
>
> Regards,
> Nick
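For readers who prefer code to prose, here is a small standalone Perl sketch of the recurrence above: an SMA seed followed by EMA[n] = EMA[n-1] + alpha * (close[n] - EMA[n-1]). It is not GT::Indicators::EMA itself, and the seeding convention (an SMA over the first 'period' closes of whatever window is passed in) is only an approximation of GT's behaviour; the point is simply that the final value depends on how many bars you evaluate over.

#!/usr/bin/perl
# Standalone illustration of the recurrence described above; this is NOT the
# code of GT::Indicators::EMA. The series is seeded with an SMA over the first
# $period closes, then EMA[n] = EMA[n-1] + alpha * (close[n] - EMA[n-1]).
use strict;
use warnings;
use List::Util qw(sum);

sub ema_series {
    my ($period, @closes) = @_;
    return () if @closes < $period;
    my $alpha = 2 / ($period + 1);
    my @ema   = ( sum(@closes[0 .. $period - 1]) / $period );   # SMA seed
    for my $i ($period .. $#closes) {
        push @ema, $ema[-1] + $alpha * ($closes[$i] - $ema[-1]);
    }
    return @ema;
}

# The last value changes with the size of the window fed in, which is the
# same effect as the EMA(200)-of-SPY table above (made-up prices here).
my @closes = map { 100 + 5 * sin($_ / 10) } 1 .. 1000;
for my $n (250, 500, 1000) {
    my @ema = ema_series(200, @closes[ -$n .. -1 ]);
    printf "last %4d bars -> EMA(200) = %.4f\n", $n, $ema[-1];
}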
|
From: Nick F. H. <nf...@gm...> - 2010-07-22 22:00:02
|
I did some more testing with --nb-item, but I only got more confused. I think we should not use it at all. It seems, as RAS said, it is only useful in graphic.pl, but it can be specifically added there. I would even go so far as to suggest its complete removal from Tools::find_calculator, to avoid any confusion and wrong results, but I guess that may be a bit too harsh.

Regards,
Nick
|
From: Nick F. H. <nf...@gm...> - 2010-07-22 21:19:17
|
I've been keeping an eye on GT for a while, but just recently got
really involved in it. The reason I started analyzing the Perl code
was because I couldn't find any real documentation on how to use the
system. So I started reading the code.
> i will evaluate your changes, however, you will find similar code in most
> of the other 'new()' methods for all data objects implemented by gt.
> GT::Analyzers
> GT::CloseStrategy
> ...
I haven't gotten to those yet, but it seems like they are all using
99% for the same code, at least for the constructor of the object. The
Object Oriented approach to this would be to write the common code in
a parent class once and then inherit it where needed. Should be an
easy fix ...
> so if we really want to make this change (i'm not against it, i have gotten
> more than one headache staring at those 'new()' method code blocks than i
> care to remember), but to do the right thing we really should fix all of
> them at the same time.
What these code blocks do is:
1. Check for the passed arguments
2. Find if the number of arguments is sufficient and if not add from a
predefined list of default arguments.
For example:
./display_indicator.pl I:SMA YHOO 40
silently adds a second argument of {I:Prices CLOSE}
This is neat, because one can explicitly add the second argument, for example:
./display_indicator.pl I:SMA YHOO 21 {I:RSI 14}
This will give you the 21 day moving average of the RSI(14) for Yahoo.
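For illustration, here is a plain-Perl sketch of that default-filling behaviour, kept independent of GT's actual classes (the defaults list below is invented for the example): arguments the caller omits are simply appended from the predefined defaults.

#!/usr/bin/perl
# Plain-Perl sketch of the behaviour described above: arguments the caller
# omits are filled in from a per-module list of defaults. The defaults used
# here are invented for the example; GT's real modules define their own.
use strict;
use warnings;

sub fill_default_args {
    my ($args, $defaults) = @_;              # both array references
    $args ||= [];
    if (scalar(@$args) <= $#{$defaults}) {
        # append only the trailing defaults that were not supplied
        push @$args, @{$defaults}[ scalar(@$args) .. $#{$defaults} ];
    }
    return $args;
}

my $defaults = [ 50, '{I:Prices CLOSE}' ];   # hypothetical DEFAULT_ARGS list
my $a = fill_default_args([40], $defaults);
print "@$a\n";                               # 40 {I:Prices CLOSE}
my $b = fill_default_args([21, '{I:RSI 14}'], $defaults);
print "@$b\n";                               # 21 {I:RSI 14}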
> how much evaluation testing have you conducted (and how) to show this change
> is equivalent?
I tested with all display_*.pl files and I get the same results as the
previous code, which doesn't really surprise me because anyway you
look at it both codes do the exact same thing. The one I propose is
just a little shorter and more compatible.
> as far as versions go -- i will try to always refer to the gt trunk version
> not the cpan or exp branches.
I saw GT on cpan, but I didn't think it was a good idea. At least I
don't see the reason for it.
I'm using git to keep track of my modifications, and I also work on
the GT trunk version.
Best regards,
Nick
|
|
From: Robert A. S. <ra...@ac...> - 2010-07-22 19:39:20
|
Nick Fantes Huege wrote:
> The constructor function 'new' for GT::Indicators attempts to
> reference the variable $args in a slightly illegal way. To avoid an
> interpreter error the author has added 'no strict "refs"' to the code.
>
> To improve compatibility and to keep "strict" turned on at all times,
> I have modified the sub as follows:
> << big snip >>

nick

i will evaluate your changes, however, you will find similar code in most of the other 'new()' methods for all data objects implemented by gt:

  GT::Analyzers
  GT::CloseStrategy
  GT::Indicators
  GT::OrderFactory
  GT::Signals
  GT::Systems
  GT::TradeFilters
  GT::DB::CSV
  maybe even GT::Indicators::Generic::ByName
  maybe even GT::Indicators::Generic::Container

so if we really want to make this change (i'm not against it, i have gotten more than one headache staring at those 'new()' method code blocks than i care to remember), but to do the right thing we really should fix all of them at the same time.

how much evaluation testing have you conducted (and how) to show this change is equivalent?

as far as versions go -- i will try to always refer to the gt trunk version not the cpan or exp branches.

ras
|
From: Robert A. S. <ra...@ac...> - 2010-07-22 19:24:16
|
Nick Fantes Huege wrote:
> It seems that the root of our misunderstanding was that you read the
> pod and I read the Perl code. Since this is the development mailing
> list, I just wanted to point out something I thought was a logical
> error in the Perl code.
>
> There is already --max-loaded-items, which clearly states (and I have
> verified its claims by analyzing the Perl code) that it controls the
> number of periods (back from the last period) that are loaded for a
> given market from the data base. This option is effective only for
> certain data base modules and ignored otherwise.
>
> Why would anyone want to use both --max-loaded-items and --nb-item?
> The first one already intelligently limits the data pulled from the
> database, and the second one further creates a subset from the already
> limited data. This results in displaying wrong data in the example
> when calculating EMA.
>
> Just to clarify something:
>
>
>> ./display_indicator.pl --nb-item=1 I:EMA YHOO 200
>>
>>is wrong, because the --nb-item=1 will eliminate most input data,
>>but the command
>>
>> ./display_indicator.pl --last-record I:EMA YHOO 200
>>
>>is correct, because it correctly uses the default analysis range,
>>but only outputs the last record.
>
>
> Both commands are absolutely equal because --last-record is a
> synonymous for --nb-item=1
>
> Best regards,
> Nick
>
nick
i have to agree with you that --nb-item when used with display_indicator.pl
is again (still?) showing discrepant results,
% ../svn_repo/Scripts/display_indicator.pl I:EMA YHOO 200 | tail -1
EMA[200, {I:Prices CLOSE}][2010-07-21] = 15.6130
% ../svn_repo/Scripts/display_indicator.pl --last I:EMA YHOO 200 | tail -1
EMA[200, {I:Prices CLOSE}][2010-07-21] = 16.0171
% ../svn_repo/Scripts/display_indicator.pl --nb-item=1 I:EMA YHOO 200 | tail -1
EMA[200, {I:Prices CLOSE}][2010-07-21] = 16.0171
so i fall back on my previous statement:
> historically option --nb-items has had a tortured history of purpose, use and
> implementation.
but whether this is a fault of --nb-item=1 --last or I:EMA or something else
is still to be determined ...
assuming it's --nb-item before 'fixing' it is undertaken, maybe a discussion
about what its purpose is (or should be) would be a better place to start:
i still think it's (and supposed to be) a way to anchor one end of the input
data range without resorting to a date string. this is important and significant
because (at least for me) it is easier to determine i want 120 bars in the chart
ending on 1jul08 than for me to figure out what that starting date would be.
the use of Date::Manip has reduced the need for --nb-items, but it still might
be useful in this manner.
however, i get the sense that based on your reading of the code you believe
it should (maybe it actually does) dictate the amount of data to report. i
maintain that wasn't the intent, but maybe what has actually happened
(i don't think so in most instances, but maybe in one or some) so that's where
we should probably start off the discussion?
i see no point in an option that limits the amount of output data from apps
like display_*.pl when a simple pipeline would perform just as well.
i don't see any point for such an option for scan.pl or any of the backtests.
for graphic.pl it might be useful but i think there are larger issues with
graphic.pl than just limiting the chart size.
i do see a real need for being able to anchor one end of the processing date
range with a simple integer value that represents the number of bars in the
analysis period.
as far as --max-loaded-items this mechanism is outside of the handling of any
associated data dependencies, so if you have a 200 day sma but set
--max-loaded-items to less than the full start-end data range plus 200 days
(might be 201) the first sma values will be wrong.
the need for --max-loaded-items is to limit (within reasonable limits) the
data read from a stocks database that may go as far back as the early 1920s.
this option is not supported, simply ignored with file based data bases.
when correctly implemented using both --max-loaded-items and --nb-items plus
one of --start or --end can effectively limit the amount of unnecessary data
being read from and stored by gt, and set the analysis range using one date
string and a number of bars to consider in the analysis. note that --nb-items
is handled within the context of the specified gt data dependencies, so any
additional data needed to satisfy an indicator that needs 200 days prior to
--start is *supposed* to be available in the 'analysis range'.
also note carefully thomas weigert's comments about ema being an indicator that
is hard to start ... we believe the gt implementation is reasonable and technically
correct, except maybe for a corner case or two when dealing with atypical conditions.
aloha
ras
|
|
From: Nick F. H. <nf...@gm...> - 2010-07-22 18:56:36
|
Thomas,
Thank you for your reply!
I am beginning to think that we are looking at different versions of GT.
> --max-loaded-items is an optimization to avoid loading records from the
> data base that are not needed for analysis. It has no impact on analysis
> other than these records will not be available. Note that you have to be
> careful in using it, as the analysis might need more data than one
> thinks, due to dependencies of indicators, for example. It has no effect
> for text data.
It seems that for the dependencies of indicators GT automatically
changes max-loaded-items to -1, ignoring the user defined value. This
way it grabs all of the available data. Note that I'm testing with
DB::module=genericdbi, using SQLite.
> When you use these parameters in a script, they apply to the base
> market. They do NOT dictate how much other data is considered. For
> example, a 200 day indicator will load much more data, even if you say
> you just want the last 10 days of analysis.
I disagree. The following line:
./display_indicator.pl --nb-item=1 I:EMA YHOO 200
considers in its calculations only one data item. I tested, retested
and finally traced it with the Perl debugger. It bases all
calculations on the last day price of YHOO.
I still think that at the very least this should print a warning.
> Regarding EMA, you have to be careful. Technically, EMA needs data from
> the first day of the market, as it goes back indefinitely. Most
> implementations of EMA limit the data and just use SMA for the first
> data point that they are looking at. You can control this in gt by
> giving a third parameter to EMA.
Can you give me an example of using the third parameter?
>> Both commands are absolutely equal because --last-record is a
>> synonymous for --nb-item=1
>
> This is correct. --last-record is just a shortcut to get the last data
> item only. It does NOT mean that we only compute the EMA based on 1 day.
It DOES mean that we compute EMA based on only 1 day.
Here is what --last-record does:
if ($last_record) {
    $full    = 0;
    $start   = '';
    $end     = '';
    $nb_item = 1;
}
Best regards,
Nick
|
|
From: Thomas W. <we...@ms...> - 2010-07-22 18:05:32
|
Nick,

--max-loaded-items is an optimization to avoid loading records from the data base that are not needed for analysis. It has no impact on analysis other than these records will not be available. Note that you have to be careful in using it, as the analysis might need more data than one thinks, due to dependencies of indicators, for example. It has no effect for text data.

After the data is loaded (see Tools::find_calculator), the analysis range is determined. This is done based on --nb-item, --start, --end, and --full. The pod, I believe, captures what the code says.

When you use these parameters in a script, they apply to the base market. They do NOT dictate how much other data is considered. For example, a 200 day indicator will load much more data, even if you say you just want the last 10 days of analysis.

On 07/22/2010 04:01 PM, Nick Fantes Huege wrote:
> Why would anyone want to use both --max-loaded-items and --nb-item?
> The first one already intelligently limits the data pulled from the
> database, and the second one further creates a subset from the already
> limited data. This results in displaying wrong data in the example
> when calculating EMA.

These two variables are totally different. The first is an optimization for data loading. The second controls what you are actually doing. It does not "create a subset from the already limited data".

Regarding EMA, you have to be careful. Technically, EMA needs data from the first day of the market, as it goes back indefinitely. Most implementations of EMA limit the data and just use SMA for the first data point that they are looking at. You can control this in gt by giving a third parameter to EMA.

> Just to clarify something:
>
>> ./display_indicator.pl --nb-item=1 I:EMA YHOO 200
>>
>> is wrong, because the --nb-item=1 will eliminate most input data,
>> but the command
>>
>> ./display_indicator.pl --last-record I:EMA YHOO 200
>>
>> is correct, because it correctly uses the default analysis range,
>> but only outputs the last record.
>
> Both commands are absolutely equal because --last-record is a
> synonymous for --nb-item=1

This is correct. --last-record is just a shortcut to get the last data item only. It does NOT mean that we only compute the EMA based on 1 day.

Th.
|
From: Nick F. H. <nf...@gm...> - 2010-07-22 17:08:17
|
The code got altered by the mailing system. See this pastebin instead: http://pastebin.com/H2arJxFU

Nick
|
From: Nick F. H. <nf...@gm...> - 2010-07-22 16:56:34
|
The constructor function 'new' for GT::Indicators attempts to
reference the variable $args in a slightly illegal way. To avoid an
interpreter error the author has added 'no strict "refs"' to the code.
To improve compatibility and to keep "strict" turned on at all times,
I have modified the sub as follows:
===== OLD VERSION =====

sub new {
    my ($type, $args, $key, $func) = @_;
    my $class = ref($type) || $type;
    no strict "refs";

    my $self = {};
    if (defined($args)) {
        if ( $#{$args} < $#{"$class\::DEFAULT_ARGS"} ) {
            for (my $n=($#{$args}+1); $n<=$#{"$class\::DEFAULT_ARGS"}; $n++) {
                push @{$args}, ${"$class\::DEFAULT_ARGS"}[$n];
            }
        }
        $self->{'args'} = GT::ArgsTree->new(@{$args});
    } elsif (defined (@{"$class\::DEFAULT_ARGS"})) {
        $self->{'args'} = GT::ArgsTree->new(@{"$class\::DEFAULT_ARGS"});
    } else {
        $self->{'args'} = GT::ArgsTree->new(); # no args
    }

    if (defined($func)) {
        die "We tried to pass a 'func' parameter to an indicator, please convert the module...";
    }
    $self->{'func'} = sub { die "Please convert this module to NOT use \$self->{'func'} ..."; };

    return manage_object(\@{"$class\::NAMES"}, $self, $class, $self->{'args'}, $key);
}

===== END OF OLD VERSION =====

===== NEW VERSION =====

sub new {
    my ( $type, $args, $key, $func ) = @_;
    my $class = ref($type) || $type;
    my $self = {}; $args ||= [];

    my @DEFAULTS; eval("\@DEFAULTS = \@$class\::DEFAULT_ARGS");
    splice( @DEFAULTS, 0, scalar(@$args) );
    push @$args, @DEFAULTS;
    $self->{'args'} = GT::ArgsTree->new( @$args );

    if ( defined($func) ) {
        die "We tried to pass a 'func' parameter to an indicator, please convert the module...";
    }
    $self->{'func'} = sub {
        die "Please convert this module to NOT use \$self->{'func'} ..."
    };

    my @NAMES; eval("\@NAMES = \@$class\::NAMES");
    return &manage_object( \@NAMES, $self, $class, $self->{'args'}, $key );
}

===== END OF NEW VERSION =====
Both versions of the sub work equally well. The new one is just a
little tighter and more compatible.
Regards,
Nick
|
|
From: Nick F. H. <nf...@gm...> - 2010-07-22 16:03:09
|
It seems that the root of our misunderstanding was that you read the pod and I read the Perl code. Since this is the development mailing list, I just wanted to point out something I thought was a logical error in the Perl code.

There is already --max-loaded-items, which clearly states (and I have verified its claims by analyzing the Perl code) that it controls the number of periods (back from the last period) that are loaded for a given market from the data base. This option is effective only for certain data base modules and ignored otherwise.

Why would anyone want to use both --max-loaded-items and --nb-item? The first one already intelligently limits the data pulled from the database, and the second one further creates a subset from the already limited data. This results in displaying wrong data in the example when calculating EMA.

Just to clarify something:

> ./display_indicator.pl --nb-item=1 I:EMA YHOO 200
>
> is wrong, because the --nb-item=1 will eliminate most input data,
> but the command
>
> ./display_indicator.pl --last-record I:EMA YHOO 200
>
> is correct, because it correctly uses the default analysis range,
> but only outputs the last record.

Both commands are absolutely equal because --last-record is a synonym for --nb-item=1.

Best regards,
Nick