|
From: Perl T. <per...@gm...> - 2009-07-18 17:27:19
|
R P Herrold wrote:
> On Sat, 18 Jul 2009, Greg Jessup wrote:
>> I posted yesterday about starting a forum, did not hear from anyone.
>
> Forums are a 'pull medium' where you have to remember to go look, and
> incur the network delays (and related scaling loads if popular) in
> retrieving content. Mailing lists are 'push' and, assuming you read on
> a fast network relative to your mailstore, instantaneous.

Yeah, that's exactly the reason I traditionally preferred mailing lists. But as I said, things are slowly changing over time. Lots of people today don't even know what a mailing list is, but that doesn't stop them from subscribing to a lot of forums.

Forums are better at handling pictures and other additional content, or you could say they just plain look better than your typical mailing list archive. Also, where mailing lists are essentially flat (a flow of mails through time), forums can easily be subdivided into topics, with relevant discussions kept together. That makes forums better at handling high traffic, where mailing lists become a bottleneck.

Finally, one of the reasons I'll spend some time picking the right forum software is that I would like seamless RSS support -- another nice technology which brings forums closer to the 'push medium' that mailing lists already are. It surely won't be instantaneous, but it should be good enough. I just hope RSS, at least, has been adopted among internet pioneers; has it? ;)

-- 
PerlTrader
|
From: R P H. <he...@ow...> - 2009-07-18 16:58:35
|
On Sat, 18 Jul 2009, Greg Jessup wrote:
> I posted yesterday about starting a forum, did not hear from anyone.

Forums are a 'pull medium' where you have to remember to go look, and incur the network delays (and related scaling loads if popular) in retrieving content. Mailing lists are 'push' and, assuming you read on a fast network relative to your mailstore, instantaneous.

It is not likely that I would move to such. Thus I remained silent on what I consider a less good approach than the present one.

-- 
Russ herrold
|
From: Perl T. <per...@gm...> - 2009-07-18 15:33:55
|
Greg Jessup wrote:
> PerlTrader,
>
> I'd be happy to help with the forum idea. I'm inclined to just leverage
> google code or the google groups if possible. Less maintenance, but I'd
> be happy to set up on my server as well. Email me directly [firstname
> lastname all one word, at gmail] and we can chat offline about setting
> this up. I really think it would help to expand GT and would be fine
> putting the time into it.

Actually, I don't need any help at this time. ;) 'Cause I already bought a domain, bought a dedicated server, and installed and configured it. Now I only need to decide on a forum software, configure it, and we're good to go. :) But thanks for lending a hand, of course. I might need help later, to decide on miscellaneous little things like forum structure and so on...

> Would be nice to be able to share system files etc there a bit like
> Tradery.com allows you to do.

Absolutely, that is one of the core reasons I went with this little project: to separate core development issues from the nice ideas people might have. For example, Emily's system presented a few weeks ago on this list looks very interesting and deserves a subtopic all to itself, where people who are willing to play with it would hang out. Until someone has a better trading system and attracts more people to their topic, right? ;)

-- 
PerlTrader
|
From: Perl T. <per...@gm...> - 2009-07-18 15:22:57
|
Robert A. Schmied wrote:
> Perl Trader wrote:
>> Greg Jessup wrote:
>>> Any tip to get the majority of NYSE and NASDAQ into beancounter to
>>> start off? I can get a list of all the symbols from
>>> http://www.eoddata.com but would love to take the lazy way out and
>>> get my db populated quickly.
>>
>> I'm interested in something like this too. Nick, any possibility that
>> you dump, compress and upload that database somewhere where we could
>> get it?
>>
>> Unless actually it is too much data for a typical DSL upload speed
>> (even compressed), which I'm afraid it will be.
>>
>> I haven't used beancounter in production yet, although I installed the
>> debian package and ran it once, just for fun. But it immediately broke
>> when I tried to pull data for just a dozen US stocks. Quite an ugly
>> error, something to do with the database. Left me a little disappointed.
>
> we might be able to assist with problem resolution, but will need a lot
> more to work with ... what db engine, what command blew up, with what
> exact results, etc

That's because I didn't ask for help. :) I got disappointed and haven't started beancounter since. But here is what happens, FWIW. When I run setup_beancounter (default args) I finish with this:

  Creating beancounter database
  ** Running: createdb beancounter
  Creating beancounter database tables
  Verifying database access from Perl
  Config file /home/perltrader/.beancounterrc not found, ignored.
  Filling beancounter database tables with DJIA stocks
  ** Running: beancounter --dbsystem PostgreSQL --dbname beancounter addindex DJIA AA AXP T BA CAT C KO DIS DD EK XOM GE GM HPQ HD HON INTC IBM IP JNJ MCD MRK MSFT MMM JPM MO PG T UTX WMT
  Config file /home/perltrader/.beancounterrc not found, ignored.
  Filling beancounter (sample) portfolio
  ** Running: beancounter --dbsystem PostgreSQL --dbname beancounter addportfolio IBM:50:USD XOM:75:USD C:100:USD GOOG:25:USD
  Config file /home/perltrader/.beancounterrc not found, ignored.
  Filling beancounter with stock info and most recent prices for DJIA stocks
  ** Running: beancounter --dbsystem PostgreSQL --dbname beancounter addstock AA AXP T BA CAT C KO DIS DD EK XOM GE GM HPQ HD HON INTC IBM IP JNJ MCD MRK MSFT MMM JPM MO PG T UTX WMT ^GSPC
  Config file /home/perltrader/.beancounterrc not found, ignored.
  Ignoring symbol GM with unparseable date
  ERROR: null value in column "exchange" violates not-null constraint at /usr/share/perl5/Finance/BeanCounter.pm line 1404.

That's where the setup procedure stops. beancounter doesn't like some data it gets from Yahoo, and it chokes on it -- in the most simple scenario (default setup), if I may add. That doesn't look very robust to me, so I immediately stopped looking at beancounter as my preferred data storage choice. But I'll give it a second chance; it's just not high on my list of priorities.

> people
>
> unless there's a really really really good reason for another database
> (i think that there is, but i've yet to see a good description and
> requirements outline of it)
> i'm concerned that all this 'personalized database' effort may be
> expended on a private branch, that then leads to issues with adding
> updates and fixes on the main branch.

Hey, hey, it was never my intention to push it for inclusion. My private data and my private thin layer to connect that data to GT is and will stay private. The beauty is that writing slightly different data connectors is almost trivial. And once the data gets prepared in the simple format that GT expects, there's nothing else to change in GT; it just works.

We can't thank the Perl DBI framework architects and developers enough for such simplicity and beauty. You take postgres.pm, change one connect string (e.g. Pg to Oracle) and you're now connecting to a completely different database. If you need to, you tweak the db query in the code and adapt it to your data schema. All those changes are trivial and cover 90% of situations.

Of course, what is not trivial is writing data fetchers for stock data, but that doesn't seem to be part of GT anyway (at the moment). For example, if the finance data you need is not in Yahoo, what do you do? beancounter is obviously not of much use, because it's specialized. The only way to solve that is to write your own data fetcher script, which is also not *that* hard, considering how good Perl is at data parsing & munging.

That's me thinking out loud, but I must admit I won't spend much (if any) time on the data input layer. Mostly because it's obviously quite easy to write your own. The second reason being that I'm not very good at architecting and coding things that need to be well standardized but at the same time cover such a wide area. I like to optimize for my own case. :)

> pt -- i've not looked at postgres.pm, but i'm gonna guess it doesn't
> provide the front end that fetches the eod, nor does it have the
> features needed to

Of course not; it expects a pre-populated database in a specific format to work.

> manage the saved data. beancounter provides all that, plus the database
> itself, it (bc) just needs some addons. at least that was the way i saw
> it (still do) when i started out with bc and gt. (i probably had a bc
> db before i discovered gt, but that was before time)

Yeah, and what if the data is not on Yahoo? What if I want to use Bloomberg, for example? What if I have a proprietary trading platform that is able to export the data for me (even intraday); will beancounter be able to use it in a consistent manner? Those are hypothetical questions, I already know the answer. Perl is the answer. :)

> it's not good that there isn't a forum for beancounter users to post
> candidate improvements

I'll make sure there's a specific beancounter topic in the forum that I'm going to create, to fill that gap. Stay tuned.

-- 
PerlTrader
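[Editor's note: the DBI portability point above can be sketched in a few lines. This is not GT's actual postgres.pm, which is not reproduced in this thread; it is a hedged illustration of the "only the connect string is engine-specific" idea, using an in-memory SQLite handle (DBD::SQLite assumed installed) and made-up table/column names so it runs anywhere.]

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# The driver name in the connect string is the only engine-specific part;
# for PostgreSQL it would read "dbi:Pg:dbname=beancounter", for Oracle
# "dbi:Oracle:...".  SQLite in-memory is used here so the sketch is
# self-contained.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                       { RaiseError => 1, AutoCommit => 1 });

# A minimal flat quote table; the schema is illustrative only, not GT's.
$dbh->do(q{
    CREATE TABLE prices (
        symbol TEXT, date TEXT,
        open REAL, high REAL, low REAL, close REAL, volume INTEGER
    )
});
$dbh->do(q{INSERT INTO prices
           VALUES ('IBM','2009-07-17',100.1,101.5,99.8,101.0,500000)});

# The fetch itself is engine-agnostic: the same SQL and the same DBI
# calls work unchanged after swapping the connect string above.
my ($close) = $dbh->selectrow_array(
    q{SELECT close FROM prices WHERE symbol = ? AND date = ?},
    undef, 'IBM', '2009-07-17');
print "IBM close: $close\n";
```

Swapping backends then really is a one-line change plus, at most, schema-specific tweaks to the queries, which matches the 90%-of-situations claim.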
|
From: Weng K. L. <wen...@gm...> - 2009-07-18 14:58:48
|
I think that's a great idea. It'll also help keep track of open issues/bugs and feature to-do lists as well.

As an aside, I wonder if anyone here is using scan.pl, which, among other scripts, uses IPC. I've had to manually comment out the IPC calls since I'm running my scripts on Win32 (sacrilege! I know... I don't have a linux machine at hand). I think it would be better to implement what IPC does using the more platform-independent threads module. Just a thought. I'll work on that if I have the time.

Regards,
Weng Khong

On Sat, Jul 18, 2009 at 1:48 PM, Greg Jessup <gr...@gr...> wrote:
> PerlTrader,
>
> I'd be happy to help with the forum idea. I'm inclined to just leverage
> google code or the google groups if possible. Less maintenance, but I'd
> be happy to set up on my server as well. Email me directly [firstname
> lastname all one word, at gmail] and we can chat offline about setting
> this up. I really think it would help to expand GT and would be fine
> putting the time into it.
>
> Would be nice to be able to share system files etc there a bit like
> Tradery.com allows you to do.
>
> -Greg
>
> On Sat, Jul 18, 2009 at 8:37 AM, Perl Trader <per...@gm...> wrote:
>> Greg Jessup wrote:
>>> Robert, I like the ideas here.
>>>
>>> I posted yesterday about starting a forum, did not hear from anyone.
>>> If we had something like this, it would allow people to put their
>>> ideas into action and post changes. For example, once I write this
>>> script, I could post it for you guys to grab, modify and improve.
>>> Then you can repost it.
>>> I think a forum is much better and could combine both beancounter and
>>> genius trader. Forums are much more searchable than this email thread.
>>> Although I can go to google and use type.
>>>
>>> Now as far as storing all that extra data and extending beancounter.
>>> Seems like some great ideas. In particular the earnings info and div
>>> dates. Although, I agree, seems pointless to store divs daily. I am
>>> new to beancounter just this week, but once I get fully up to speed
>>> I'd love to contribute to extending it.
>>
>> I like your idea Greg, sorry that I haven't responded. Although I said
>> my opinion to ras, in a private email. Forums are slightly more suited
>> to today's environment, would look better and offer some additional
>> features that a mailing list can't provide. There was a time long ago
>> when I used exclusively mailing lists, and did not participate in any
>> forum, but things are changing, and I accept forums now, too... :)
>>
>> Of course, it would be easy to classify topics on such a forum, such
>> as geniustrader, beancounter, algorithmic/rule-based trading in
>> general, perl in general, any other topic people are interested in.
>>
>> I think I can promise to put up such a forum in the near future, and
>> then I'll post an announcement here. It was about time that GT gets a
>> little more popularity, and putting another site that points to home
>> (and another way of communication) is one way to accomplish that.
>>
>> --
>> PerlTrader
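[Editor's note: the IPC-to-threads replacement proposed above could look roughly like the sketch below. scan.pl's real IPC usage is not shown in this thread, so the worker body and symbol list here are made-up stand-ins; it only demonstrates the core threads plus Thread::Queue pattern, assuming a threads-enabled perl build.]

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# Farm work items (here: ticker symbols) out to worker threads through a
# shared queue instead of forked processes -- portable to Win32 perls.
my $queue = Thread::Queue->new(qw(IBM MSFT XOM GE));
$queue->end();    # no more items will be enqueued

sub worker {
    my @done;
    # dequeue() returns undef once the queue is ended and drained
    while (defined(my $symbol = $queue->dequeue())) {
        # a real scan.pl-style worker would run the scan for $symbol here
        push @done, $symbol;
    }
    return @done;
}

# two workers drain the queue concurrently; join() collects their results
my @workers   = map { threads->create(\&worker) } 1 .. 2;
my @processed = map { $_->join() } @workers;
print "processed: @processed\n";
```

Which symbols land on which worker is nondeterministic, but every queued symbol is processed exactly once, which is the property the IPC-based version presumably relies on.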
|
From: Greg J. <gr...@gr...> - 2009-07-18 14:49:21
|
PerlTrader,

I'd be happy to help with the forum idea. I'm inclined to just leverage google code or the google groups if possible. Less maintenance, but I'd be happy to set up on my server as well. Email me directly [firstname lastname all one word, at gmail] and we can chat offline about setting this up. I really think it would help to expand GT and would be fine putting the time into it.

Would be nice to be able to share system files etc there a bit like Tradery.com allows you to do.

-Greg

On Sat, Jul 18, 2009 at 8:37 AM, Perl Trader <per...@gm...> wrote:
> Greg Jessup wrote:
>> Robert, I like the ideas here.
>>
>> I posted yesterday about starting a forum, did not hear from anyone.
>> If we had something like this, it would allow people to put their
>> ideas into action and post changes. For example, once I write this
>> script, I could post it for you guys to grab, modify and improve. Then
>> you can repost it.
>> I think a forum is much better and could combine both beancounter and
>> genius trader. Forums are much more searchable than this email thread.
>> Although I can go to google and use type.
>>
>> Now as far as storing all that extra data and extending beancounter.
>> Seems like some great ideas. In particular the earnings info and div
>> dates. Although, I agree, seems pointless to store divs daily. I am
>> new to beancounter just this week, but once I get fully up to speed
>> I'd love to contribute to extending it.
>
> I like your idea Greg, sorry that I haven't responded. Although I said
> my opinion to ras, in a private email. Forums are slightly more suited
> to today's environment, would look better and offer some additional
> features that a mailing list can't provide. There was a time long ago
> when I used exclusively mailing lists, and did not participate in any
> forum, but things are changing, and I accept forums now, too... :)
>
> Of course, it would be easy to classify topics on such a forum, such as
> geniustrader, beancounter, algorithmic/rule-based trading in general,
> perl in general, any other topic people are interested in.
>
> I think I can promise to put up such a forum in the near future, and
> then I'll post an announcement here. It was about time that GT gets a
> little more popularity, and putting another site that points to home
> (and another way of communication) is one way to accomplish that.
>
> --
> PerlTrader
|
From: Perl T. <per...@gm...> - 2009-07-18 14:38:57
|
Greg Jessup wrote:
> Robert, I like the ideas here.
>
> I posted yesterday about starting a forum, did not hear from anyone. If
> we had something like this, it would allow people to put their ideas
> into action and post changes. For example, once I write this script, I
> could post it for you guys to grab, modify and improve. Then you can
> repost it.
> I think a forum is much better and could combine both beancounter and
> genius trader. Forums are much more searchable than this email thread.
> Although I can go to google and use type.
>
> Now as far as storing all that extra data and extending beancounter.
> Seems like some great ideas. In particular the earnings info and div
> dates. Although, I agree, seems pointless to store divs daily. I am new
> to beancounter just this week, but once I get fully up to speed I'd
> love to contribute to extending it.

I like your idea Greg, sorry that I haven't responded, although I gave my opinion to ras in a private email. Forums are slightly more suited to today's environment, would look better, and offer some additional features that a mailing list can't provide. There was a time long ago when I used exclusively mailing lists and did not participate in any forum, but things are changing, and I accept forums now, too... :)

Of course, it would be easy to classify topics on such a forum, such as geniustrader, beancounter, algorithmic/rule-based trading in general, perl in general, or any other topic people are interested in.

I think I can promise to put up such a forum in the near future, and then I'll post an announcement here. It was about time that GT got a little more popularity, and putting up another site that points home (and offers another way of communicating) is one way to accomplish that.

-- 
PerlTrader
|
From: Perl T. <per...@gm...> - 2009-07-18 14:26:00
|
Nick Kuechler wrote:
> I forgot to mention my beancounter DB is mysql backed. If you're using
> psql this will likely be a blocker for you. :P

Not at all. The only thing I need is to lay my hands on the data in any form and/or database (all right, I can grok most of them, not all, but mysql is easy). Conversions of all sorts can and will be done.

-- 
PerlTrader
|
From: Perl T. <per...@gm...> - 2009-07-18 14:22:27
|
Nick Kuechler wrote:
> PT,
>
> I can link you to my db tomorrow. I work at a datacenter and have
> plenty of bandwidth available.

Thanks Nick, that would be great!

-- 
PerlTrader
|
From: Greg J. <gr...@gr...> - 2009-07-18 12:53:58
|
Robert, I like the ideas here.

Let me start with my original request of populating the database with all symbols. My approach was to just grab a list of all NYSE and NASDAQ symbols from http://eoddata.com, as this is the only place I have found where I can grab a csv file containing every symbol. If there were an easy way for people to do this, beancounter would become even easier to start using and maintain. Once all symbols are in, we run "beancounter update" from cron and life is simple and easy.

I posted yesterday about starting a forum, did not hear from anyone. If we had something like this, it would allow people to put their ideas into action and post changes. For example, once I write this script, I could post it for you guys to grab, modify and improve. Then you can repost it. I think a forum is much better and could combine both beancounter and genius trader. Forums are much more searchable than this email thread. Although I can go to google and use type.

Now as far as storing all that extra data and extending beancounter: seems like some great ideas. In particular the earnings info and div dates. Although, I agree, it seems pointless to store divs daily. I am new to beancounter just this week, but once I get fully up to speed I'd love to contribute to extending it.

-greg

On Fri, Jul 17, 2009 at 9:10 PM, Robert A. Schmied <ra...@ac...> wrote:
> Perl Trader wrote:
>> Greg Jessup wrote:
>>> Any tip to get the majority of NYSE and NASDAQ into beancounter to
>>> start off? I can get a list of all the symbols from
>>> http://www.eoddata.com but would love to take the lazy way out and
>>> get my db populated quickly.
>>
>> I'm interested in something like this too. Nick, any possibility that
>> you dump, compress and upload that database somewhere where we could
>> get it?
>>
>> Unless actually it is too much data for a typical DSL upload speed
>> (even compressed), which I'm afraid it will be.
>>
>> I haven't used beancounter in production yet, although I installed the
>> debian package and ran it once, just for fun. But it immediately broke
>> when I tried to pull data for just a dozen US stocks. Quite an ugly
>> error, something to do with the database. Left me a little disappointed.
>
> we might be able to assist with problem resolution, but will need a lot
> more to work with ... what db engine, what command blew up, with what
> exact results, etc
>
>> I'll probably finish with my own database and appropriate module for
>> GT to access it. It's all too easy to accomplish that by modifying the
>> existing postgres.pm. I'd just like to make a headstart, just like
>> Greg, if it's possible, and get the most data at the start. Otherwise
>> I'll have to blast at Yahoo and burn gigabytes as ras says. :)
>
> people
>
> unless there's a really really really good reason for another database
> (i think that there is, but i've yet to see a good description and
> requirements outline of it)
> i'm concerned that all this 'personalized database' effort may be
> expended on a private branch, that then leads to issues with adding
> updates and fixes on the main branch.
>
> pt -- i've not looked at postgres.pm, but i'm gonna guess it doesn't
> provide the front end that fetches the eod, nor does it have the
> features needed to manage the saved data. beancounter provides all
> that, plus the database itself, it (bc) just needs some addons. at
> least that was the way i saw it (still do) when i started out with bc
> and gt. (i probably had a bc db before i discovered gt, but that was
> before time)
>
> the stock beancounter collects a bunch of data and throws much of it
> away. at the very least i'd like to have it (or maybe a super set of
> it) kept (it doesn't have to be 'beancounter'; any generalized
> application or bunch of them working together would be fine) and then
> have gt module(s) developed that allow for access of any tuple (column)
> in the database. while we're at it it might also be reasonable to plan
> a way for a single value tuple rather than a column.
>
> it's not good that there isn't a forum for beancounter users to post
> candidate improvements, etc, but there isn't, but that doesn't stop one
> from building onto the basic beancounter database. the stock
> beancounter application doesn't even need to be aware of the underlying
> db changes provided they don't interfere -- here are some of the hacks
> i've done ...
>
> working from memory, stock beancounter (via Finance::YahooQuote)
> collects
>
>   "snl1d1t1c1p2va2bapomwerr1dyj1x"
>
> (tangent: bc: Day's Range 'm' should be replaced with 'g' low and 'h'
> high; similarly, 52-week Range 'w' should be replaced with 'j' low and
> 'k' high, because i've been getting more and more bad data from the
> combined codes, so much so that i've added extra handling around the
> split when the data is first parsed ... but the change isn't easy to
> isolate so i've left it as is and work around bad data. missing data
> today is automagically backpop'ed tomorrow in my bc hack sanity
> checking, and in gt (via the sql fetch) i take steps to filter price
> data as best i can so it's as consistent as i can make it given o, h,
> l, c and change values on a given day. end tangent)
>
> but only saves
>
>   symbol, name, exchange, capitalisation, low_52weeks, high_52weeks,
>   earnings, dividend, p_e_ratio, avg_volume
>
> in stockinfo, and
>
>   symbol, date, previous_close, day_open, day_low, day_high, day_close,
>   day_change, bid, ask, volume
>
> in stockprices.
>
> i've added another table that keeps, on a daily basis,
>
>   symbol, ex_div_date, div_pmt_date, annual_div, div_pct_yld
>
> (yea, i duplicate symbol instead of doing it right with a key to
> stockinfo)
>
> the data for ex_div_date, div_pmt_date is unreliable and usually in the
> past.
>
> i have a table stockstats:
>
>   Column             | Type                  | Modifiers
>   -------------------+-----------------------+--------------------------------------
>   symbol             | character varying(12) | not null default ''::character varying
>   indices            | character varying(64) | not null default ''::character varying
>   sector             | character varying(64) | not null default ''::character varying
>   industry           | character varying(64) | not null default ''::character varying
>   fiscal_year_end    | date                  |
>   most_recent_qtr    | date                  |
>   shares_outstanding | bigint                |
>   float              | bigint                |
>   market_cap         | bigint                |
>   bookvalue          | bigint                |
>   revenue            | bigint                |
>   ebitda             | bigint                |
>   totalcash          | bigint                |
>   totaldebt          | bigint                |
>   eps                | real                  |
>   rps                | real                  |
>
> but have yet to integrate that into my beancounter.
>
> the end result of all this is to be able to provide additional data to
> either gt or other ta tool(s) in a "standard" way (in my case via sql
> engine)
>
> things i'd like to 'keep' but don't know how to 1) get for free
> 2) store via sql database
>
> are quarterly type data (eps, earnings, revenue, sales etc) that does
> not necessarily happen on any specific day, but can happen on any
> day ...
>
> this stuff can be scraped off yahoo (and other) web pages, but that is
> risky, and error prone.
>
> some of the data might have to be derived on demand (computed) but how
> to manage and maintain database integrity with things that have only
> four (or two or twelve) entries per year?
>
> then there are the merger and acquisition headaches and other corporate
> events like dividends (regular and special) and stock splits (both
> forward and reverse) -- headache producers all
>
> where is all this going -- i dunno, just thought i'd throw it out there
> and see if anyone wants to comment ... in summary
>
> i) how to store stuff on timeframes that span many days (without
>    storing the same (or very similar) data every day
> ii) how to determine when one of these many-day periods has elapsed and
>    the data needs to be collected and stored
> iii) how to collect data not readily provided via Finance::YahooQuote
>    or equivalent and not pay much for it ...
> iv) how to track stock splits and maintain consistent data
> v) how to feed this new data to gt without breaking it (in other words
>    make no changes that require current users to upgrade both gt and
>    their database (without a very very very compelling benefit)
>
> ras
|
From: Robert A. S. <ra...@ac...> - 2009-07-18 07:59:20
|
Nick Kuechler wrote:
> I forgot to mention my beancounter DB is mysql backed. If you're using
> psql this will likely be a blocker for you. :P

nick and pt --

regardless of the backend on either side, you should be able to dump nick's database in ascii sql format that can be imported by pt's db engine.

  mysqldump(1) -- humm, i don't see a mysql restore -- called something else i guess
  pg_dump(1)  pg_dumpall(1)  pg_restore(1)

my first guess would be to dump with complete insert statements. the data will be in ascii sql form and postgres will simply recreate the database by populating pt's with that sql data file. naturally the (file) size will be substantially larger than the datastore, since it will be the ascii sql form, but given adequate resources i see no show stoppers.

make sure the receiving database tables are defined with compatible schema (names, types, sizes, etc). there are provisions to include the table schema in the dump as well, but don't go that way, because beancounter uses different schema between the mysql and postgresql backends. there's a slight possibility you might have to tweak some on pt's end to get a clean restore, but it shouldn't be a problem so long as beancounter's setup_beancounter and update_beancounter have already been run one time to create the database ...

good luck

ras

> Best Regards,
> Nick
> --
> Nicholas Kuechler
>
> On Jul 17, 2009, at 5:58 PM, Perl Trader <per...@gm...> wrote:
>> Greg Jessup wrote:
>>> Any tip to get the majority of NYSE and NASDAQ into beancounter to
>>> start off? I can get a list of all the symbols from
>>> http://www.eoddata.com but would love to take the lazy way out and
>>> get my db populated quickly.
>>
>> I'm interested in something like this too. Nick, any possibility that
>> you dump, compress and upload that database somewhere where we could
>> get it?
>>
>> Unless actually it is too much data for a typical DSL upload speed
>> (even compressed), which I'm afraid it will be.
>>
>> I haven't used beancounter in production yet, although I installed the
>> debian package and ran it once, just for fun. But it immediately broke
>> when I tried to pull data for just a dozen US stocks. Quite an ugly
>> error, something to do with the database. Left me a little
>> disappointed.
>>
>> I'll probably finish with my own database and appropriate module for
>> GT to access it. It's all too easy to accomplish that by modifying the
>> existing postgres.pm. I'd just like to make a headstart, just like
>> Greg, if it's possible, and get the most data at the start. Otherwise
>> I'll have to blast at Yahoo and burn gigabytes as ras says. :)
>>
>> --
>> PerlTrader
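[Editor's note: besides the dump-file route described above, the same migration can be done at the DBI level, SELECTing rows from one handle and INSERTing them into the other, which sidesteps incompatible dump formats entirely. The sketch below is hedged: a real run would use dbi:mysql:... and dbi:Pg:... connect strings with credentials, and beancounter's full stockprices schema; two in-memory SQLite handles (DBD::SQLite assumed installed) and a cut-down schema stand in so the sketch is self-contained.]

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Source and destination handles; only these connect strings would change
# for a real mysql-to-postgres copy.
my $src = DBI->connect("dbi:SQLite:dbname=:memory:", "", "", { RaiseError => 1 });
my $dst = DBI->connect("dbi:SQLite:dbname=:memory:", "", "", { RaiseError => 1 });

# As the thread notes, the schema must be compatible on both sides;
# this cut-down stockprices table is illustrative only.
for my $dbh ($src, $dst) {
    $dbh->do(q{CREATE TABLE stockprices (symbol TEXT, date TEXT, day_close REAL)});
}
$src->do(q{INSERT INTO stockprices VALUES ('IBM','2009-07-17',101.0)});
$src->do(q{INSERT INTO stockprices VALUES ('XOM','2009-07-17',68.0)});

# Row-by-row copy: fetch everything from the source, replay it into the
# destination with a prepared insert.
my $rows = $src->selectall_arrayref(
    q{SELECT symbol, date, day_close FROM stockprices});
my $ins = $dst->prepare(q{INSERT INTO stockprices VALUES (?, ?, ?)});
$ins->execute(@$_) for @$rows;

my ($n) = $dst->selectrow_array(q{SELECT COUNT(*) FROM stockprices});
print "copied $n rows\n";
```

For a full beancounter database one would batch the inserts inside a transaction, but the handle-to-handle pattern is the same.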
|
From: Nick K. <nku...@gm...> - 2009-07-18 06:36:50
|
I forgot to mention my beancounter DB is mysql backed. If you're using psql this will likely be a blocker for you. :P

Best Regards,
Nick
--
Nicholas Kuechler

On Jul 17, 2009, at 5:58 PM, Perl Trader <per...@gm...> wrote:
> Greg Jessup wrote:
>> Any tip to get the majority of NYSE and NASDAQ into beancounter to
>> start off? I can get a list of all the symbols from
>> http://www.eoddata.com but would love to take the lazy way out and get
>> my db populated quickly.
>
> I'm interested in something like this too. Nick, any possibility that
> you dump, compress and upload that database somewhere where we could
> get it?
>
> Unless actually it is too much data for a typical DSL upload speed
> (even compressed), which I'm afraid it will be.
>
> I haven't used beancounter in production yet, although I installed the
> debian package and ran it once, just for fun. But it immediately broke
> when I tried to pull data for just a dozen US stocks. Quite an ugly
> error, something to do with the database. Left me a little disappointed.
>
> I'll probably finish with my own database and appropriate module for GT
> to access it. It's all too easy to accomplish that by modifying the
> existing postgres.pm. I'd just like to make a headstart, just like
> Greg, if it's possible, and get the most data at the start. Otherwise
> I'll have to blast at Yahoo and burn gigabytes as ras says. :)
>
> --
> PerlTrader
|
From: Nick K. <nku...@gm...> - 2009-07-18 06:32:13
|
PT, I can link you to my db tomorrow. I work at a datacenter and have plenty of bandwidth available. Best Regards, Nick -- Nicholas Kuechler On Jul 17, 2009, at 5:58 PM, Perl Trader <per...@gm...> wrote: > Greg Jessup wrote: >> Any tip to get the majority of NYSE and NASDAQ into bean counter to >> start off? I can get a list of all the symbols from http://www.eoddata.com >> but would love to take the lazy way out and get my db populated >> quickly. > > I'm interested in something like this too. Nick, any possibility > that you dump, compress and upload that database somewhere where we > could get it? > > Unless actuall it is too much data for a typical DSL upload speed > (even compressed), which I'm afraid it will be. > > I haven't used beancounter in production yet, although I installed > the debian package and run it once, just for fun. But it immediately > broke when I tried to pull data for just a dozen of US stocks. Quite > ugly error, something to do with the database. Left me dissapointed > a little bit. > > I'll probably finish with my own database and appropriate module for > GT to access it. It's all too easy to accomplish that by modifying > the existing postgres.pm. I'd just like to make a headstart, just > like Greg, if it's possible, and get the most data in the start. > Otherwise I'll have to blast at Yahoo and burn gigabytes as ras > says. :) > -- > PerlTrader |
|
From: Nick K. <nku...@gm...> - 2009-07-18 06:30:24
|
I use psql 8.1 on linux at work with no issues. Haven't used sparc / solaris in ages, though. Best Regards, Nick -- Nicholas Kuechler On Jul 17, 2009, at 5:41 PM, "Robert A. Schmied" <ra...@ac...> wrote: > Nick Kuechler wrote: >> Yep I have crontabbed the beancounter update script. >> I use tha data for things besides GT so I need my own local copy >> of the data. >> ras: have you thought about turning postgresql's autovacuum feature >> on? > > probably not, although my version might predate that feature. will > have to > look around for docs on it. as an aside, way back ss10/20 days the > ross > processor mem cache was broken, and every full vacuum tended to > destroy > the database, so i may have disabled it and never reenabled. > > ras > >> Best Regards, >> Nick >> -- >> Nicholas Kuechler >> Contegix > << big snip >> |
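On the autovacuum question: in PostgreSQL 8.1 (the version Nick mentions) autovacuum first shipped in the core server but is off by default, and it depends on the row-level statistics collector. A sketch of the relevant postgresql.conf settings, assuming 8.1 -- an older install like Robert's may predate the integrated feature entirely (7.4/8.0 had it only as the contrib pg_autovacuum daemon):

```ini
# postgresql.conf -- enabling autovacuum on 8.1 (off by default there;
# it became on-by-default only in later releases)
stats_start_collector = on   # statistics collector must be running
stats_row_level = on         # autovacuum needs row-level stats (off by default in 8.1)
autovacuum = on
```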
|
From: Thomas W. <we...@ms...> - 2009-07-18 04:10:53
|
If you are running scans or backtesting over a large number of symbols, you will find that the constant pulling in of live data really adds up. Pulling data from Yahoo quotes is actually not that fast. What I would suggest is that you cache the market data once you have pulled it over. Set a parameter for how long you will consider the cache good (I would use one week for my application, others might choose 1 hour, it really depends on what you are doing). Only pull new data when the cache expires. Watch the data format, though, and the ordering of data (as by default it comes over backwards). Cheers, Th. Greg Jessup wrote: > Thanks for the info robert. Fair point on the Yahoo abuse. > As a matter of fact, I was able to get the yahoo quote to work as my > db, and when you are scanning multiple symbols for a system, it goes > to Yahoo each time and pulls the quote universe for each symbol. This > is kind of nasty for IBM or JNJ who have data back to the 60s. > > |
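Thomas's caching scheme can be sketched in shell along these lines. The fetcher command, cache location, and file naming are all assumptions for illustration -- GT and beancounter have no such hook built in; substitute whatever actually pulls your Yahoo history:

```shell
# Cache each symbol's history in a local file and only re-fetch when it
# is older than a configurable number of days.  "tac" flips the file so
# the newest-first order Yahoo delivers comes out chronological (this
# assumes the file has no header line).
cached_history() {
    sym="$1"
    max_age_days="${2:-7}"              # Thomas uses one week
    fetcher="${3:-fetch_yahoo_history}" # placeholder, not a real program
    dir="${QUOTE_CACHE:-$HOME/.quotecache}"
    mkdir -p "$dir"
    f="$dir/$sym.csv"
    # refresh only if the cache file is missing or has expired
    if [ ! -f "$f" ] || [ -n "$(find "$f" -mtime +"$max_age_days")" ]; then
        "$fetcher" "$sym" > "$f"
    fi
    tac "$f"                            # newest-first -> oldest-first
}
```

The expiry test piggybacks on the file's mtime, so "flushing" the cache is just deleting (or touching) files under the cache directory.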
|
From: R P H. <he...@ow...> - 2009-07-18 03:33:14
|
On Fri, 17 Jul 2009, Greg Jessup wrote: > Is there a module out there that will leverage Yahoo Quote History each time > for historical Data rather than using beancounter. Although beancounter is > easily configured, I find it annoying and a waste to maintain a database of > quotes when its very well maintained by yahoo. I think perhaps you miss the advantages which exist in the local caching of prior queries and inserts which beancounter provides. Speedier as well. The additional benefit of a defined result from a local dataset, rather than the comings and goings of 'holes' in the results from a network retrieval, comes to mind too. The beancounter corpus and code fill these holes. In checking just now, I find that I have 7162 symbols in my local beancounter database, and 18,592,720 EOD detail lines; it took 3.00 seconds to dump those 18 million lines across my local network from the database to my workstation. I rather doubt Yahoo across the internet can match that. --Russ herrold |
|
From: Robert A. S. <ra...@ac...> - 2009-07-18 03:11:46
|
Perl Trader wrote: > Greg Jessup wrote: > >> Any tip to get the majority of NYSE and NASDAQ into bean counter to >> start off? I can get a list of all the symbols from >> http://www.eoddata.com but would love to take the lazy way out and get >> my db populated quickly. >> > > I'm interested in something like this too. Nick, any possibility that > you dump, compress and upload that database somewhere where we could get > it? > > Unless actuall it is too much data for a typical DSL upload speed (even > compressed), which I'm afraid it will be. > > I haven't used beancounter in production yet, although I installed the > debian package and run it once, just for fun. But it immediately broke > when I tried to pull data for just a dozen of US stocks. Quite ugly > error, something to do with the database. Left me dissapointed a little > bit. we might be able to assist with problem resolution, but will need a lot more to work with ... what db engine, what command blew up, with what exact results, etc > > I'll probably finish with my own database and appropriate module for GT > to access it. It's all too easy to accomplish that by modifying the > existing postgres.pm. I'd just like to make a headstart, just like Greg, > if it's possible, and get the most data in the start. Otherwise I'll > have to blast at Yahoo and burn gigabytes as ras says. :) people unless there's a really really really good reason for another database (i think that there is, but i've yet to see a good description and requirements outline of it) i'm concerned that all this 'personalized database' effort may be expended on a private branch, that then leads to issues with adding updates and fixes on the main branch. pt -- i've not looked at postgres.pm, but i'm gonna guess it doesn't provide the front end that fetches the eod, nor does it have the features needed to manage the saved data. beancounter provides all that, plus the database itself, it (bc) just needs some addons. 
at least that was the way i saw it (still do) when i started out with bc and gt. (i probably had a bc db before i discovered gt, but that was before time)

the stock beancounter collects a bunch of data and throws much of it away. at the very least i'd like to have it (or maybe a super set of it) kept (it doesn't have to be 'beancounter' -- any generalized application or bunch of them working together would be fine) and then have gt module(s) developed that allow for access of any tuple (column) in the database. while we're at it, it might also be reasonable to plan a way for a single value tuple rather than a column.

it's not good that there isn't a forum for beancounter users to post candidate improvements, but that doesn't stop one from building onto the basic beancounter database. the stock beancounter application doesn't even need to be aware of the underlying db changes provided they don't interfere -- here are some of the hacks i've done ... working from memory

stock beancounter (via Finance::YahooQuote) collects "snl1d1t1c1p2va2bapomwerr1dyj1x"

(tangent: bc: Day's Range 'm' should be replaced with 'g' low and 'h' high; similarly, 52-week Range 'w' should be replaced with 'j' low and 'k' high, because i've been getting more and more bad data from the combined codes, so much so that i've added extra handling around the split when the data is first parsed ... but the change isn't easy to isolate so i've left it as is and work around bad data. missing data today is automagically backpop'ed tomorrow in my bc hack sanity checking, and in gt (via the sql fetch) i take steps to filter price data as best i can so it's as consistent as i can make it given o, h, l, c and change values on a given day. end tangent)

but only saves symbol, name, exchange, capitalisation, low_52weeks, high_52weeks, earnings, dividend, p_e_ratio, avg_volume in stockinfo and symbol, date, previous_close, day_open, day_low, day_high, day_close, day_change, bid, ask, volume in stockprices

i've added another table that keeps, on a daily basis, symbol, ex_div_date, div_pmt_date, annual_div, div_pct_yld (yea, i duplicate symbol instead of doing it right with a key to stockinfo). the data for ex_div_date, div_pmt_date is unreliable and usually in the past.

i have a table stockstats for

       Column        |         Type          |               Modifiers
 --------------------+-----------------------+----------------------------------------
  symbol             | character varying(12) | not null default ''::character varying
  indices            | character varying(64) | not null default ''::character varying
  sector             | character varying(64) | not null default ''::character varying
  industry           | character varying(64) | not null default ''::character varying
  fiscal_year_end    | date                  |
  most_recent_qtr    | date                  |
  shares_outstanding | bigint                |
  float              | bigint                |
  market_cap         | bigint                |
  bookvalue          | bigint                |
  revenue            | bigint                |
  ebitda             | bigint                |
  totalcash          | bigint                |
  totaldebt          | bigint                |
  eps                | real                  |
  rps                | real                  |

but have yet to integrate that into my beancounter. the end result of all this is to be able to provide additional data to either gt or other ta tool(s) in a "standard" way (in my case via sql engine)

things i'd like to 'keep' but don't know how to 1) get for free 2) store via sql database are quarterly type data (eps, earnings, revenue, sales etc) that does not necessarily happen on any specific day, but can happen on any day ... this stuff can be scraped off yahoo (and other) web pages, but that is risky and error prone. some of the data might have to be derived on demand (computed), but how to manage and maintain database integrity with things that have only four (or two or twelve) entries per year?

then there are the merger and acquisition headaches and other corporate events like dividends (regular and special) and stock splits (both forward and reverse) -- headache producers all

where is all this going -- i dunno, just thought i'd throw it out there and see if anyone wants to comment ... in summary

i) how to store stuff on timeframes spanning many days (without storing the same (or very similar) data every day)
ii) how to determine when one of these many-day periods has elapsed and the data needs to be collected and stored
iii) how to collect data not readily provided via Finance::YahooQuote or equivalent and not pay much for it ...
iv) how to track stock splits and maintain consistent data
v) how to feed this new data to gt without breaking it (in other words make no changes that require current users to upgrade both gt and their database (without a very very very compelling benefit))

ras |
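One conventional answer to point (i) above, offered as a sketch rather than a beancounter feature: key the quarterly table by the fiscal period instead of the market day, so a value that changes four times a year is stored four times a year. The table and column names here are invented for illustration and are not part of the beancounter schema:

```shell
# hypothetical DDL, fed to the existing beancounter database via psql
psql beancounter <<'SQL'
create table stockquarters (
    symbol      varchar(12) not null,
    period_end  date        not null,  -- fiscal quarter end, not a market day
    reported_on date,                  -- when the figure actually appeared
    eps         real,
    revenue     bigint,
    primary key (symbol, period_end)
);
SQL
```

The eps "as of" any date D is then the row with the greatest period_end <= D, which also gives a cheap test for point (ii): if today is past the expected next period_end and no row exists yet, it's time to collect.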
|
From: Robert A. S. <ra...@ac...> - 2009-07-18 01:13:00
|
Greg Jessup wrote: > Any tip to get the majority of NYSE and NASDAQ into bean counter to start > off? I can get a list of all the symbols from http://www.eoddata.com but > would love to take the lazy way out and get my db populated quickly. > > thanks again ... > -greg > > > greg take the symbols list, chop it into reasonable-sized groups, and pass the groups to beancounter. reasonable size is relative, both to my antique hardware and to being respectful to yahoo, but in any case they seem to want symbols chunked to less than a couple hundred (if memory serves) when getting eod. % beancounter addstock "group" then, with the group added, pass the group to beancounter again % beancounter backpopulate --prevdate $PRVDATE --date $LASTDATE "group" the hack provided is guaranteed to only work for me ;-0 and it requires the (sun) korn shell ras << snip >> |
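Robert's recipe above can be sketched in portable shell. The symbol file, chunk size, and date range are assumptions; the runner argument exists only so the beancounter calls can be dry-run with `echo`:

```shell
# Split a one-symbol-per-line file into chunks of 100 (comfortably under
# the "couple hundred" ceiling) and feed each chunk to beancounter:
# first addstock, then backpopulate over the requested date range.
bulk_load() {
    symfile="$1"; prevdate="$2"; lastdate="$3"; runner="${4:-beancounter}"
    tmp=$(mktemp -d)
    split -l 100 "$symfile" "$tmp/group_"
    for g in "$tmp"/group_*; do
        syms=$(xargs < "$g")   # join the chunk's lines into one argument list
        "$runner" addstock $syms
        "$runner" backpopulate --prevdate "$prevdate" --date "$lastdate" $syms
    done
    rm -rf "$tmp"
}
# usage: bulk_load symbols.txt 2003-12-29 2009-07-17
```

`$syms` is deliberately unquoted so each symbol lands as its own argument.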
|
From: Perl T. <per...@gm...> - 2009-07-18 01:00:16
|
Greg Jessup wrote: > Any tip to get the majority of NYSE and NASDAQ into bean counter to > start off? I can get a list of all the symbols from > http://www.eoddata.com but would love to take the lazy way out and get > my db populated quickly. > I'm interested in something like this too. Nick, any possibility that you dump, compress and upload that database somewhere where we could get it? Unless it is actually too much data for a typical DSL upload speed (even compressed), which I'm afraid it will be. I haven't used beancounter in production yet, although I installed the debian package and ran it once, just for fun. But it immediately broke when I tried to pull data for just a dozen US stocks. Quite an ugly error, something to do with the database. Left me disappointed a little bit. I'll probably finish with my own database and an appropriate module for GT to access it. It's all too easy to accomplish that by modifying the existing postgres.pm. I'd just like to get a head start, just like Greg, if it's possible, and get the most data at the start. Otherwise I'll have to blast at Yahoo and burn gigabytes as ras says. :) -- PerlTrader |
|
From: Robert A. S. <ra...@ac...> - 2009-07-18 00:58:34
|
greg please watch the darned subject string -- it gets less useful when Re:'s start piling up ... Greg Jessup wrote: > Just out of curiosity how many symbols are you storing? And back how far > historically? $ psql beancounter \ -c"select symbol, name from stockinfo where active order by symbol;" ... (803 rows) these are only the ones being actively updated -- those deactivated are the ones that get acquired, or otherwise just go away. i have a wrapper that adds and backpops things of interest. currently it automatically uses 2003-12-29 as the previous date -- humm might be time to pull that forward a year now ... don't off-hand recall how to get the disk usage of the database ras << big snip >> |
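On the disk-usage question: if the server is new enough, something like the following reports it (pg_database_size and pg_size_pretty are built in from PostgreSQL 8.1 on; older versions offered an equivalent via the contrib dbsize module, so a vintage install may need that instead):

```shell
# on-disk size of the beancounter database (needs postgres >= 8.1)
psql beancounter -c "select pg_size_pretty(pg_database_size('beancounter'));"
```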
|
From: Robert A. S. <ra...@ac...> - 2009-07-18 00:43:05
|
Nick Kuechler wrote: > Yep I have crontabbed the beancounter update script. > > I use tha data for things besides GT so I need my own local copy of the > data. > > ras: have you thought about turning postgresql's autovacuum feature on? probably not, although my version might predate that feature. will have to look around for docs on it. as an aside, way back ss10/20 days the ross processor mem cache was broken, and every full vacuum tended to destroy the database, so i may have disabled it and never reenabled. ras > > Best Regards, > Nick > -- > Nicholas Kuechler > Contegix > << big snip >> |
|
From: Greg J. <gr...@gr...> - 2009-07-18 00:42:43
|
Any tip to get the majority of NYSE and NASDAQ into bean counter to start off? I can get a list of all the symbols from http://www.eoddata.com but would love to take the lazy way out and get my db populated quickly. thanks again ... -greg On Fri, Jul 17, 2009 at 6:36 PM, Robert A. Schmied <ra...@ac...> wrote: > Greg Jessup wrote: > >> Thanks for the info robert. Fair point on the Yahoo abuse. >> As a matter of fact, I was able to get the yahoo quote to work as my db, >> and >> when you are scanning multiple symbols for a system, it goes to Yahoo each >> time and pulls the quote universe for each symbol. This is kind of nasty >> for >> IBM or JNJ who have data back to the 60s. >> >> I actually got beancounter up and running on without issue, I suppose it >> would be easy to write something to store the history. >> > > huh! so long as beancounter is up and running just do a market day cron > run and all the symbols will be updated -- it doesn't matter what engine > you are using, just so long as bc is working ... > > 43 13 * * 1-5 /your/path/to/beancounter/script/beancounter update > > >> How are you currently populating your sql database daily. Cron job which >> kicks off perl script? >> >> > 'tis truly a shame that dirk 'beancounter' doesn't have a way to 'share' > user hacks, because i think (majority of one anyway) some of my hacks are > useful ... > > anyway, i have a couple of cron entries that conspire to run both > beancounter > for daily eod price updates (only on market days) and to run various gt > script > apps, again only on market days. > > they are kludges, likely suited only for crontab on solaris sparc and for > the way i operate (think reverse polish) > > aloha > > ras > > > > -Greg >> >> >> >> On Fri, Jul 17, 2009 at 6:03 PM, Robert A. Schmied <ra...@ac...> wrote: >> >> >> Greg Jessup wrote: >>> >>> >>> Is there a module out there that will leverage Yahoo Quote History each >>>> time >>>> for historical Data rather than using beancounter. 
Although beancounter >>>> is >>>> easily configured, I find it annoying and a waste to maintain a database >>>> of >>>> quotes when its very well maintained by yahoo. >>>> >>>> I am currently editing the module I found here >>>> http://www.olfsworld.de/projects/gt/db/QuoteHist.pm ...but its taking >>>> more >>>> time than I'd like do to some oddities I am unaware of with timeframe. >>>> >>>> If anyone has anything, please either post or email it. >>>> >>>> Thanks, >>>> >>>> Greg >>>> >>>> >>>> >>> the kids today -- they've got nothing but bandwidth and gigabytes to burn >>> ... >>> >>> >>> greg >>> >>> i've not looked in detail at that, but oliver was possibly moving in that >>> direction: his GT::DB::CSV and GT::DB::HTTP might be what you're after, >>> but >>> from my reading of the gt users most are either using a flat file based >>> database or beancounter on a sql engine, all with a local data store. >>> >>> that's not to say an addon that fetches data on demand wouldn't be a nice >>> addition, but it would tend to abuse the free service providers like >>> yahoo >>> because you might be charting 100s of symbols a couple of times a market >>> day with different time frames. with a local store you only hit them >>> once for the eod and maintain it locally. if you pay for the service then >>> there is no abuse 'cause it's part of the service, but at some point >>> yahoo may actually have to throttle the free prices data and then >>> everyone >>> will suffer. >>> >>> i, like nick, use beancounter on postgesql and have few problems, but i >>> have tweaked bc a bit to do auto-updates plus some internal data sanity >>> checks. other additions should include periodic vacuuming, because >>> postgesql needs it, especially because of the way bc handles the >>> stockinfo >>> table. >>> >>> on the other hand you could use a flat file ascii csv like storage scheme >>> and not have to deal with a sql engine at all. 
>>> >>> that's my couple cents >>> >>> ras >>> >>> >>> >> >> > |
|
From: Nick K. <nku...@gm...> - 2009-07-18 00:40:50
|
EOD data for all US stocks for the last 10 years. It's not really much data. Best Regards, Nick -- Nicholas Kuechler On Jul 17, 2009, at 5:29 PM, Greg Jessup <gr...@gr...> wrote: > Just out of curiosity how many symbols are you storing? And back how > far historically? > > On Fri, Jul 17, 2009 at 6:24 PM, Nick Kuechler <nku...@gm...> > wrote: > Yep I have crontabbed the beancounter update script. > > I use tha data for things besides GT so I need my own local copy of > the data. > > ras: have you thought about turning postgresql's autovacuum feature > on? > > Best Regards, > Nick > -- > Nicholas Kuechler > Contegix > > On Jul 17, 2009, at 5:11 PM, Greg Jessup <gr...@gr...> wrote: > >> Thanks for the info robert. Fair point on the Yahoo abuse. >> As a matter of fact, I was able to get the yahoo quote to work as >> my db, and when you are scanning multiple symbols for a system, it >> goes to Yahoo each time and pulls the quote universe for each >> symbol. This is kind of nasty for IBM or JNJ who have data back to >> the 60s. >> >> I actually got beancounter up and running on without issue, I >> suppose it would be easy to write something to store the history. >> >> How are you currently populating your sql database daily. Cron job >> which kicks off perl script? >> >> -Greg >> >> >> On Fri, Jul 17, 2009 at 6:03 PM, Robert A. Schmied <ra...@ac...> >> wrote: >> Greg Jessup wrote: >> Is there a module out there that will leverage Yahoo Quote History >> each time >> for historical Data rather than using beancounter. Although >> beancounter is >> easily configured, I find it annoying and a waste to maintain a >> database of >> quotes when its very well maintained by yahoo. >> >> I am currently editing the module I found here >> http://www.olfsworld.de/projects/gt/db/QuoteHist.pm ...but its >> taking more >> time than I'd like do to some oddities I am unaware of with >> timeframe. >> >> If anyone has anything, please either post or email it. 
>> >> Thanks, >> >> Greg >> >> >> the kids today -- they've got nothing but bandwidth and gigabytes >> to burn ... >> >> >> greg >> >> i've not looked in detail at that, but oliver was possibly moving >> in that >> direction: his GT::DB::CSV and GT::DB::HTTP might be what you're >> after, but >> from my reading of the gt users most are either using a flat file >> based >> database or beancounter on a sql engine, all with a local data store. >> >> that's not to say an addon that fetches data on demand wouldn't be >> a nice >> addition, but it would tend to abuse the free service providers >> like yahoo >> because you might be charting 100s of symbols a couple of times a >> market >> day with different time frames. with a local store you only hit them >> once for the eod and maintain it locally. if you pay for the >> service then >> there is no abuse 'cause it's part of the service, but at some point >> yahoo may actually have to throttle the free prices data and then >> everyone >> will suffer. >> >> i, like nick, use beancounter on postgesql and have few problems, >> but i >> have tweaked bc a bit to do auto-updates plus some internal data >> sanity >> checks. other additions should include periodic vacuuming, because >> postgesql needs it, especially because of the way bc handles the >> stockinfo >> table. >> >> on the other hand you could use a flat file ascii csv like storage >> scheme >> and not have to deal with a sql engine at all. >> >> that's my couple cents >> >> ras >> >> > |
|
From: Robert A. S. <ra...@ac...> - 2009-07-18 00:38:02
|
Greg Jessup wrote: > Thanks for the info robert. Fair point on the Yahoo abuse. > As a matter of fact, I was able to get the yahoo quote to work as my db, and > when you are scanning multiple symbols for a system, it goes to Yahoo each > time and pulls the quote universe for each symbol. This is kind of nasty for > IBM or JNJ who have data back to the 60s. > > I actually got beancounter up and running on without issue, I suppose it > would be easy to write something to store the history. huh! so long as beancounter is up and running just do a market day cron run and all the symbols will be updated -- it doesn't matter what engine you are using, just so long as bc is working ... 43 13 * * 1-5 /your/path/to/beancounter/script/beancounter update > > How are you currently populating your sql database daily. Cron job which > kicks off perl script? > 'tis truly a shame that dirk 'beancounter' doesn't have a way to 'share' user hacks, because i think (majority of one anyway) some of my hacks are useful ... anyway, i have a couple of cron entries that conspire to run both beancounter for daily eod price updates (only on market days) and to run various gt script apps, again only on market days. they are kludges, likely suited only for crontab on solaris sparc and for the way i operate (think reverse polish) aloha ras > -Greg > > > > On Fri, Jul 17, 2009 at 6:03 PM, Robert A. Schmied <ra...@ac...> wrote: > > >>Greg Jessup wrote: >> >> >>>Is there a module out there that will leverage Yahoo Quote History each >>>time >>>for historical Data rather than using beancounter. Although beancounter is >>>easily configured, I find it annoying and a waste to maintain a database >>>of >>>quotes when its very well maintained by yahoo. >>> >>>I am currently editing the module I found here >>>http://www.olfsworld.de/projects/gt/db/QuoteHist.pm ...but its taking >>>more >>>time than I'd like do to some oddities I am unaware of with timeframe. 
>>> >>>If anyone has anything, please either post or email it. >>> >>>Thanks, >>> >>>Greg >>> >>> >> >>the kids today -- they've got nothing but bandwidth and gigabytes to burn >>... >> >> >>greg >> >>i've not looked in detail at that, but oliver was possibly moving in that >>direction: his GT::DB::CSV and GT::DB::HTTP might be what you're after, but >>from my reading of the gt users most are either using a flat file based >>database or beancounter on a sql engine, all with a local data store. >> >>that's not to say an addon that fetches data on demand wouldn't be a nice >>addition, but it would tend to abuse the free service providers like yahoo >>because you might be charting 100s of symbols a couple of times a market >>day with different time frames. with a local store you only hit them >>once for the eod and maintain it locally. if you pay for the service then >>there is no abuse 'cause it's part of the service, but at some point >>yahoo may actually have to throttle the free prices data and then everyone >>will suffer. >> >>i, like nick, use beancounter on postgesql and have few problems, but i >>have tweaked bc a bit to do auto-updates plus some internal data sanity >>checks. other additions should include periodic vacuuming, because >>postgesql needs it, especially because of the way bc handles the stockinfo >>table. >> >>on the other hand you could use a flat file ascii csv like storage scheme >>and not have to deal with a sql engine at all. >> >>that's my couple cents >> >>ras >> >> > > |
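Robert's "only on market days" cron setup can be sketched as a small wrapper. The crontab's 1-5 day-of-week field already skips weekends; this guard also bails out on exchange holidays listed (one YYYY-MM-DD per line) in a hand-maintained file -- the holiday file is an assumption, beancounter itself knows nothing about it, and the runner argument exists only so the call can be dry-run:

```shell
# run "beancounter update" unless today appears in the holiday file
market_day_update() {
    holidays="$1"; runner="${2:-beancounter}"
    today=$(date +%Y-%m-%d)
    if [ -f "$holidays" ] && grep -qx "$today" "$holidays"; then
        return 0                # exchange closed; skip today's update
    fi
    "$runner" update
}
# crontab entry, per Robert's example:
# 43 13 * * 1-5 /path/to/market_day_update_wrapper
```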
|
From: Greg J. <gr...@gr...> - 2009-07-18 00:30:25
|
Just out of curiosity how many symbols are you storing? And back how far historically? On Fri, Jul 17, 2009 at 6:24 PM, Nick Kuechler <nku...@gm...> wrote: > Yep I have crontabbed the beancounter update script. > > I use tha data for things besides GT so I need my own local copy of the > data. > > ras: have you thought about turning postgresql's autovacuum feature on? > > Best Regards, > Nick > --Nicholas Kuechler > Contegix > > On Jul 17, 2009, at 5:11 PM, Greg Jessup <gr...@gr...> wrote: > > Thanks for the info robert. Fair point on the Yahoo abuse. > As a matter of fact, I was able to get the yahoo quote to work as my db, > and when you are scanning multiple symbols for a system, it goes to Yahoo > each time and pulls the quote universe for each symbol. This is kind of > nasty for IBM or JNJ who have data back to the 60s. > > I actually got beancounter up and running on without issue, I suppose it > would be easy to write something to store the history. > > How are you currently populating your sql database daily. Cron job which > kicks off perl script? > > -Greg > > > > On Fri, Jul 17, 2009 at 6:03 PM, Robert A. Schmied < <ra...@ac...> > ra...@ac...> wrote: > >> Greg Jessup wrote: >> >>> Is there a module out there that will leverage Yahoo Quote History each >>> time >>> for historical Data rather than using beancounter. Although beancounter >>> is >>> easily configured, I find it annoying and a waste to maintain a database >>> of >>> quotes when its very well maintained by yahoo. >>> >>> I am currently editing the module I found here >>> <http://www.olfsworld.de/projects/gt/db/QuoteHist.pm> >>> http://www.olfsworld.de/projects/gt/db/QuoteHist.pm ...but its taking >>> more >>> time than I'd like do to some oddities I am unaware of with timeframe. >>> >>> If anyone has anything, please either post or email it. >>> >>> Thanks, >>> >>> Greg >>> >>> >> the kids today -- they've got nothing but bandwidth and gigabytes to burn >> ... 
>> >> >> greg >> >> i've not looked in detail at that, but oliver was possibly moving in that >> direction: his GT::DB::CSV and GT::DB::HTTP might be what you're after, >> but >> from my reading of the gt users most are either using a flat file based >> database or beancounter on a sql engine, all with a local data store. >> >> that's not to say an addon that fetches data on demand wouldn't be a nice >> addition, but it would tend to abuse the free service providers like yahoo >> because you might be charting 100s of symbols a couple of times a market >> day with different time frames. with a local store you only hit them >> once for the eod and maintain it locally. if you pay for the service then >> there is no abuse 'cause it's part of the service, but at some point >> yahoo may actually have to throttle the free prices data and then everyone >> will suffer. >> >> i, like nick, use beancounter on postgesql and have few problems, but i >> have tweaked bc a bit to do auto-updates plus some internal data sanity >> checks. other additions should include periodic vacuuming, because >> postgesql needs it, especially because of the way bc handles the stockinfo >> table. >> >> on the other hand you could use a flat file ascii csv like storage scheme >> and not have to deal with a sql engine at all. >> >> that's my couple cents >> >> ras >> >> > |