From: Mike W. <mw...@mi...> - 2002-09-20 19:54:42
I realize my question is largely a Postgres domain question, but I would like to know if any PyPgSQL users are routinely manipulating data sets containing tens of millions of records.

I currently use Postgres and PyPgSQL in a web content management application, and due to the relatively light loads on the database I don't have a good feel for how the tools will stack up when doing statistical processing. The application is a trend / membership / donation tracking application and will contain data for twenty million or so individuals, although the specific interactions that will be reported on (trends and events affecting individuals) will probably touch a very small percentage of the total data collection.

Just wondering if anyone has high-level comments like "don't!" or "works great for us". The absence of horror stories will give me reason enough to do some performance testing on my own. Anecdotes and recipes for chocolate cake always welcome.

Mike
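For the performance testing mentioned above, a minimal timing harness might look like the sketch below. It times any zero-argument callable, so the same helper works whether the callable wraps a pyPgSQL cursor's `execute()` or a plain Python stand-in; the query text and connection details in the comment are hypothetical examples, not anything from this thread.

```python
import time

def time_query(run_query, repeats=3):
    """Run a query callable several times; return (best, mean) in seconds.

    run_query: zero-argument callable executing one query, e.g. a closure
    over a DB-API cursor such as pyPgSQL's:
        lambda: cur.execute("SELECT count(*) FROM donations WHERE ...")
    (table name and query here are illustrative only).
    Taking the best of several runs helps smooth out cache warm-up effects.
    """
    samples = []
    for _ in range(repeats):
        start = time.time()
        run_query()
        samples.append(time.time() - start)
    return min(samples), sum(samples) / len(samples)

# CPU-bound stand-in so the harness runs without a database:
best, mean = time_query(lambda: sum(range(100000)))
```

In practice you would pass a closure over a real cursor and compare timings against `EXPLAIN` output on the Postgres side to see where the cost actually lands.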