From: <kt...@ri...> - 2015-08-12 13:09:57
On Wed, Aug 12, 2015 at 12:29:56PM +0200, Jean-Michel Pouré - GOOZE wrote:
> Dear all,
>
> I would like to build a middle/big-size SQLgrey database for testing,
> as in the situation of a middle-size ISP.
>
> My first settings show that for each user, 50 to 100 records are added
> to the database every day. Records are added and removed, so we need to
> monitor insertion, deletion, and query performance.
>
> If I consider 1 million users and a 1-day reconnect window, it is a database of:
> 1.000.000 x 50 = 50 million records added and deleted every day.
>
> Is this the kind of size that SQLgrey is facing? If this is the case,
> we probably need very special indexing (or no indexing at all).
>
> In your opinion, what is the size of a large SQLgrey database?
>
> Kind regards,
> Jean-Michel

Hi Jean-Michel,

50 million adds/deletes = 100 million transactions/day, which works out to roughly 1,157 per second. Even with a standard hard drive, given enough memory and with synchronous commit turned off, this is easily manageable, and for a completely in-memory DB it is not even large.

Regards,
Ken
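[Editor's note: a quick back-of-the-envelope check of the throughput figure above, as a Python sketch. The user count and per-user record rate are the numbers quoted in the thread; the 86,400-second day is the only added assumption.]

    # Sanity check of "100 million transactions/day ~ 1157 per second".
    users = 1_000_000
    records_per_user_per_day = 50          # from Jean-Michel's estimate

    inserts_per_day = users * records_per_user_per_day   # 50 million inserts
    deletes_per_day = inserts_per_day                     # each record is also removed
    writes_per_day = inserts_per_day + deletes_per_day    # ~100 million write transactions

    writes_per_second = writes_per_day / 86_400           # seconds in a day
    print(f"{writes_per_day:,} writes/day ~= {writes_per_second:,.0f} writes/sec")
    # -> 100,000,000 writes/day ~= 1,157 writes/sec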