2013/2/27 Enno Borgsteede <ennoborg@gmail.com>
Hi Tim,
> I still don't understand how doing less in one transaction (making a
> separate transaction for each citation merge) can possibly be as fast.
> Surely data has to be really written at the end of each transaction,
> and writing to disc is slower than processing the accumulated
> transaction in memory?

A commit will indeed be flushed, I believe, but I would not really worry about whether many small transactions will be slower than a single large one. One could argue either way. A power failure will definitely not take as long to recover from with several small transactions.

I would design it mostly so that a transaction ensures the database is not left in an inconsistent state, not so that a large action is done all together or not at all. The only exception is import, where things become more tricky. Even then, a rollback is possible via db tools.
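To make that concrete, here is a minimal sketch of the "one small transaction per citation merge" idea, using Python's sqlite3 and a made-up schema (the table and function names are illustrative, not Gramps code). Each merge commits on its own, so an interruption partway through a batch leaves every completed merge consistent:

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE citation (id INTEGER PRIMARY KEY, ref TEXT)")
conn.executemany("INSERT INTO citation (ref) VALUES (?)",
                 [("a",), ("a",), ("b",)])
conn.commit()

def merge_pair(conn, keep_id, drop_id):
    """Merge one duplicate citation inside its own small transaction."""
    with conn:  # commits on success, rolls back on exception
        conn.execute("DELETE FROM citation WHERE id = ?", (drop_id,))

merge_pair(conn, keep_id=1, drop_id=2)
remaining = conn.execute("SELECT COUNT(*) FROM citation").fetchone()[0]
```

The `with conn:` context manager is what gives the consistency guarantee: either the whole merge lands, or none of it does.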


If there's a write cache, writing to disc may not be as slow as you think, and I have no idea whether the accumulated transaction information is held in memory only. It may be written to some file on disc too.

And besides, my test was done with locks on, and maybe locks make accumulating a large transaction even worse.

Anyway, if a transaction is large enough, memory management alone may bring the process to a halt at some point. That's why I still think transactions are better kept as small as possible.
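A common middle ground, if per-item transactions turn out too slow, is to commit in fixed-size chunks so the uncommitted working set stays bounded. A sketch, again with sqlite3 and illustrative names (the chunk size is an arbitrary assumption):

```python
import sqlite3

CHUNK = 1000  # assumed batch size; tune for the workload

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, val TEXT)")

pending = 0
for i in range(2500):
    conn.execute("INSERT INTO item (val) VALUES (?)", (str(i),))
    pending += 1
    if pending >= CHUNK:
        conn.commit()  # flush this batch; a fresh transaction starts
        pending = 0
conn.commit()  # commit the final partial batch

count = conn.execute("SELECT COUNT(*) FROM item").fetchone()[0]
```

This keeps each transaction's memory footprint roughly constant, while still amortizing the per-commit flush cost over many operations.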



Gramps-devel mailing list