From: Enno B. <enn...@gm...> - 2014-04-09 13:49:31
Ron,

> Not magically. The developers would have to rewrite Gramps to use the
> multiprocessing module, and that will create the need to synchronize
> tasks, pass all sorts of messages back and forth, handle even more
> kinds of catastrophic errors, etc, etc. Not something that I see
> non-University volunteers of a stable project doing any time soon.

You'd be surprised by the number of university-level developers here, I think. That's not to say that I know everyone's degree or working environment, nor that I plan to reveal what I know, but there are about a handful that I know of, myself included, although I don't do more than hacking now.

> Anyway, Gramps is a database program, not a computation program, so
> the biggest gain would be from a multi-threaded db module, but even
> that has choke points, since we don't want Gramps inserting garbage
> into our trees, and the checking required doesn't come cheap.

Hmm, yes, multi-threading may give more performance when it allows more cores to be used (I have 8), but I'm convinced that the real route to a performance gain is redesign. On my i7, PAF can import 1 million persons and their associated data in about 11 minutes, using one core. The PAF import progress window suggests that it reads the GEDCOM once, writing individuals right away, and then writes families, sources, and repositories, followed by their connecting links, thereby avoiding random reads and writes to the database.

Similar techniques could be applied to Gramps, and were already suggested in a thread referenced on
https://www.gramps-project.org/wiki/index.php?title=GEPS_016:_Enhancing_Gramps_Processing_Speed

And even though parts of Gramps are interpreted code, I don't think that that is a performance problem by itself, because the database is compiled code, and compiled Windows programs like FTM and Legacy prove to be slower than Gramps on GEDCOM import here.

regards,

Enno
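The batched import idea described above could be sketched roughly like this in Python. To be clear, none of these names are real Gramps or PAF APIs; `batched_import`, `MemoryDB`, and the record-type tags are hypothetical, just to illustrate the technique: read the input once in file order, group records by type, defer the cross-references, and write each group out in one sequential batch instead of interleaving random writes.

```python
# Illustrative sketch only -- not Gramps code. Shows the single-pass,
# batched import strategy: one sequential read, deferred links, and
# one sequential bulk write per record category.
from collections import defaultdict


class MemoryDB:
    """Toy stand-in for the real backend; just collects batched writes."""

    def __init__(self):
        self.tables = {}

    def bulk_write(self, rtype, items):
        # In a real backend this would be one sequential write/transaction.
        self.tables.setdefault(rtype, []).extend(items)


def batched_import(records, db):
    """records: iterable of (record_type, data) pairs in file order.
    db: any object with a bulk_write(record_type, items) method (assumed)."""
    buffers = defaultdict(list)
    links = []
    for rtype, data in records:   # single sequential pass over the input
        if rtype == "LINK":
            links.append(data)    # defer cross-references until the end
        else:
            buffers[rtype].append(data)
    # Individuals first, then families, sources, repositories --
    # each category written in one sequential batch.
    for rtype in ("INDI", "FAM", "SOUR", "REPO"):
        if buffers[rtype]:
            db.bulk_write(rtype, buffers[rtype])
    if links:
        db.bulk_write("LINK", links)  # finally connect the records
```

Wrapping each `bulk_write` in a single transaction would give the backend the same kind of sequential write pattern that the PAF progress window seems to suggest.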