From: kghbln <med...@kg...> - 2016-08-18 14:41:55
Heiya James,

> In regards to "consuming 668 MB of RAM", I'll be making a claim
> (without proof) that in previous releases those numbers would most
> likely be higher or worse.

I believe that you are indeed right about this. Actually, I was not unhappy with the performance, to tell you the truth. The figures I provided were primarily meant as a real-life benchmark for a more complex wiki of the described size.

Cheers
Karsten

On 18.08.2016 at 16:32, James HK wrote:
> Hi,
>
>> took me 9 1/2 hours to do a full rebuild for about 440,000 object IDs
>> (1,142,000 annotations) consuming 668 MB of RAM with everything on one
>> server.
>
> In regards to "consuming 668 MB of RAM", I'll be making a claim
> (without proof) that in previous releases those numbers would most
> likely be higher or worse. Since 2.3 we have been actively employing LRU [0]
> as a mechanism to put a ceiling on in-memory cache instances.
>
> The reason why we are relying on caching intermediary results
> should be immediately visible when looking at [1].
>
> [0] https://en.wikipedia.org/wiki/Cache_algorithms#LRU
> [1] https://github.com/SemanticMediaWiki/SemanticMediaWiki/issues/1799
>
> Cheers
>
> On 8/18/16, kghbln <med...@kg...> wrote:
>> Heiya Justin,
>>
>> indeed the "rebuildData.php" script does sometimes run for quite a while.
>> It all depends on the situation with your wiki: the more "semantic load"
>> (number of properties used, number of annotations, number of queries),
>> in other words complexity, there is on a page, the longer it takes. This
>> adds up the more pages you have.
>>
>> I myself rarely do full rebuilds, and for the "biggest" wiki I control,
>> which is a rather complex one with only a few thousand pages, it indeed
>> took me 9 1/2 hours to do a full rebuild for about 440,000 object IDs
>> (1,142,000 annotations), consuming 668 MB of RAM with everything on one
>> server.
>>
>> To me it sounds like it takes much longer than expected at your
>> end.
>> Also, I have never run into database deadlocks yet, so the server
>> setup may be something to look at here too. Perhaps there is something
>> in the water, but I do not know what this could be exactly.
>>
>> Cheers
>> Karsten
>>
>> On 18.08.2016 at 00:47, Justin Lloyd wrote:
>>> Hi all,
>>>
>>> We have four SMW-based wikis, two of which currently have about 770,000
>>> properties and one with about 1.7 million. We’ve been rebuilding the
>>> semantic data in them in the hope that doing so would resolve an issue
>>> we’ve been seeing for a while, but the rebuilds take an insanely long
>>> time, on the order of days, and the current rebuild of the largest wiki
>>> is projected to take around a week (more as the rebuildData script keeps
>>> getting interrupted by database deadlock errors). Does that sound normal,
>>> or is it possible I’m doing something wrong that is making it take far
>>> longer than it should?
>>>
>>> Thanks,
>>> Justin
>>>
>>> ------------------------------------------------------------------------------
>>> _______________________________________________
>>> Semediawiki-user mailing list
>>> Sem...@li...
>>> https://lists.sourceforge.net/lists/listinfo/semediawiki-user
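
[Editor's note: James's point about using LRU to put a ceiling on in-memory cache instances can be illustrated with a minimal sketch. This is not SMW's actual PHP implementation; the class name and capacity below are purely illustrative of the bounded-eviction idea.]

```python
from collections import OrderedDict

class BoundedLruCache:
    """Illustrative LRU cache: memory use is capped because inserting
    beyond `capacity` evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

# With capacity 2: after caching "a" and "b", touching "a" and then
# adding "c" evicts "b", keeping the cache at its ceiling.
cache = BoundedLruCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")
cache.set("c", 3)
```

The ceiling is the point: no matter how many intermediary results a rebuild produces, the cache never grows past its configured capacity.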
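
[Editor's note: since Justin's multi-day rebuild keeps being interrupted by deadlocks, one common mitigation is to run rebuildData.php over ID ranges so an interruption only costs one batch. The sketch below is hypothetical: the `-s`/`-e` (start/end ID) and `-d` (delay) options are documented for SMW's rebuildData.php, but the path, batch size, and delay are assumptions for your own setup.]

```shell
# Sketch: walk the object-ID space in batches; a deadlock or restart
# then only loses the current batch, not days of work.
rebuild_in_batches() {
  local batch=$1 max_id=$2 start end
  for (( start=1; start<=max_id; start+=batch )); do
    end=$(( start + batch - 1 ))
    echo "rebuilding IDs ${start}..${end}"
    # On a real wiki, uncomment (path and -d delay are assumptions):
    # php extensions/SemanticMediaWiki/maintenance/rebuildData.php \
    #   -s "$start" -e "$end" -d 50 -v
  done
}

# e.g. Karsten's ~440,000 object IDs in batches of 50,000:
rebuild_in_batches 50000 440000
```

The `-d` delay additionally throttles writes, which can reduce lock contention with live traffic, at the cost of a longer wall-clock run.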