From: Kal A. <ka...@te...> - 2003-02-07 15:01:07
Hi Rene,

At 14:45 07/02/2003 +0100, Ren...@da... wrote:
>Hi Kal,
>
>I just wanted to delete some topics from a topic map with 2000 topics in it -
>with the API function topic.destroy().
>In some cases I have to delete nearly all topics (I apply some filters on it)
>and I wondered why I had to wait more than 10 minutes.
>I took the time of a single destroy() and it was about 300 ms on a 700 MHz
>machine...

Ouch! Is this with the in-memory backend or one of the persistent stores?

Without running any profiling, my guess is that this performance hit comes
because the code checks that the topic you are removing is not being used as
a topic, association, occurrence, or role type, or in a scope. Because that
information has to be looked up from the indexes, you get hit with the cost
of those lookups on every destroy().

There are a number of ways this could be fixed:

1) Add a reference count to Topic objects - so a Topic can only be destroyed
   if its ref count is 0.
2) Consolidate all topic-use information into a single index - so the cost is
   only one lookup, not five.

(2) is probably easier to implement and would not involve changing the
database schema, but (1) would be more efficient. In fact, I could probably
implement (1) for the Ozone and in-memory backends and make the implementation
of the getRefCount() method on the RDBMS backend use a query... that would be
almost the best of both worlds (there is a rough sketch of the idea in the
P.S. below).

But before I get into making these changes, I would like to get a bit of
profiling on your problem. If you have time to do that for me, that would be
really helpful, as I am rather pressed for time at the moment.

Cheers,

Kal
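P.S. In case it helps make option (1) concrete, here is a minimal,
self-contained sketch of the reference-count idea. The names
(RefCountedTopic, addReference(), TopicInUseException, and so on) are
invented purely for illustration and are not the existing TM4J interfaces;
only getRefCount() corresponds to the method I mentioned above.

// Hypothetical sketch of option (1): a reference-counted topic.
// None of these names are the real TM4J API.
public class RefCountedTopic {

    /** Hypothetical exception for destroying a topic that is still in use. */
    public static class TopicInUseException extends RuntimeException {
        public TopicInUseException(String message) {
            super(message);
        }
    }

    private final String id;

    // Number of places (type references, scopes, ...) that currently use this
    // topic. Maintained incrementally whenever such a use is added or removed,
    // so destroy() never has to consult the indexes.
    private int refCount = 0;

    public RefCountedTopic(String id) {
        this.id = id;
    }

    /** Called whenever this topic starts being used as a type or in a scope. */
    void addReference() {
        refCount++;
    }

    /** Called whenever such a use goes away. */
    void removeReference() {
        refCount--;
    }

    /**
     * On the in-memory and Ozone backends this could simply return the
     * counter; on the RDBMS backend it could be implemented as a query instead.
     */
    public int getRefCount() {
        return refCount;
    }

    /** destroy() becomes a single counter check instead of five index lookups. */
    public void destroy() {
        if (getRefCount() != 0) {
            throw new TopicInUseException(
                "Topic " + id + " is still used in " + getRefCount() + " place(s)");
        }
        // ... actual removal from the topic map would happen here ...
    }

    public static void main(String[] args) {
        RefCountedTopic topic = new RefCountedTopic("person");
        topic.addReference();                      // e.g. used as the type of another topic
        System.out.println(topic.getRefCount());   // prints 1
        topic.removeReference();                   // that use is removed
        topic.destroy();                           // now succeeds without any index lookups
    }
}

On the RDBMS backend, getRefCount() would instead run a query over the tables
that can reference a topic, so the counter itself never has to be stored and
the database schema stays unchanged.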