Hi. First of all, I have to say that I'm quite impressed by how nicely the whole thing works. I am currently simulating a system with >80 species and >130 reactions, and it still needs almost no memory.
Still, I was wondering why you implemented the event queue as a binary heap. As far as I understand, the operations executed most frequently are extracting the minimum, deleting updated subvolumes from the heap, and reinserting the updated and resampled ones. For a binary heap those are all O(log n) operations. Have you tried another structure, such as a Fibonacci or pairing heap? At least the insert could be sped up to O(1) that way, and building the initial heap from n inserts would then cost O(n) total instead of O(n log n) (though, to be fair, a binary heap can also be built bottom-up in O(n)). I don't know whether the speedup would be significant, but if the whole thing became only twice as fast that would already be quite a gain, given the long simulation times. Another advantage would be O(1) merging, which could be used to parallelize the computation (if one could come up with a smart synchronization scheme, since diffusion makes the domain hard to split up).
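To make the suggestion concrete, here is a minimal pairing-heap sketch in Python. This is purely illustrative (I don't know what language the simulator is written in, and the names here are my own); it shows the two properties I mean: insert and meld are O(1), while extract-min does the deferred work in amortized O(log n) via the standard two-pass pairing.

```python
class PairingHeap:
    """Minimal pairing heap: O(1) insert/meld, amortized O(log n) extract-min."""

    class Node:
        __slots__ = ("key", "item", "children")

        def __init__(self, key, item):
            self.key = key          # event time of the subvolume
            self.item = item        # subvolume identifier (illustrative)
            self.children = []

    def __init__(self):
        self.root = None

    def _meld(self, a, b):
        # Link two heap roots: the larger key becomes a child of the smaller.
        if a is None:
            return b
        if b is None:
            return a
        if b.key < a.key:
            a, b = b, a
        a.children.append(b)
        return a

    def insert(self, key, item):
        # O(1): a single comparison against the current root.
        self.root = self._meld(self.root, self.Node(key, item))

    def meld(self, other):
        # O(1): merge another heap in, e.g. queues from parallel workers.
        self.root = self._meld(self.root, other.root)
        other.root = None

    def extract_min(self):
        # Amortized O(log n): pair children left-to-right, then meld right-to-left.
        old_root = self.root
        kids = old_root.children
        paired = [
            self._meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
            for i in range(0, len(kids), 2)
        ]
        new_root = None
        for h in reversed(paired):
            new_root = self._meld(new_root, h)
        self.root = new_root
        return old_root.key, old_root.item
```

One caveat worth noting: deleting or rescheduling an arbitrary subvolume (the other frequent operation) still needs a handle to its node and costs amortized O(log n) in a pairing heap as well, so the win would come mainly from the cheaper inserts and melds, not from the deletions.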