tune the vector for random I/Os

2008-07-31
  • Hi,

    I am implementing an algorithm where at some point I need to do many random I/Os within a vector. Is there a way to tune the vector in order to reduce the overhead?
    Each element in the vector consists of a couple of integers and a static array. Currently I have the block size (B) as small as possible, just enough to fit the array, so that an element access doesn't read extra data. I also reduced the number of pages (2) and blocks_per_page (1), and that seems to have sped it up a little. However, it is still slow... which I know is to be expected, since we are doing random I/Os, but I wonder if there is a way to improve it a little more.
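
    For reference, that kind of tuning goes through the vector generator's template parameters. A minimal sketch, assuming the 1.x `stxxl::VECTOR_GENERATOR` interface; the element layout and the exact sizes here are made up:

    ```cpp
    #include <stxxl/vector>

    // Hypothetical element: a couple of integers plus a static array.
    struct Element {
        int id, parent;
        int payload[62];   // fixed-size array; 256 bytes total, purely illustrative
    };

    // blocks_per_page = 1, pages = 2, block size just big enough for a few
    // elements (parameter order assumed: value type, blocks per page,
    // cached pages, block size in bytes).
    typedef stxxl::VECTOR_GENERATOR<Element, 1, 2, 4096>::result vector_type;
    ```
    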

    btw, I am using stxxl 1.1.0; I don't know whether the newer release has improvements that might help me.

    thanks

    -- Adan Cosgaya

    • Roger Dahl
      2008-07-31

      Hi there,

      Is it possible for you to sort your vector to increase the locality of reference in your accesses? For instance, you might be able to add an index to your records and then sort them by your integers. When you're done with the accesses, you could sort them back to their original order using the indexes.
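
      The index-and-sort-back idea might look roughly like this (a plain std::vector is used here just to show the pattern, and `key` stands in for whichever integer you would sort on):

      ```cpp
      #include <algorithm>
      #include <cstddef>
      #include <vector>

      struct Record {
          int key;            // the integer you want locality on
          std::size_t index;  // original position, added for the round trip
      };

      // Tag each record with its position, sort by key, (do the now-local
      // accesses), then restore the original order via the saved indexes.
      std::vector<Record> sorted_roundtrip(std::vector<Record> v) {
          for (std::size_t i = 0; i < v.size(); ++i) v[i].index = i;

          std::sort(v.begin(), v.end(),
                    [](const Record& a, const Record& b) { return a.key < b.key; });

          // ... sequential, cache-friendly accesses would happen here ...

          std::sort(v.begin(), v.end(),
                    [](const Record& a, const Record& b) { return a.index < b.index; });
          return v;
      }
      ```
      
      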

      Roger

    • Hi,

      I am afraid not; my vector represents a tree stored level by level, and the traversal (the element accesses) may follow any path.

      -- Adan

    • Roger Dahl
      2008-07-31

      How about some type of "divide and conquer" strategy where you split your problem into parts that can be done in memory? For instance, if you are traversing the tree many times, maybe you can store part of the tree in memory, run through that part many times, store the results and then go on to the next part.

      You said that you needed to do random accesses but remember that your accesses are not truly random. There's some kind of method and pattern to the way you access your data. Keeping that in mind, maybe you can find some way to restructure your problem so that it can be solved with less "random" access.
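
      One way to sketch that restructuring (the names and the chunking scheme here are made up, with a plain std::vector standing in for the external-memory one): group the pending lookups by which fixed-size chunk of the vector they touch, so each chunk is visited once instead of being hit at random.

      ```cpp
      #include <cstddef>
      #include <vector>

      // Answer point queries against a big array by bucketing the queries
      // per chunk, then visiting each chunk once and answering all of its
      // queries while it is "in memory".
      std::vector<int> chunked_lookup(const std::vector<int>& data,
                                      const std::vector<std::size_t>& queries,
                                      std::size_t chunk_size) {
          std::size_t num_chunks = (data.size() + chunk_size - 1) / chunk_size;

          // One pass over the queries: bucket query positions by chunk.
          std::vector<std::vector<std::size_t>> per_chunk(num_chunks);
          for (std::size_t qi = 0; qi < queries.size(); ++qi)
              per_chunk[queries[qi] / chunk_size].push_back(qi);

          // Visit each chunk once and answer everything that falls in it.
          std::vector<int> answers(queries.size());
          for (std::size_t c = 0; c < num_chunks; ++c)
              for (std::size_t qi : per_chunk[c])
                  answers[qi] = data[queries[qi]];
          return answers;
      }
      ```

      This doesn't reduce the amount of work, but it turns scattered reads into per-chunk sequential ones, which is exactly what external memory rewards.
      
      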

      Roger

    • Hi Adan,

      To get a considerable speedup you should avoid random access (Roger is absolutely right). You have already tuned the block/page sizes; this should help reduce the cost of a single vector cache miss. However, in your situation I recommend increasing the vector cache size to reduce the number of misses; right now it is just 2 pages. Try (much) larger values.
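
      The effect of a bigger cache can be illustrated with a toy LRU model (this is only a simulation, not STXXL's actual pager): on the same access trace, an LRU cache with more pages never incurs more misses, and for random accesses it typically incurs far fewer.

      ```cpp
      #include <algorithm>
      #include <cstddef>
      #include <list>
      #include <vector>

      // Count cache misses for a trace of page accesses under LRU with a
      // given number of cached pages (a rough model of the vector's page cache).
      std::size_t lru_misses(const std::vector<int>& trace, std::size_t pages) {
          std::list<int> cache;  // front = most recently used
          std::size_t misses = 0;
          for (int p : trace) {
              std::list<int>::iterator it =
                  std::find(cache.begin(), cache.end(), p);
              if (it == cache.end()) {
                  ++misses;
                  if (cache.size() == pages) cache.pop_back();  // evict LRU page
              } else {
                  cache.erase(it);  // hit: move the page to the front
              }
              cache.push_front(p);
          }
          return misses;
      }
      ```
      
      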

      Versions 1.1.0 and 1.2.0 do differ in vector performance. I recommend 1.2.0, since it is less buggy and there are also some performance improvements in stxxl::sort and some other components.

      Best regards,
      Roman