I was experimenting some more: I made a simple file system based on the example filesystem, except it just reads/writes to another folder on a reiserfs partition, so in effect a do-nothing FUSE pass-through layer on top of reiserfs. I ran 'dd if=/dev/zero of=/tmp/passthroughfs count=100000 bs=4K', and the problem is that when the file reaches a size of around 240 MB on my machine, the write suddenly blocks for quite a long time, many seconds. A dd with a smaller 'count', like 40000 4K blocks, runs fast, but once the number of blocks gets too big something slows everything down.
As a further data point: the dd of 100000 4K blocks took around 130 seconds to finish, whereas four separate dd runs of 25000 4K blocks each took about 30 seconds in total for all four. (They were executed one after another, with only about a one-second delay in between.)
So can you tell me what is going on at this threshold? Is it that the kernel's page cache fills up and writeback only starts being flushed at that point, i.e. everything I write sits in the kernel's page cache until then? But if that is the case, why does a normal 'dd' with the same parameters straight into the underlying reiserfs not suffer from this?
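To check the page-cache hypothesis, one thing that might help is watching the kernel's dirty-writeback counters while the big dd runs. This is only a sketch assuming a Linux system with /proc mounted; the /tmp/passthroughfs target is from my test above, and the exact tunable names may vary between kernel versions.

```shell
#!/bin/sh
# Show the thresholds (as a percentage of memory) at which the kernel
# starts background writeback and at which writers start blocking.
cat /proc/sys/vm/dirty_background_ratio
cat /proc/sys/vm/dirty_ratio

# Show how much dirty (not yet written back) data is currently cached.
# Watching these values climb while the 100000-block dd runs, and seeing
# the stall begin roughly when the blocking threshold is reached, would
# support the "page cache fills up, then writeback kicks in" theory.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

Running the grep in a loop (e.g. with 'watch') alongside the dd should make it obvious whether the ~240 MB stall coincides with the dirty limit being hit.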