From: Chris C. <ca...@al...> - 2002-02-02 00:46:09
Guillaume Laurent wrote:
> Wow, that's impressive

Test code attached. Compile it with -DMAKE_STRINGS and it actually constructs the strings (all those appends are slow too, so this is rather deceptive in terms of time taken to read the file); compile it without, and it just does some trivial integer arithmetic, giving a much more accurate idea of how long the file read alone takes.

With optimisation, I get about 14s to read the file one byte at a time and 400ms to read it in chunks, if I don't build the strings. Note that the MAKE_STRINGS code does some stupid hack to ensure the correct bytes get appended when reading in chunks, even if nulls are encountered. Surely not the best way, but arguably we shouldn't be reading binary data into strings anyhow. (You'll need a /tmp/test.dat of at least 30 million bytes.)

Andrew Sutton wrote:
> it's simple math. a 30 meg
> file is 3*(2^20) or 3145728 bytes. reading 1 byte at a time that's 3.1
> million function calls

Well, you called it "simple". You're out by a zero -- 30 meg is 30*(2^20), approximately 30 million bytes, and thus 30 million function calls, not 3 million. And with that many calls, you may even be right -- allow a few dozen clock cycles per function call and you already have several seconds to do 30 million of them on this (poxy) 300MHz machine.

Chris