>> Based on Ken's comments about needing to load the file in anyway, it occurs
>> to me that maybe you could integrate the checksum code with the code that
>> loads the file in. If you read in the file in 64K chunks, you could checksum
>> each chunk. If you do this, benchmark it against the current version and dump
>> it if it isn't substantially faster (and maybe dump it anyway in the interest
>> of readability if it only takes 0.1s).
>
>> It has been a long while since I looked at mknbi, so this idea might not
>> come anywhere close to useful. But since I had it, I thought I would
>> share it. :^)
>
>Always useful. And one of the points of a public discussion.
Actually, Don's suggestion is valid. If you look at mknbi, it copies the
data in 4kB chunks, so you could call the checksum routine once per
chunk and accumulate the result; there's no need for a substr, because
each chunk is already known to be <= 4kB long.
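To make the accumulate-per-chunk idea concrete, here is a minimal sketch (in Python rather than mknbi's Perl, and using a 16-bit ones-complement sum as a stand-in; mknbi's actual checksum routine may differ). The point is that folding each chunk into a running accumulator gives the same result as checksumming the whole image at once:

```python
CHUNK = 4096  # mknbi copies data in 4 kB chunks


def add_chunk(acc: int, chunk: bytes) -> int:
    """Fold one chunk into a running 16-bit ones-complement sum.

    Only the final chunk may be odd-length; it is padded with a zero
    byte so 16-bit words stay aligned across chunk boundaries.
    """
    if len(chunk) % 2:
        chunk += b"\x00"
    for i in range(0, len(chunk), 2):
        acc += (chunk[i] << 8) | chunk[i + 1]
        acc = (acc & 0xFFFF) + (acc >> 16)  # fold the carry back in
    return acc


def checksum_stream(read_chunk) -> int:
    """Checksum a stream chunk by chunk; read_chunk() returns b'' at EOF."""
    acc = 0
    while (chunk := read_chunk()):
        acc = add_chunk(acc, chunk)
    return (~acc) & 0xFFFF
```

Because the carry is folded back in after every word, splitting the input at any even offset does not change the final value, which is what lets the checksum ride along with the existing chunked copy loop.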
The trick will be how to insert the .note record. If it has to come
before the data, you may have to reserve the record in the output,
remember its offset, then come back and fill it in. If it comes after
the data, then there's no problem of course.
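The reserve-the-record-then-backpatch case could look something like this sketch (again Python for illustration; the 4-byte field and the simple additive checksum are invented stand-ins, not the real ELF .note layout):

```python
import io
import struct

out = io.BytesIO()

out.write(b"HDR!")                  # earlier header material
note_offset = out.tell()            # remember where the record will go
out.write(b"\x00" * 4)              # reserve space for the checksum field

data = b"payload bytes..."
out.write(data)                     # write the data, checksumming as we go
checksum = sum(data) & 0xFFFFFFFF   # stand-in for the real checksum

out.seek(note_offset)               # come back and fill in the record
out.write(struct.pack(">I", checksum))
out.seek(0, io.SEEK_END)            # restore position for further output
```

This only works if the output is seekable, which is the usual trade-off with the pattern; if the record can come after the data, a single forward pass suffices.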
Hopefully all this can be done in Elf.pm if I got the OO design right,
otherwise the interface may need to be tweaked a bit.