Re: [GD-General] Re: asset & document management
From: Neil S. <ne...@r0...> - 2003-05-19 16:13:40
> I'd think the time required to grab huge files from a server is comparable
> to the time required to get the much smaller diffs and apply them to the
> existing file. We've found that even complex stuff like LZ-like compression
> makes disk accesses faster, not slower, because it's so fast to decompress
> and still gives good space savings. So applying a diff is a doddle, surely?

Loading compressed files is faster because the hard disk access is the
limiting factor, not the cost of decompressing the data. Writing compressed
files can also be faster, depending on the cost of the dictionary search and
the resulting compression factor. Perforce does compress its files, which
suggests they found both reading and writing to be fast enough.

When applying diffs, you are actually reading more from disk than the size of
the final file, not less, and the amount of extra disk access depends on how
many diffs you have to apply to reach the required version of the file. Going
further back therefore takes more and more hard disk access and gets slower.
Applying a few diffs onto an almost complete file should be pretty quick,
though, which is why I was suggesting keeping intermittent, complete copies of
each file. It should be easy to find a tradeoff where both the space savings
and the performance are good.

- Neil.
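P.S. To make the "intermittent complete copies" idea concrete, here is a rough
sketch in Python of the kind of keyframe-plus-deltas store I mean. It is only
illustrative: the class names and the delta format are made up, not taken from
Perforce or any other tool, but it shows why the cost of fetching a version is
bounded by the snapshot spacing.

# Toy sketch only: keep a full snapshot every SNAPSHOT_INTERVAL revisions
# and store the revisions in between as small deltas against the previous
# revision. Fetching a version then costs one snapshot read plus at most
# SNAPSHOT_INTERVAL - 1 delta applications. The delta format here is made up.

SNAPSHOT_INTERVAL = 10

def apply_delta(lines, delta):
    # A delta is a list of ("ins", index, text) / ("del", index, None) edits,
    # applied in order against a list of lines.
    result = list(lines)
    for op, index, text in delta:
        if op == "ins":
            result.insert(index, text)
        elif op == "del":
            del result[index]
    return result

class ToyStore:
    def __init__(self):
        self.snapshots = {}  # revision -> complete copy (list of lines)
        self.deltas = {}     # revision -> delta against revision - 1

    def add(self, rev, lines=None, delta_from_prev=None):
        if rev % SNAPSHOT_INTERVAL == 0 or delta_from_prev is None:
            self.snapshots[rev] = list(lines)    # intermittent complete copy
        else:
            self.deltas[rev] = delta_from_prev   # small diff only

    def checkout(self, rev):
        # Walk back to the nearest complete copy, then replay diffs forward.
        base = rev
        while base not in self.snapshots:
            base -= 1
        lines = list(self.snapshots[base])
        for r in range(base + 1, rev + 1):
            lines = apply_delta(lines, self.deltas[r])
        return lines

store = ToyStore()
store.add(0, ["alpha", "beta"])
store.add(1, delta_from_prev=[("ins", 2, "gamma")])
print(store.checkout(1))   # ['alpha', 'beta', 'gamma']

With a spacing of N revisions, the worst case is one full read plus N-1 diff
applications, so you tune N to balance disk space against retrieval speed.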