From: <be...@ga...> - 2003-10-29 13:24:39
Mark Knecht <mar...@co...> writes:

> > From a performance point of view, if you stream from a single file
> > or a bunch of WAV files it makes virtually no difference (even if
> > you open and close the file at each triggering of the sample, it's
> > very fast; I did some benchmarks and my conclusion is that the
> > performance is exactly the same).
>
> I know nothing, well, less than nothing if possible, about disk
> fragmentation on Linux, but wouldn't a collection of files be less
> likely to be guaranteed to be collected together on the drive in a
> group?
>
> If this is true, then wouldn't individual wave files be more likely
> to have playback problems due to disk seek latencies for a single
> library?
>
> I completely get that if the drive is slow, *some* library will be
> placed in the slow position, and that library would have problems,
> but for debugging hardware problems and making your system work well,
> after adding and deleting libraries for a year, wouldn't it be better
> to somehow guarantee that all samples are collected together in a
> contiguous disk area?

Well, the fact is that most of the WAV files will reside in contiguous
blocks anyway. Of course the individual WAV files could lie far apart,
but if you take a 2GB file, the first block and the last block are
quite distant from each other even if the whole file resides in
contiguous blocks. And what happens if you trigger two notes that cause
one sample in the first part of the 2GB file and one in the last part
to be read? The disk will have to seek back and forth like mad. I see
no reason to organize files in a particular layout, because they are
usually so large that the contiguous-blocks argument does not hold
water.

Of course, if each 4KB block were spread randomly over the disk then
performance would suck, but Linux filesystems usually try to avoid this
by putting large parts of a file in contiguous blocks. This ensures
that read performance stays good even on a full disk, because the
number of disk seeks per unit of time is low compared to the amount of
data read. I would not worry about this in any way.

Of course there are some idiotic cases where fragmentation can slow
down your disk performance; see here:
http://lists.debian.org/debian-user/2003/debian-user-200302/msg02363.html

I don't know whether a "defrag-like" utility for Linux exists yet, but
as stated in the link above, if you really have concerns you can back
up all your data to another medium (HD, CD-ROM, tape, etc.), reformat
your data disk and copy the data back again. Not sure it is worth the
trouble.

Anyway, think about how many variables come into play when you let LS
play a complex MIDI file that streams many sample libraries from disk.
The exact disk seek sequence cannot be figured out in advance, because
it depends on the task scheduling, the sample pitches (higher-pitched
samples must be streamed faster and thus need faster buffer refills),
background tasks, the characteristics and geometry of the disk, etc.
But as long as the filesystem keeps large files in chunks of contiguous
blocks of 128KB or more, LS performance will be perfectly fine and, I
guess, virtually equal to that of a totally unfragmented disk.

cheers,
Benno

http://www.linuxsampler.org

-------------------------------------------------
This mail sent through http://www.gardena.net
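
To make the open/close-per-trigger point quoted above concrete, here is
a minimal C sketch of the kind of timing loop one could use. It is a
hypothetical example, not the benchmark actually run; the file name,
chunk size and iteration count are arbitrary assumptions.

/* Time repeated open/read/close cycles on a sample file, as a rough
 * proxy for re-opening a WAV file at each note trigger.
 * Hypothetical sketch; path, chunk size and count are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "sample.wav";
    const size_t chunk = 64 * 1024;      /* read 64 KB per "trigger" */
    const int iterations = 1000;
    char *buf = malloc(chunk);
    struct timespec t0, t1;

    if (!buf) return 1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iterations; i++) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        if (read(fd, buf, chunk) < 0) { perror("read"); return 1; }
        close(fd);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("%d open/read/close cycles: %.2f ms total, %.3f ms each\n",
           iterations, ms, ms / iterations);
    free(buf);
    return 0;
}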
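
The "seek back and forth" scenario can be illustrated the same way: the
sketch below alternates 128KB reads between the start and the end of
one large file, which is roughly what two simultaneously triggered
notes living at opposite ends of a 2GB sample file would force the disk
to do. The path and sizes are assumptions, and a meaningful measurement
would need the page cache dropped (or O_DIRECT used) first.

/* Alternate chunk reads between two distant regions of one large file,
 * imitating two streaming voices at opposite ends of a sample library.
 * Hypothetical sketch; file name, chunk size and refill count are
 * assumptions, and page-cache effects are ignored. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "big_samplelib.wav";
    const size_t chunk = 128 * 1024;             /* 128 KB per refill */
    const int refills = 200;
    char *buf = malloc(chunk);
    struct timespec t0, t1;

    if (!buf) return 1;
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    off_t end = lseek(fd, 0, SEEK_END);
    if (end < (off_t)(2 * refills * chunk)) {
        fprintf(stderr, "file too small for this demo\n");
        return 1;
    }
    off_t low = 0, high = end - (off_t)chunk;    /* two distant spots */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < refills; i++) {
        off_t pos = (i & 1) ? high : low;        /* alternate regions */
        if (pread(fd, buf, chunk, pos) < 0) { perror("pread"); return 1; }
        if (i & 1) high -= chunk; else low += chunk;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("%d alternating %zu KB reads: %.1f ms\n",
           refills, chunk / 1024, ms);
    close(fd);
    free(buf);
    return 0;
}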
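
And to illustrate why higher-pitched samples need faster buffer
refills: a sample transposed up by n semitones consumes source frames
2^(n/12) times faster, so each voice's streaming bandwidth scales with
the pitch ratio. A small sketch with assumed sample-rate and frame-size
values (not LS internals); compile with -lm:

/* Per-voice disk bandwidth as a function of pitch transposition.
 * The 44.1 kHz rate and 16-bit stereo frame size are assumptions. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double sample_rate = 44100.0;  /* source frames per second */
    const int    frame_bytes = 2 * 2;    /* 16-bit stereo            */

    for (int semitones = -12; semitones <= 12; semitones += 12) {
        double ratio = pow(2.0, semitones / 12.0);   /* pitch ratio  */
        double bytes_per_sec = sample_rate * ratio * frame_bytes;
        printf("%+3d semitones: ratio %.3f -> %.0f KB/s per voice\n",
               semitones, ratio, bytes_per_sec / 1024.0);
    }
    return 0;
}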