From: Benno S. <be...@ga...> - 2003-08-11 21:19:43
Christian Schoenebeck <chr...@ep...> writes:
> Hmmm, not sure if that's a good idea either :)
> Because you may have to e.g. uncompress the wave stream (not with Akai,
> but at least with gig files). Of course you can also do that within the
> engine, but IMO that wouldn't be that clean and I just thought it would
> be some kind of code overhead to let the engine maintain sample specific
> infos.

Ok, I agree with you when it comes to decompression of compressed formats.
But (being an efficiency-obsessed person by definition) I just wanted to
make sure that the disk reading operations would be as fast as possible.
For example, in evo I avoid any copies and temporary buffers: the data from
disk is read directly into a large ring buffer, and the audio thread
resamples the data straight from that buffer into the output buffer (which
is sent to the sound card).

So as long as your library gives us an efficient read(buffer, numsamples)
function that, in the case of uncompressed samples, just calls the POSIX
read(), I have no objection to letting your library handle the I/O.
Keep in mind that we need lseek() too, since the engine fills all the ring
buffers in chunks (e.g. max 128k samples read per read() call), so we need
to lseek() after each ring buffer gets refilled. (I use sorting to ensure
that the "least filled" buffer is refilled first; this is needed because
the sample rate of each voice can vary dynamically, and it actually works
quite well.)

For compressed data (e.g. GIG compressed WAVs or MP3) I think we have no
choice other than to read a chunk of data into a temporary buffer and then
decompress directly into the buffer supplied by the engine to the library's
read() call. But keep in mind that if you need to manage lots of disk
streams (100 and more) you have to read at least 64k-128k samples at a time
in order to maximize disk bandwidth (otherwise you end up seeking around
too much). So when you design those decompression routines you need to take
this into account: give the engine the possibility to tune these values
(max read size), since we will probably need to change them dynamically in
certain situations. For example, if all stream ring buffers are only
slightly filled we risk an audio dropout, so it is better to fill them all
up quickly with a bit of data rather than filling one single buffer with a
large amount of data while the other buffers underrun.

Anyway, triggering a large library of MP3s with sub-5 msec latency and
streaming them from disk will be a bonus we get for free, since we abstract
the decompression within the sample loading lib (as you said, the engine
will know nothing about the compression of the sample).

> Regarding the amount of cached samples: It doesn't make sense to me not
> to cache all sample heads, at least all those samples which belong to a
> program

Ok, but probably almost all samples belong to a program, so this kind of
optimization would not buy us much. Anyway, if you implement it, even
better!

BTW: add a function where the engine can specify the amount of samples that
are cached in RAM. That way the user (or the engine itself) can tune it to
best suit the hardware characteristics / performance needs (e.g. if your
box is low on RAM, make the cache value smaller; if you have plenty,
increase it to give better stability and to alleviate the stress put on the
disk).

cheers,
Benno

-------------------------------------------------
This mail sent through http://www.gardena.net
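
As a rough illustration of the interface described in the mail (a
read(buffer, numsamples) call that, for uncompressed data, maps directly
onto POSIX read()/lseek(), plus a disk thread that refills the least-filled
ring buffer first in bounded chunks), here is a minimal C++ sketch. The
class and function names (SampleStream, VoiceStream, RefillPass, SetPos)
are invented for illustration only; this is not the actual libgig or
LinuxSampler API.

    // Sketch only: for uncompressed data, SetPos()/Read() are thin wrappers
    // around lseek()/read(), so the engine's chunked refills stay copy-free.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <vector>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    // One open, uncompressed sample stream (16-bit mono assumed for brevity).
    class SampleStream {
    public:
        explicit SampleStream(const char* path) { fd = open(path, O_RDONLY); }
        ~SampleStream() { if (fd >= 0) close(fd); }

        // Seek to an absolute frame position.
        bool SetPos(off_t frame) {
            return lseek(fd, frame * (off_t)sizeof(int16_t), SEEK_SET) != (off_t)-1;
        }

        // Read up to 'frames' frames directly into the caller's buffer;
        // returns the number of frames actually read.
        ssize_t Read(int16_t* buf, size_t frames) {
            ssize_t n = read(fd, buf, frames * sizeof(int16_t));
            return n < 0 ? n : n / (ssize_t)sizeof(int16_t);
        }

    private:
        int fd = -1;
    };

    // Engine-side view of one voice's stream buffer (simplified: a linear
    // buffer with a fill level, no wrap-around handling).
    struct VoiceStream {
        SampleStream* stream;
        off_t         nextFrame;   // next frame to fetch from disk
        size_t        fillLevel;   // frames buffered ahead of playback
        size_t        capacity;    // ring buffer size in frames
    };

    // One refill pass of the disk thread: serve the least-filled buffer
    // first, reading at most maxChunk frames per voice, with one lseek()
    // per refill as described in the mail.
    void RefillPass(std::vector<VoiceStream*>& voices, size_t maxChunk) {
        std::sort(voices.begin(), voices.end(),
                  [](const VoiceStream* a, const VoiceStream* b) {
                      return a->fillLevel < b->fillLevel;
                  });
        std::vector<int16_t> scratch(maxChunk);  // stand-in for the real ring buffer
        for (VoiceStream* v : voices) {
            size_t want = std::min(maxChunk, v->capacity - v->fillLevel);
            if (want == 0) continue;
            v->stream->SetPos(v->nextFrame);
            ssize_t got = v->stream->Read(scratch.data(), want);
            if (got > 0) {
                v->nextFrame += got;
                v->fillLevel += (size_t)got;
            }
        }
    }

    int main() {
        std::puts("refill sketch compiled");
        return 0;
    }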
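
In the same spirit, a short hypothetical sketch of the two tunables the
mail asks for: the amount of sample data cached in RAM per sample head, and
the maximum read size per refill, with the engine shrinking the chunk size
when all ring buffers are running low. StreamingConfig and
ChooseChunkFrames are made-up names, not part of any existing library.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    // The two tunables the engine should be able to set at runtime.
    struct StreamingConfig {
        size_t cachedFramesPerSampleHead = 65536;  // sample head kept in RAM
        size_t maxReadChunkFrames        = 131072; // upper bound per read() refill
    };

    // Policy sketch: when the average fill level drops, shrink the chunk so
    // every stream gets topped up quickly instead of one stream getting a
    // huge read while the others underrun.
    size_t ChooseChunkFrames(const StreamingConfig& cfg,
                             size_t avgFillFrames, size_t ringCapacityFrames) {
        const bool lowWater = avgFillFrames < ringCapacityFrames / 4;
        return lowWater ? std::max<size_t>(cfg.maxReadChunkFrames / 8, 4096)
                        : cfg.maxReadChunkFrames;
    }

    int main() {
        StreamingConfig cfg;
        // Buffers nearly empty: fall back to small, fair refills.
        std::printf("low water chunk: %zu frames\n",
                    ChooseChunkFrames(cfg, 8000, 131072));
        // Buffers comfortably filled: use the full chunk for disk bandwidth.
        std::printf("normal chunk:    %zu frames\n",
                    ChooseChunkFrames(cfg, 100000, 131072));
        return 0;
    }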