|
From: Christian S. <chr...@ep...> - 2003-08-11 12:21:51
|
Hi!

I thought about adding mmap() support as a compile-time option to the Akai and Giga libs. I would also recommend placing the sample head cache (for the first couple of sample chunks, to allow instant triggering of the samples) directly into those libs, so the engine doesn't have to care about maintaining such things. What do you think?

Regarding the Gigasampler lib: I have only finished the RIFF classes so far, as I'm waiting for the DLS2 specs. I always put the current version here: http://stud.fh-heilbronn.de/~cschoene/projects/libgig

CU
Christian |
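[Editor's note] The mmap() option Christian proposes maps the file read-only so sample data is paged in on demand rather than copied through read(). A minimal, hedged sketch of what such an option might wrap — `MappedFile` and `map_file` are hypothetical illustration names, not libgig's API:

```cpp
#include <cstddef>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Hypothetical result of mapping a sample file into memory.
struct MappedFile {
    void*  data = nullptr;
    size_t size = 0;
};

// Map a whole file read-only; returns {nullptr, 0} on failure.
MappedFile map_file(const char* path) {
    MappedFile mf;
    int fd = open(path, O_RDONLY);
    if (fd < 0) return mf;
    struct stat st;
    if (fstat(fd, &st) == 0 && st.st_size > 0) {
        void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p != MAP_FAILED) {
            mf.data = p;
            mf.size = static_cast<size_t>(st.st_size);
        }
    }
    close(fd);  // the mapping remains valid after the fd is closed
    return mf;
}
```

A compile-time switch would select between this path and a plain read()-based one.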
|
From: Benno S. <be...@ga...> - 2003-08-11 13:44:08
|
Scrive Christian Schoenebeck <chr...@ep...>:
> Hi!
>
> I thought about adding mmap() support as a compile time option to the Akai
> and
> Giga libs. I would also recommend to place the sample head cache (for the
> first couple of sample chunks to allow instant triggering of the samples)
> directly into those libs, so the engine doesn't have to care about
> maintaining such things. What do you think?
>
Hmm, not sure if this would be a good idea (that the lib handles the caching
of the first samples).
The fact is that the playback engine will need to know about the cached
size, the offset where to switch from RAM to disk-based playback, etc.
anyway. Plus, the engine could decide to cache different amounts of samples
depending on some factors, thus I think it is better if your lib just
interprets all chunks and returns data structures describing the data
contained in them.
As for the chunks containing the actual samples, it is sufficient to
export the offset (relative to the file) where they begin and the total
length.
Basically linuxsampler should do the following:
handle = libgig->parsefile("file.gig");
// now the engine reads the structs exported by libgig
traverse_datastructs_of_parsed_chunks(handle);
// the file can be closed, since the engine will do its own file handling
// (could be via read() or mmap()), thus IMHO it is better to let
// libgig close its file handles
libgig->closefile(handle);
fd = engine->open("file.gig"); // regular POSIX open() call
// the engine now has all the data it needs to play the sample:
// sample attribute data structs and sample offsets within files, stored
// in GIG chunks accessible via libgig->handle...
sampler_playback_main_loop();
Is this ok for you, Christian?
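[Editor's note] Benno's proposed division of labor can be sketched in C++. Every name here (`SampleInfo`, `parse_file`) is hypothetical and the parsed data is fabricated for illustration; the point is only that the loader exports plain offset/length structs and the engine then does its own POSIX I/O:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical metadata the loader would export per sample:
// where the raw wave data lives inside the .gig file.
struct SampleInfo {
    std::string name;
    uint64_t    file_offset;   // byte offset of the wave data in the file
    uint64_t    total_frames;  // length of the sample in frames
};

// Hypothetical loader result: the engine traverses this, the loader
// closes its own file handles, and the engine reopens the file itself.
struct ParsedFile {
    std::vector<SampleInfo> samples;
};

// Stand-in for libgig->parsefile(): fabricates two entries for the demo.
ParsedFile parse_file(const std::string& /*path*/) {
    return ParsedFile{{
        {"kick",  4096,   48000},
        {"snare", 200704, 96000},
    }};
}
```

After traversing these structs the engine would open("file.gig") itself and lseek()/read() at each `file_offset`.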
cheers,
Benno
http://linuxsampler.sourceforge.net
-------------------------------------------------
This mail sent through http://www.gardena.net
|
|
From: Christian S. <chr...@ep...> - 2003-08-11 15:38:09
|
On Monday, 11 August 2003 at 15:44, Benno Senoner wrote:
> Scrive Christian Schoenebeck <chr...@ep...>:
> > Hi!
> >
> > I thought about adding mmap() support as a compile time option to the
> > Akai and Giga libs. I would also recommend to place the sample head
> > cache (for the first couple of sample chunks to allow instant
> > triggering of the samples) directly into those libs, so the engine
> > doesn't have to care about maintaining such things. What do you think?
>
> Hmm not sure if this would be a good idea (that the lib handles the
> caching of the first samples).
> The fact is that the playback engine will need to know about the
> cached size, offset where to switch from ram to disk based playback etc
> anyway, plus the engine could decide to cache different amounts of
> samples depending on some factors thus I think it is better if your lib
> just interprets all chunks and returns data structures describing the
> data contained in them.
> As for the chunks containing the actual samples it is sufficient to
> export the offset (relative to the file) where they begin and the total
> length.

Hmmm, not sure if that's a good idea either :) Because you may have to e.g. uncompress the wave stream (not with Akai, but at least with gig files). Of course you can also do that within the engine, but IMO that wouldn't be as clean, and I just thought it would be some kind of code overhead to let the engine maintain sample-specific infos.

Regarding the amount of cached samples: it doesn't make sense to me not to cache all sample heads; at least all those samples which belong to a program (/instrument) that is currently in use have to be cached. Or maybe I just don't know what you're getting at.

Christian |
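[Editor's note] Christian's counter-proposal amounts to a uniform read interface owned by the library, so compressed and uncompressed samples look identical to the engine. A hedged sketch with hypothetical classes (not libgig's real ones); only the uncompressed case is shown, a compressed subclass would decode inside `Read()`:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <utility>
#include <vector>

// Hypothetical uniform sample interface: the engine never learns
// whether the data on disk is compressed.
class Sample {
public:
    virtual ~Sample() = default;
    // Fill 'dest' with up to 'frames' decoded frames; return frames read.
    virtual size_t Read(int16_t* dest, size_t frames) = 0;
};

// Uncompressed case: a memory-backed stand-in for a plain posix read().
class RawSample : public Sample {
    std::vector<int16_t> data_;
    size_t pos_ = 0;
public:
    explicit RawSample(std::vector<int16_t> d) : data_(std::move(d)) {}
    size_t Read(int16_t* dest, size_t frames) override {
        size_t n = std::min(frames, data_.size() - pos_);
        std::memcpy(dest, data_.data() + pos_, n * sizeof(int16_t));
        pos_ += n;
        return n;
    }
};
```

The engine only ever calls `Read(buffer, frames)`; a `GigCompressedSample` would override it to decompress into `dest`.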
|
From: Benno S. <be...@ga...> - 2003-08-11 21:19:43
|
Scrive Christian Schoenebeck <chr...@ep...>:
> Hmmm, not sure if that's a good idea either :)
> Because you may have to e.g. uncompress the wave stream (not with Akai,
> but at least with gig files). Of course you can also do that within the
> engine, but IMO that wouldn't be that clean and I just thought it would
> be some kind of code overhead to let the engine maintain sample specific
> infos.

Ok, I agree with you when we consider decompression of compressed formats. But (being an efficiency-obsessed person by definition) I just wanted to make sure that the disk reading operations would be as fast as possible. For example, in evo I avoid any copies and temporary buffers: the data from disk is read directly into a large ring buffer, where the audio thread directly resamples the data, putting it into the output buffer (which is sent to the sound card).

So as long as your library gives us an efficient read(buffer, numsamples) function that in the case of uncompressed samples just calls posix read(), then I have nothing to object to letting your library handle the I/O.

Keep in mind we need lseek() too, since the engine fills all the ring buffers in chunks (e.g. max 128k samples read per read() call), thus we need to lseek() after each ring buffer gets refilled. (I use sorting to ensure that you refill the "least filled" buffer first; this is needed because the sample rate of each voice can vary dynamically, and it actually works quite well.)

For compressed data (e.g. GIG compressed WAVs or MP3) I think we have no choice other than to read a chunk of data into a temporary buffer and then decompress directly into the buffer supplied by the engine to the library's read() call. But keep in mind that if you need to manage lots of disk streams (100 and more), you have to read at least 64k-128k samples at a time in order to maximize disk bandwidth (otherwise you end up seeking around too much).

So when you design those decompression routines you need to take this into account: give the engine the possibility to tune these values (max read size), since we will probably need to change them dynamically in certain situations (e.g. if all stream ring buffers are only slightly filled, we risk an audio dropout, thus it is better to fill them all up quickly with a bit of data rather than filling one single buffer with a large amount of data while the other buffers underrun).

Anyway, triggering a large library of MP3s with sub-5 msec latency and streaming them from disk will be a bonus we get for free, since we abstract the decompression within the sample loading lib (as you said, the engine will know nothing about the compression of the sample).

> Regarding the amount of cached samples: It doesn't make sense to me not
> to cache all sample heads, at least all those samples which belong to a
> program

Ok, but probably almost all samples belong to a program, so this kind of optimization would not buy us too much. Anyway, if you implement it, even better!

BTW: make a function where the engine can specify the amount of samples that are cached in RAM. That way the user (or the engine itself) can tune it to best suit the hardware characteristics / performance needs (e.g. if your box is low on RAM, make the cache value smaller; if you have plenty, increase it to give better stability and to alleviate the stress put on the disk).

cheers,
Benno |
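[Editor's note] Benno's refill policy — service the least-filled ring buffer first, so the voices closest to an underrun get disk bandwidth first — can be sketched like this (hypothetical `Stream` struct; the real engine tracks much more state):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical per-voice disk stream: how many frames are currently
// buffered versus the ring buffer's capacity.
struct Stream {
    int    id;
    size_t filled;
    size_t capacity;
};

// Return the refill order: least-filled (as a fraction of capacity)
// first. Because voice pitches vary dynamically, buffers drain at
// different rates, so the order must be recomputed every refill pass.
std::vector<int> refill_order(std::vector<Stream> streams) {
    std::sort(streams.begin(), streams.end(),
              [](const Stream& a, const Stream& b) {
                  // compare filled/capacity ratios without floating point
                  return a.filled * b.capacity < b.filled * a.capacity;
              });
    std::vector<int> order;
    for (const auto& s : streams) order.push_back(s.id);
    return order;
}
```

The disk thread would then read one bounded chunk (the tunable max read size above) per stream, in this order.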
|
From: Benno S. <be...@ga...> - 2003-08-14 17:00:33
|
Scrive Christian Henz <ch...@gm...>:
> > Ok I agree with you when we consider decompression of compressed
> > formats. But (being an efficiency obsessed person by definition) I
> > just wanted to make sure that the disk reading operations would be as
> > fast as possible. For example in evo I avoid any copies and temporary
> > buffers: the data from disk is read directly into a large ring buffer
> > where the audio thread directly resamples the data, putting it into
> > the output buffer (which is sent to sound card).
>
> But for the real thing you'll need one buffer per voice anyway, right?

Not necessarily: for example, for a MIDI device you usually have an instrument assigned to each channel, thus in the presence of polyphonic instruments many voices get downmixed to the same channel. This means that if you do it right you can just mix the current sample output of each voice into the channel buffer. So yes, my former statement was a bit incorrect: you have more than one output buffer (mixdown buffers), which at the last stage get mixed down to the final sound card audio out buffer.

> > So as long as your library gives us an efficient
> > read(buffer, numsamples) function that in case of uncompressed samples
> > just calls posix read() then I have nothing to object to let your
> > library handle the I/O.
>
> Well, at some point the decoding/de-interleaving/resampling has to
> happen, so why not in the respective loaders? Then you could have a
> unified concept of a sample. Instead of reading raw data, the voices
> would call something like
> sample->read(int channel, sample_t *voice_buffer, int num_samples)

Ok, but the library needs to supply uncompressed samples (a linear block of samples), because the engine has to handle complex ring buffer issues, code that cannot be put within loaders. But the abstraction read(buffer, numsamples) (which internally does decompression, deinterleaving, etc.) is ok, since it allows you to read any sample format you like.
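[Editor's note] The channel mixdown Benno describes — many voices summed into one buffer per MIDI channel, then all channel buffers summed into the sound card output — might look roughly like this (hypothetical helper names, mono float buffers for simplicity):

```cpp
#include <cstddef>
#include <vector>

using Buffer = std::vector<float>;

// Add 'src' sample-by-sample into 'dest' (buffers of equal length).
void mix_into(Buffer& dest, const Buffer& src) {
    for (size_t i = 0; i < dest.size(); ++i) dest[i] += src[i];
}

// voices_per_channel[c] holds the rendered output of every voice
// currently playing on MIDI channel c.
Buffer render(const std::vector<std::vector<Buffer>>& voices_per_channel,
              size_t frames) {
    Buffer out(frames, 0.0f);
    for (const auto& voices : voices_per_channel) {
        Buffer channel(frames, 0.0f);                       // one mixdown buffer per channel
        for (const auto& v : voices) mix_into(channel, v);  // voices -> channel
        mix_into(out, channel);                             // channel -> soundcard buffer
    }
    return out;
}
```

So the buffer count scales with the number of channels, not the number of voices.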
> I've also been thinking about the case where you got several voices
> reading from one (streamed) sample (like when you map a sample across
> several notes or for polyphony on the same note). The sample start would
> of course be buffered globally for all 'instances' of the streamed
> sample, but you'd also need a mechanism of reading/buffering for each
> instance individually.

I thought about the same, but I came to the conclusion that if it is a large sample (e.g. let's say a sample that is 10-20 MB long), you need to stream from different parts of the file anyway (you cannot keep MBs worth of cached sample around just in the hope that it will be accessed at different positions simultaneously), so your method does not pay off. If the samples are short, or the notes are short, then most of the time only the RAM-cached part of the sample will be accessed, requiring no disk accesses anyway; and if there are any disk accesses, the Linux filesystem cache will cache the data for you, thus disk accesses will be mostly avoided. (Just play around with evo by loading large samples and hitting keys on your MIDI keyboard to convince yourself that this approach works.)

cheers,
Benno. |
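[Editor's note] Benno's conclusion — share only the RAM-cached sample head across voices, and give each voice its own playback position (and, past the head, its own disk stream) — can be modeled minimally with hypothetical structs:

```cpp
#include <cstdint>
#include <vector>

// One shared RAM-cached sample head, many independent voice cursors.
struct CachedSample {
    std::vector<int16_t> head;   // first N frames, kept in RAM, shared
    uint64_t total_frames;       // full sample length on disk
};

struct VoiceCursor {
    const CachedSample* sample;
    uint64_t pos;                // per-voice playback position
    // True once this voice has played past the shared cached head
    // and must be fed by its own disk stream.
    bool needs_disk() const { return pos >= sample->head.size(); }
};
```

Two voices triggered on the same sample share `head` but never share a cursor; the filesystem cache handles any overlap in their disk reads.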