From: David O. <da...@ol...> - 2004-01-12 16:14:00
On Monday 12 January 2004 16.14, Mark Knecht wrote:
[...]
> > BTW, priorities is what I use to keep sound effects from killing
> > long notes in in-game music and stuff like that. Simple and
> > handy, and works well as a generic improvement of the voice
> > stealing system. You can't (ab)use it to make polyphonic patches
> > monophonic, though.
>
> I think priorities could address my request. If the first X number
> of voices for any gig were high priority, then the piano's 50th
> note couldn't steal the violin's 1st note no matter how soft it
> was.

That should work - and it has the advantage of not having to figure
out some "sensible" number of voices to reserve for each gig, MIDI
port, MIDI channel or whatever. (There's a rough sketch of what I
mean in the PS below.)

[...]
> > Before I do that, though, I should probably benchmark the code
> > and see what kind of CPU time ratio there is between the control
> > stuff and the voice mixers... :-)
>
> I made this point once before, but I don't think anyone responded
> to it. If I benchmark the number of voices used in GSt and LS to
> render that jazz piano ogg I've posted, then the release samples
> are FAR more important from a CPU/disk usage point of view than
> the main key strike notes.

As expected... Those slow releases and stuff (whether they're just
looping + envelopes, or special samples) are the main reason why h/w
synths have 64-256 voices these days, I think.

> (Active state? Note on state? What term should I
> use?)

I would suggest a separation of EG/voice states and MIDI terminology.
NoteOn, NoteOff and various ControlChange events are just ways of
suggesting what the EG should do. Depending on the patch, it may
react instantly, react based on what state it's in, or ignore events
entirely.

Nothing says that NoteOff is special. Lots of instruments don't have
a corresponding function, or releasing a keyboard key to stop a note
just doesn't feel right; it may make more sense to use NoteOn only,
with low velocities for muting notes. Whatever.

Put an abstraction layer between MIDI and the voice state machine,
and try to come up with terminology that doesn't suggest that some
MIDI events directly affect the state of the EG. (I have some
cleaning up to do in that area in Audiality too, I think. See the
second sketch in the PS for the general idea.)

> I think that doing very much benchmarking before release samples
> are included is only going to lead to doing it again. In the piano
> ogg above, what you hear from LS today is only 10-15% of what GSt
> is doing with the same MIDI file.

That 85-90% doesn't add *that* much to the experience, though! ;-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---
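
PS. Untested and off the top of my head, and all the names (Voice,
steal_voice() and so on) are made up for the example - this is not
actual LS or Audiality code. But something like this is what I mean
by priority based voice stealing: a new note may only steal voices
of equal or lower priority, so the piano can never grab the violin's
voice no matter how softly the violin is playing.

// Hypothetical sketch of priority based voice stealing.
#include <array>
#include <cstdint>

struct Voice {
	bool     active   = false;
	int      priority = 0;    // Higher = harder to steal
	float    level    = 0.0f; // Rough output level estimate
	uint32_t age      = 0;    // Frames since NoteOn
};

constexpr int MAX_VOICES = 64;
std::array<Voice, MAX_VOICES> voices;

// Find a voice for a new note of priority 'new_prio'. Free voices
// are used first. Otherwise, only voices of equal or lower priority
// are candidates, and among those we steal the quietest/oldest one.
// Returns nullptr if every voice is protected, in which case the
// new note is simply dropped.
Voice* steal_voice(int new_prio)
{
	Voice* victim = nullptr;
	float best = 0.0f;
	for (auto& v : voices) {
		if (!v.active)
			return &v;  // Free voice; no stealing needed
		if (v.priority > new_prio)
			continue;   // Protected; leave it alone
		// Prefer quiet, old voices. The weights are arbitrary.
		float score = v.level - 0.001f * v.age;
		if (!victim || score < best) {
			victim = &v;
			best = score;
		}
	}
	return victim;
}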
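
PPS. And a second sketch, for the MIDI <-> EG abstraction. Again,
the event and state names are invented for the example; the point is
just that MIDI events only *suggest* what the EG should do, the
patch decides what a suggestion actually means, and NoteOff is
nothing special - it's just one way of producing a "stop" suggestion.

// Hypothetical sketch of a MIDI -> voice control abstraction layer.
#include <cstdint>

// Engine side terminology; deliberately not MIDI terminology.
enum class VoiceEvent { Start, StopSuggested, Control };
enum class EGState    { Attack, Decay, Sustain, Release, Dead };

struct EG {
	EGState state = EGState::Dead;

	// The patch decides what a suggestion means: a sustaining
	// patch enters Release on StopSuggested, while a one-shot
	// (drum) patch simply ignores it.
	void handle(VoiceEvent ev, bool one_shot)
	{
		switch (ev) {
		case VoiceEvent::Start:
			state = EGState::Attack;
			break;
		case VoiceEvent::StopSuggested:
			if (!one_shot && state != EGState::Dead)
				state = EGState::Release;
			break;
		case VoiceEvent::Control:
			break;  // Patch dependent; ignored here
		}
	}
};

// The only place that knows about MIDI: translate raw events into
// suggestions. NoteOff just maps to StopSuggested, and a patch
// could equally well map a low velocity NoteOn to the same thing.
VoiceEvent translate(uint8_t status, uint8_t velocity)
{
	const uint8_t type = status & 0xF0;
	if (type == 0x80 || (type == 0x90 && velocity == 0))
		return VoiceEvent::StopSuggested;
	if (type == 0x90)
		return VoiceEvent::Start;
	return VoiceEvent::Control;
}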