From: David O. <da...@ol...> - 2003-01-21 17:31:49
On Tuesday 21 January 2003 18.08, Steve Harris wrote:
> On Tue, Jan 21, 2003 at 04:30:18 +0100, David Olofson wrote:
> > > A JACK app can connect directly to physical i/o, and can connect to any ported part of another application.
> >
> > So can a XAP plugin - only it's actually the host that decides and makes the connection. After all, what you get is still an audio buffer that you're supposed to read or write once per block cycle. I still don't see why it would matter what API(s) are used to get access to the buffers.
>
> So, in other words, it can't ;)

Right; it can't *make* the connection, but it can handle the connection once it's made.

> The idea is that the sampler should be able to connect to hardware inputs from its UI in order to sample. From within XAP you can't do that. I hope. It's not really a plugin UI feature and if it was present it would be bloat.

Agreed.

That said, the sampler being a XAP plugin doesn't prevent its GUI from being aware of the difference between running as a XAP plugin and running as a JACK client. When doing the latter, just make the I/O selection features available. Nothing says that the GUI must talk *only* to the sampler plugin when running as a JACK client.

> > > The behaviour of a hardware sampler leads me to think of it more like a jack application than a plugin. That's not to say that I don't think a sampler plugin is useful, obviously it is, but I think a JACK sampler is more useful.
> >
> > What behavior are you referring to? Seriously, I want XAP plugins to behave as much like real hardware as is possible and desirable. I think there is a design issue with XAP if it can't host a sampler properly.
>
> XAP can host a sampler properly, it's just not /ideal/. If it was ideal it would be JACK.
>
> There are two use cases (it seems, from the Windows world): samplers as applications (GigaSampler, i.e. LinuxSampler under JACK), and as plugins (Halion and friends, i.e. LinuxSampler under XAP).
>
> I think that the application model gives you more power and control, but both are useful.

Yes, I see what you mean now. However, I don't think the API used for the RT part of the sampler matters. When you want to record, just grab audio from one or more inputs. If the GUI knows you're running as a JACK client, it can let the user connect the inputs. If not, the user will have to do that with the plugin host. Where's the problem?

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: M. N. <ne...@us...> - 2003-01-21 17:20:15
Moi,

> I think that sound fonts can map out a portion of the keyboard, from looking at Swami, so would it be possible to have a bass below middle C and a piano above middle C?

Yes, that's possible. Take two instruments and assign the zones on preset level.

> The technical issue with these libraries is that sometimes a velocity of 63 and 64 do not compare well with each other, so there are adjustments made in each sample set to get it all to work. You are going to find these adjustments in a .gig file I'm sure.

That's no problem. The SF2 standard allows completely independent parameters for each individual velocity / key zone. One could even use a modulator to make the transition gradual throughout the vel / key range of a sample (for example velocity to filter cutoff and amplitude).

> The other one we need is 'key switching', where a range of keys on the keyboard are reserved as switches, not notes. When one of these keys is pressed, the complete sample set for all MIDI velocities changes. I think this one is easier to implement though. (Famous last words...) You'll find this in some of the .gig libraries, but possibly not on Worra's site.

Famous last words indeed... That would mean adding new features to the SF2 format, to synth and editor. And then, why switch samples only? If I change to another sample, I'll probably also want to change filter, envelopes and so on.

In case somebody is interested in the solution I'm using to get a similar result (with a control program 'wrapped' around iiwusynth): In iiwusynth there is a quite new feature, the so-called MIDI router. It can change (for example) the MIDI channel of received data, as in 'all data received on channels 4..7 goes to the synth on channel 0'. When I want to switch on-the-fly between different sounds, I assign them to different synth channels. To change sounds, I just upload a new router configuration (the router is smart enough to get pending 'note-off' events right). For example: in state 1, all data goes to channel 0; in state 2, all data to channel 1. I use program change messages to switch between 'states' (instead of reserved keys).

Together with the LADSPA FX unit this also allows changing the effects setup. For example: Rhodes EP on synth channels 0 and 1, and a phaser inserted at the audio output of channel 0 only. This effectively switches the phaser on and off (what's best: held notes are unaffected, so you can hold a chord, switch the router setup, and continue playing with a different sound).

If there is interest in the control program I'm using, let me know. But it's meant for live playing, not for sequencing, and far from ready-for-the-masses.

Cheers
Markus
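The channel-rerouting trick Markus describes above can be sketched in a few lines. This is a hypothetical illustration of the idea, not iiwusynth's actual router API: each "state" routes all incoming data to one synth channel, a program change selects the state, and a note-off follows the channel its note-on was routed to (the "pending note-off" detail he mentions).

```python
# Hypothetical sketch of the router idea (not iiwusynth's real API):
# state N sends everything to synth channel N; program changes switch
# states; a note-off is routed to wherever its note-on went.

class ChannelRouter:
    def __init__(self):
        self.state = 0          # current routing state / target channel
        self.note_channel = {}  # note number -> channel its note-on used

    def route(self, msg_type, data):
        """Return the rerouted (type, channel, data) event, or None."""
        if msg_type == "program_change":
            self.state = data   # program change selects a state, not a patch
            return None
        if msg_type == "note_on":
            self.note_channel[data] = self.state
            return ("note_on", self.state, data)
        if msg_type == "note_off":
            # Pending note-offs go to the channel the note-on used, so
            # held notes end correctly even across a state switch.
            return ("note_off", self.note_channel.pop(data, self.state), data)
        return (msg_type, self.state, data)
```

A note held across a state switch still receives its note-off on the original channel, which is what lets you hold a chord, switch sounds, and keep playing.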
From: Mark K. <mk...@co...> - 2003-01-21 17:13:28
> -----Original Message-----
> From: lin...@li... [mailto:lin...@li...] On Behalf Of David Olofson
> Sent: Tuesday, January 21, 2003 7:50 AM
> To: lin...@li...
> Subject: Re: [Linuxsampler-devel] RE: Hi - Very quiet list - my first post
>
> On Tuesday 21 January 2003 15.51, Mark Knecht wrote:
> [...]
> > The other one we need is 'key switching', where a range of keys on the keyboard are reserved as switches, not notes. When one of these keys is pressed, the complete sample set for all MIDI velocities changes. I think this one is easier to implement though. (Famous last words...) You'll find this in some of the .gig libraries, but possibly not on Worra's site.
>
> That's a very interesting idea... (Especially if you have 88 keys. ;-)
>
> I have NRPN programmable CC->mixer control mapping (so you can hook the mod wheel up to the auto-wah base cutoff or something), but I never thought about mapping *keys*... Or poly pressure. :-)

Key switching is used very nicely in most GSt horn libraries today, as well as in the Scarbee Bass libraries. Here's an idea of what you get (key switch map from memory - definitely not correct):

  Key    Sample
  C-3    Standard notes, long sustain
  D-3    Standard note, staccato
  E-3    Slide up to note
  F-3    Slide down to note
  G-3    Trills

It's very powerful and allows a library developer to map lots of useful stuff into the library without taking up normal note space, and also not confusing beginning users.

It's a bit cranky when you consider notation capabilities in programs like Rosegarden. None of them know that these notes are key switches. I've taken recently to moving key switch notes to a separate track that transmits on the same channel.

Cheers,
Mark
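The key-switch mechanism Mark describes above is simple to model: notes in a reserved range select an articulation (sample set) instead of sounding, and all other notes play from whichever set was last selected. A hypothetical sketch - the note numbers and articulation names are illustrative, not taken from any actual library:

```python
# Hypothetical key-switch dispatcher: reserved notes change the active
# sample set silently; ordinary notes play from the current set.

KEY_SWITCHES = {          # reserved MIDI note numbers -> articulation
    48: "long_sustain",   # C-3 (numbering is illustrative)
    50: "staccato",       # D-3
    52: "slide_up",       # E-3
    53: "slide_down",     # F-3
    55: "trill",          # G-3
}

class KeySwitchInstrument:
    def __init__(self, default="long_sustain"):
        self.articulation = default

    def note_on(self, note, velocity):
        """Switch sample sets on a reserved key; otherwise play a note."""
        if note in KEY_SWITCHES:
            self.articulation = KEY_SWITCHES[note]
            return None                       # key switches make no sound
        return (self.articulation, note, velocity)
```

This also shows why notation programs choke on key switches: on the wire they look exactly like ordinary notes, which is why Mark moves them to a separate track transmitting on the same channel.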
From: Steve H. <S.W...@ec...> - 2003-01-21 17:09:31
On Tue, Jan 21, 2003 at 04:30:18 +0100, David Olofson wrote:
> > A JACK app can connect directly to physical i/o, and can connect to any ported part of another application.
>
> So can a XAP plugin - only it's actually the host that decides and makes the connection. After all, what you get is still an audio buffer that you're supposed to read or write once per block cycle. I still don't see why it would matter what API(s) are used to get access to the buffers.

So, in other words, it can't ;)

The idea is that the sampler should be able to connect to hardware inputs from its UI in order to sample. From within XAP you can't do that. I hope. It's not really a plugin UI feature and if it was present it would be bloat.

> > The behaviour of a hardware sampler leads me to think of it more like a jack application than a plugin. That's not to say that I don't think a sampler plugin is useful, obviously it is, but I think a JACK sampler is more useful.
>
> What behavior are you referring to? Seriously, I want XAP plugins to behave as much like real hardware as is possible and desirable. I think there is a design issue with XAP if it can't host a sampler properly.

XAP can host a sampler properly, it's just not /ideal/. If it was ideal it would be JACK.

There are two use cases (it seems, from the Windows world): samplers as applications (GigaSampler, i.e. LinuxSampler under JACK), and as plugins (Halion and friends, i.e. LinuxSampler under XAP).

I think that the application model gives you more power and control, but both are useful.

- Steve
From: David O. <da...@ol...> - 2003-01-21 15:50:03
On Tuesday 21 January 2003 15.51, Mark Knecht wrote:
[...]
> The other one we need is 'key switching', where a range of keys on the keyboard are reserved as switches, not notes. When one of these keys is pressed, the complete sample set for all MIDI velocities changes. I think this one is easier to implement though. (Famous last words...) You'll find this in some of the .gig libraries, but possibly not on Worra's site.

That's a very interesting idea... (Especially if you have 88 keys. ;-)

I have NRPN programmable CC->mixer control mapping (so you can hook the mod wheel up to the auto-wah base cutoff or something), but I never thought about mapping *keys*... Or poly pressure. :-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: David O. <da...@ol...> - 2003-01-21 15:45:57
On Tuesday 21 January 2003 13.41, M. Nentwig wrote:
> Moi,
>
> I don't think that there are any restrictions concerning velocity mapping with Swami. You can assign samples to arbitrary note and / or velocity 'windows'. In theory it's possible to have a different sample for each key / velocity combination (but I'd bet nobody has tried that yet :)

<plug qualifier="shameless">

How about being able to write C-like code that calculates or otherwise determines mapping when a note is started?

Well, whether or not it's really useful, this is where Audiality is going. Processing timestamped events in C is a bit hairy, so I'd prefer using a custom higher level language for that. Another point is that strapping on a scripting engine eliminates lots of hardcoded logic, and the restrictions that come with it.

</plug>

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: David O. <da...@ol...> - 2003-01-21 15:30:35
On Tuesday 21 January 2003 10.31, Steve Harris wrote:
> On Tue, Jan 21, 2003 at 01:19:42 +0100, David Olofson wrote:
> > > Oh, I see, that's not specific to JACK, it affects all SCHED_FIFO programs.
> >
> > Yes, it's a general RT issue that just happens to impact JACK more than single RT thread apps. I wasn't very clear.
>
> No, it just shows up in jack because that's the only useful way to run multiple SCHED_FIFO applications!

Well, most people aren't using SCHED_FIFO for RT prototyping of RTAI or RTL applications, so that's a point... :-)

[...]

> > > Nothing concrete, but I can imagine how hard it would be to get it working reliably in LADSPA and this isn't really any different.
> >
> > Well, I don't really see any problems beyond getting the sampler/butler interaction to work right. You still have an RT part (the "official" plugin) and one or more lower priority worker threads.
>
> Well, the butler /is/ the problem.

Of course - it's a worker thread, and the plugin needs to communicate with it without screwing other stuff up. Apart from obvious resource sharing conflicts (signals, maybe), it seems to me that it would be little more than a matter of picking a suitable priority for the butler thread. What am I missing?

> > > The JACK audio routing system is just more powerful than plugin hosting.
> >
> > In what way? Isn't the only significant difference that the connections are made by clients rather than a host?
>
> A JACK app can connect directly to physical i/o, and can connect to any ported part of another application.

So can a XAP plugin - only it's actually the host that decides and makes the connection. After all, what you get is still an audio buffer that you're supposed to read or write once per block cycle. I still don't see why it would matter what API(s) are used to get access to the buffers.

> A XAP instance is limited to the connections that can be provided by the host application.

Yes, and the way I see it, that's *intended*. When you're using a host app, the host app is responsible for connections with the outside world. (The only exception would be "driver plugins", which interface the RT net with JACK, ALSA and other APIs.) If the host doesn't allow the user to make the desired connections, the host is broken and/or not the right tool for the job.

From the user POV, the only difference is that the JACK version would integrate I/O selection in the LinuxSampler UI, while the XAP version would rely on the host UI for that. Is that a problem?

> The behaviour of a hardware sampler leads me to think of it more like a jack application than a plugin. That's not to say that I don't think a sampler plugin is useful, obviously it is, but I think a JACK sampler is more useful.

What behavior are you referring to? Seriously, I want XAP plugins to behave as much like real hardware as is possible and desirable. I think there is a design issue with XAP if it can't host a sampler properly.

> > > Obviously you need that too, but for something as potentially sophisticated as a sampler I'd really want it available directly to JACK. More layers of overhead and shims would kinda defeat the point.
> >
> > Well, I have problems seeing where the overhead is, when you're still dealing with buffers of float32 samples all over the place, but I'm probably missing something. You'll need wrappers or other solutions in one direction or another, no matter what the lowest level API is - unless you support only one API, of course.
>
> Well as XAP and JACK are fundamentally callback based you can provide common source with just the control handling that isn't shared.

Exactly.

> It isn't necessary to have all the XAP host cruft (VVIDs, events, blah, blah) between jack and the sampler code.

Right, but you still need a control interface. And it should probably be sample accurate and support ramping, even if the first priority is driving it from MIDI.

If you use the ALSA sequencer API directly, you'll have to provide an alternative interface anyway, since the ALSA sequencer doesn't make much sense for the XAP plugin. If you use your own custom control interface, you need to wrap it with custom code for *both* JACK and XAP.

Using XAP, no extra work is needed for the XAP version (obviously), and you could just use a standard or custom MIDI->XAP driver/converter (which is one of the first things I'll implement, as I still need MIDI to do anything useful) together with LinuxSampler when running it as a JACK client.

Using XAP as your "custom" API makes a lot of sense to me. If it's too complex to be a viable solution for that, I'm suspecting that we need to do some cleaning up. It's really supposed to be about as clean and simple as possible, while providing what you need for instrument control. If it's not suitable for a sampler, I think we might be on the wrong track.

What I'm saying is that XAP is *intended* for this kind of stuff. If it doesn't fit, we'll have to make it fit, or there's just no point in having it.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
From: Mark K. <mk...@co...> - 2003-01-21 14:52:23
> -----Original Message-----
> From: lin...@li... [mailto:lin...@li...] On Behalf Of Josh Green
> Sent: Tuesday, January 21, 2003 4:25 AM
>
> What kind of velocity functionality are you looking for?

Josh,
   I think Markus had it basically right in his follow-up email. We need to be able to map multiple sample sets against a single note, or range of notes, but choose them based on what MIDI velocity is received. This is the way all of the good GSt libraries work.

As an example only, for the piano libraries they may sample the piano at 4, 8 or even 16 different playing key pressures. Then the softest sample is mapped from a MIDI velocity of 0-31, the second from 32-63, the third from 64-95, the fourth from 96-127. Within each range the same sample is played, but the sample's audio volume is adjusted based on the velocity, so that a MIDI velocity of 93 plays louder than a velocity of 68, but they both play the same sample.

The technical issue with these libraries is that sometimes a velocity of 63 and 64 do not compare well with each other, so there are adjustments made in each sample set to get it all to work. You are going to find these adjustments in a .gig file I'm sure.

The other one we need is 'key switching', where a range of keys on the keyboard are reserved as switches, not notes. When one of these keys is pressed, the complete sample set for all MIDI velocities changes. I think this one is easier to implement though. (Famous last words...) You'll find this in some of the .gig libraries, but possibly not on Worra's site.

Cheers,
Mark
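The four-layer scheme Mark outlines above (0-31, 32-63, 64-95, 96-127) reduces to integer division for the layer choice, plus a velocity-dependent gain within each layer. A minimal sketch, assuming equal-width layers and a simple linear gain curve - real libraries use the per-layer adjustments he mentions:

```python
# Sketch of equal-width velocity layers: the layer index picks the sample,
# the gain scales playback of that same sample within its layer. Real .gig
# libraries tweak per-layer gains so e.g. velocities 63 and 64 match up.

def velocity_layer(velocity, num_layers=4):
    """Map a MIDI velocity (0-127) to a sample layer index."""
    if not 0 <= velocity <= 127:
        raise ValueError("MIDI velocity must be in 0..127")
    return velocity // (128 // num_layers)

def velocity_gain(velocity):
    """Linear gain curve: higher velocity -> louder playback, same sample."""
    return velocity / 127.0
```

With four layers, velocities 68 and 93 both land in the third layer (64-95), so they trigger the same sample, but 93 plays back louder - exactly the behaviour described above.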
From: M. N. <ne...@us...> - 2003-01-21 12:41:48
Moi,

I don't think that there are any restrictions concerning velocity mapping with Swami. You can assign samples to arbitrary note and / or velocity 'windows'. In theory it's possible to have a different sample for each key / velocity combination (but I'd bet nobody has tried that yet :)

-Markus
From: Josh G. <jg...@us...> - 2003-01-21 12:25:38
On Mon, 2003-01-20 at 09:56, Mark Knecht wrote:
> Josh,
>    Hi. Removed iiwusynth-devel as I am not a member of that list.
>
> I like what I see in Swami so far. It reminds me of parts of the GigaStudio editing interface where you build your own .gig files. It doesn't have all of the velocity mapping stuff, but most of it seems to be there.

What kind of velocity functionality are you looking for?

> The thing I see in Swami right now, and forgive me if I'm wrong about this after just a short read through, is that it appears oriented to dealing with samples and libraries, as opposed to being a MIDI-based sample player (i.e., an instrument). It does look like it makes using fluid-synth potentially easier, so I should pay attention to it for that reason alone I suppose.

Currently the GUI is somewhat outdated compared to the underlying API architecture. It's currently structured primarily as an editor. But I have many things on my list to add to make it a better front-end for FluidSynth (wavetable MIDI channel instrument selections, GUI MIDI controllers, session modulators, etc). As things are now you can start up FluidSynth within Swami using the alsa_seq MIDI driver and connect a sequencer to it (you can do this with FluidSynth stand-alone as well).

Also I'm going to be making the layout very dynamic and flexible. The GUI is already separated into self-contained objects. I just need to create a GUI manager that allows creating new GUI objects, designating their layout (docked with other elements or in its own dialog, etc) and setting their view parameters. So you could for instance create multiple patch tree objects and a couple of envelope editors, and define their layout and save and restore it. This will allow for different setups depending on whether someone is editing, using it as a wavetable bank manager, sequencing with it, etc.

> If you want to go look at Battery's interface, it's pretty clean and to the point.
>
> http://www.nativeinstruments.de/index.php?id=battery_us
>
> It's really oriented towards playing short samples, but it can be used for longer ones also.

Yeah, the interface looks nice. I already have a crude envelope editor in place. I haven't worked on it in a while, but I have been thinking about making it really flexible: allowing it to be overlaid on the sample waveform, allowing multiple parameters from different instruments to be edited simultaneously, etc.

> BTW - I am NOT a Battery owner. I have used it very little. I was just pointing out that as a sample playing app, it is a model that people seem to get their hands around quickly. I also think it's ideal for a first test vehicle for this sampler engine.

As far as LinuxSampler and Swami are concerned, LinuxSampler may end up being a wavetable engine within Swami, as well as my libInstPatch library being used for accessing patch formats. This all remains to be seen though.

> Maybe I'm taking some of the posts here in the wrong light, but yours and Dave's both seem aimed at getting me to use something that currently exists, where my idea was to find good uses for this new Linux-Sampler engine. If I'm off base and people here don't think this is a good use, then that's OK too. I was just speaking up since this list has been 'very quiet', as the title in my thread indicates!

Sure.. You got things going again :)

> Cheers,
> Mark

Lates..
Josh Green
From: Steve H. <S.W...@ec...> - 2003-01-21 09:32:32
On Tue, Jan 21, 2003 at 01:19:42 +0100, David Olofson wrote:
> > Oh, I see, that's not specific to JACK, it affects all SCHED_FIFO programs.
>
> Yes, it's a general RT issue that just happens to impact JACK more than single RT thread apps. I wasn't very clear.

No, it just shows up in jack because that's the only useful way to run multiple SCHED_FIFO applications!

> > The intention is that linuxsampler will be in-process anyway.
>
> But can you do that with JACK at this point? (Haven't checked lately.)

Well yes, the alsa_driver is in-process. There is some support for loading applications in-process at runtime, but it's not 100%.

> > Nothing concrete, but I can imagine how hard it would be to get it working reliably in LADSPA and this isn't really any different.
>
> Well, I don't really see any problems beyond getting the sampler/butler interaction to work right. You still have an RT part (the "official" plugin) and one or more lower priority worker threads.

Well, the butler /is/ the problem.

> > The JACK audio routing system is just more powerful than plugin hosting.
>
> In what way? Isn't the only significant difference that the connections are made by clients rather than a host?

A JACK app can connect directly to physical i/o, and can connect to any ported part of another application. A XAP instance is limited to the connections that can be provided by the host application.

The behaviour of a hardware sampler leads me to think of it more like a jack application than a plugin. That's not to say that I don't think a sampler plugin is useful, obviously it is, but I think a JACK sampler is more useful.

> > Obviously you need that too, but for something as potentially sophisticated as a sampler I'd really want it available directly to JACK. More layers of overhead and shims would kinda defeat the point.
>
> Well, I have problems seeing where the overhead is, when you're still dealing with buffers of float32 samples all over the place, but I'm probably missing something. You'll need wrappers or other solutions in one direction or another, no matter what the lowest level API is - unless you support only one API, of course.

Well as XAP and JACK are fundamentally callback based you can provide common source with just the control handling that isn't shared. It isn't necessary to have all the XAP host cruft (VVIDs, events, blah, blah) between jack and the sampler code.

- Steve
From: Mark K. <mar...@at...> - 2003-01-21 02:27:42
On Tue, 2003-01-21 at 01:35, Josh Green wrote:
> Swami currently only supports SoundFont files (I'm just now adding DLS2 support). SoundFont uses 16 bit data only; DLS2 officially only supports 8 or 16 bit data, but the format could in theory accommodate 24 bit or other formats. They just wouldn't be portable (DLS2 does have support for conditional proprietary stuff).

If you're interested in the .gig format, you can get lots of files at http://www.worrasplace.com - he's got a nice selection of large and small ones. I know nothing of the internal format of the file, but I suspect that there are web sites out there with info.

Cheers,
Mark
From: Mark K. <mar...@at...> - 2003-01-21 01:56:46
Josh,
   Hi. Removed iiwusynth-devel as I am not a member of that list.

I like what I see in Swami so far. It reminds me of parts of the GigaStudio editing interface where you build your own .gig files. It doesn't have all of the velocity mapping stuff, but most of it seems to be there.

The thing I see in Swami right now, and forgive me if I'm wrong about this after just a short read-through, is that it appears oriented to dealing with samples and libraries, as opposed to being a MIDI-based sample player (i.e., an instrument). It does look like it makes using fluid-synth potentially easier, so I should pay attention to it for that reason alone I suppose.

If you want to go look at Battery's interface, it's pretty clean and to the point.

http://www.nativeinstruments.de/index.php?id=battery_us

It's really oriented towards playing short samples, but it can be used for longer ones also.

BTW - I am NOT a Battery owner. I have used it very little. I was just pointing out that as a sample playing app, it is a model that people seem to get their hands around quickly. I also think it's ideal for a first test vehicle for this sampler engine.

Maybe I'm taking some of the posts here in the wrong light, but yours and Dave's both seem aimed at getting me to use something that currently exists, where my idea was to find good uses for this new Linux-Sampler engine. If I'm off base and people here don't think this is a good use, then that's OK too. I was just speaking up since this list has been 'very quiet', as the title in my thread indicates!

Cheers,
Mark

On Tue, 2003-01-21 at 01:35, Josh Green wrote:
> On Mon, 2003-01-20 at 13:42, Mark Knecht wrote:
> > Josh,
> >    Hi. I'm not much of a fluid-synth user yet, but I'll outline the things I think I look to Battery for:
> >
> > 1) An up-front GUI that's pretty easy to see and understand (important for us 'command-line challenged' types!)
>
> That's basically what Swami is (and much more really, it's an entire API framework for manipulating patch formats, with a GUI as one of the interfaces). It has a FluidSynth plugin to use it as its wavetable synth engine (currently the only supported one, but I will be adding a hardware EMU8k/10k plugin in the future). Of course Swami still needs a bit of work to make it the best patch editor in the world :)
>
> > 2) Uses 16 & 24-bit wave files easily
>
> Swami currently only supports SoundFont files (I'm just now adding DLS2 support). SoundFont uses 16 bit data only; DLS2 officially only supports 8 or 16 bit data, but the format could in theory accommodate 24 bit or other formats. They just wouldn't be portable (DLS2 does have support for conditional proprietary stuff).
>
> > 3) Assigns specific samples to both specific MIDI notes AND channels. Has tuning, ADSR envelope plus limited plug-in support for each note.
>
> Yes to everything.. Except that MIDI channel mapping is usually done by selecting Bank:Preset pairs which are specific instruments or banks of drums (not built into the format). Drums are traditionally only on MIDI channel 10, although SoundFont does not restrict this. Plugin support for each note? Not sure what you mean by that, but FluidSynth does have a LADSPA host for adding LADSPA plugins to the synthesis output (no GUI support for this yet though).
>
> > 4) Velocity support - maps velocity to different samples. (VERY important for using .gig files. Not typical of sound font based tools.)
>
> Yes.. Swami has this already. Each zone can have its own velocity range which causes the sound to play; can also layer velocity/key range zones.
>
> > 5) Easy to mix samples from different sets to make a new set.
>
> Swami can open multiple patch files and easily copy samples/instruments between them, if this is what you mean.
>
> > I would really be happy if Swami might start by reading .gig files and allowing me to export things like a kick from one set and a snare from another, save them and load them in a Battery-like tool.
>
> I don't know much about .gig files, but I heard someone mention they are based on DLS2. If this is the case it might be easy to add support for them after DLS2 support is finished.
>
> > I hope this gives you some ideas.
>
> Sure.. A lot of this is already available though.. Have you tried Swami/FluidSynth? The current CVS is GTK1.2 based and works with FluidSynth CVS (at least last time I checked). Development is currently happening on the CVS swami-1-0 branch, but it isn't operational quite yet.
>
> Cheers.
> Josh Green
From: David O. <da...@ol...> - 2003-01-21 01:49:08
|
On Tuesday 21 January 2003 00.57, Mark Knecht wrote: > > > The problem of GS is that it is a > > > > [...] > > > > So, in short, GS is a Windows specific performance hack, while > > Halion is a plugin sampler Done Right - only on the wrong OS. > > Ah, but GS has the library that the others wish they had. (And > that I've already invested a couple grand in, so let's not get > religious and go some other direction! There would be huge value in > being able to load GSt libraries into ANYTHING we do. Well, I was talking about software design - not file formats. Of course, LinuxSampler should load and play *anything*! :-) > GSt crashes all the time when running on a PC with other apps. It's > actually pretty stable on its own machine. That's to be expected when abusing an OS like that, it seems... And that's why I gave up on audio programming on Windoze a few years ago. > > > MIDI sequencing and a sampler/synth engine on the same box is > > > not a problem since sequencing only takes a fraction of the > > > available resources. If you add HD recording to the equation, > > > then the workload increases significantly but nothing speaks > > > against running both the HDR and the sampler software in > > > the same box. > > > > Except that they need separate disks, unless they share the disk > > butler, I think... Just adding another disk would probably be > > acceptable to most serious users, though. > > There's a lot going on these days on the sequencer side with > notation. I expect that I will run 2-3 computers to really do what > I want to do, but that's me. Well, I use two with Audiality, because I have yet to find a Linux sequencer that I can compile, that does what I need, and that doesn't get on my nerves. Still using Cakewalk on Windoze, that is. (Although that's getting on my nerves as well - and not only for political reasons! *heh*) > I can already bring my disks to their knees just running Ardour. 
I > doubt my current, sub-2GHz Athlon XP would run this sampler at the > level I push GSt, which is 10-15 stereo libraries and maybe 100 > voices sustained over time. Hans Zimmer, doing movie scores, has > talked of pushing multiple copies of GSt to the level of 300-500 > voices sustained. Those guys are using arrays of SCSI drives. It > can be a lot bigger than just another drive. Sounds like they have some serious seeking overhead there... I had 16 stereo tracks playing on a 5 GB 5400 rpm drive under Windoze 3.11, so I know for a fact that it's possible to get at least 60% of the nominal sustained rate of the drive. IIRC, I used 400 kB of buffering per track for that, and you need to multiply that with the number of tracks to keep the seeking overhead constant. That is, 800 kB for 32 tracks, or 8 MB (!) for 320 tracks. Yes, this is *per track* buffer size. Indeed, SCSI disks are generally faster when it comes to access times, but not *that* much faster. RAID arrays (with the same data on all disks) only divide average access times by the number of drives, or something like that. //David Olofson - Programmer, Composer, Open Source Advocate .- The Return of Audiality! --------------------------------. | Free/Open Source Audio Engine for use in Games or Studio. | | RT and off-line synth. Scripting. Sample accurate timing. | `---------------------------> http://olofson.net/audiality -' --- http://olofson.net --- http://www.reologica.se --- |
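David's buffering rule above can be put into numbers. A minimal sketch, assuming the roughly 25 kB per-track-per-stream constant implied by his 400 kB / 16-track figure; the constant is inferred from the message, not taken from any real sampler's code:

```python
# Sketch of the buffering rule David describes: to keep per-track seek
# overhead constant, the buffer for EACH track must grow linearly with
# the number of concurrently streaming tracks, so total RAM grows with
# the square of the track count. The 25 kB constant is inferred from
# the 400 kB / 16-track example above; it is illustrative, not measured.

K_PER_TRACK_PER_STREAM = 25 * 1024  # bytes; 400 kB / 16 tracks

def per_track_buffer(n_tracks: int) -> int:
    """Buffer size needed for each individual track, in bytes."""
    return K_PER_TRACK_PER_STREAM * n_tracks

def total_buffer(n_tracks: int) -> int:
    """Total RAM spent on stream buffers: quadratic in track count."""
    return per_track_buffer(n_tracks) * n_tracks

if __name__ == "__main__":
    for n in (16, 32, 320):
        print(f"{n:4d} tracks: {per_track_buffer(n) // 1024} kB/track, "
              f"{total_buffer(n) // (1024 * 1024)} MB total")
```

This reproduces the figures in the message: 400 kB per track at 16 tracks, 800 kB at 32, and roughly 8 MB per track at 320 — which also shows why 300-500 sustained voices push people toward disk arrays.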
|
From: Josh G. <jg...@us...> - 2003-01-21 01:36:21
|
On Mon, 2003-01-20 at 13:42, Mark Knecht wrote: > Josh, > Hi. I'm not much of a fluid-synth user yet, but I'll outline the things I > think I look to Battery for: > > 1) An up front GUI that's pretty easy to see and understand (important for > us 'command-line challenged' types!) That's basically what Swami is (and much more really, it's an entire API framework for manipulating patch formats, with a GUI as one of the interfaces).. It has a FluidSynth plugin to use it as its wavetable synth engine (currently the only supported one, but I will be adding a hardware EMU8k/10k plugin in the future). Of course Swami still needs a bit of work to make it the best patch editor in the world :) > 2) Uses 16 & 24-bit wave files easily Swami currently only supports SoundFont files (I'm just now adding DLS2 support). SoundFont uses 16 bit data only, DLS2 officially only supports 8 or 16 bit data, but the format could in theory accommodate 24 bit or other formats. They just wouldn't be portable (DLS2 does have support for conditional proprietary stuff). > 3) Assigns specific samples to both specific MIDI notes AND channels. Has > tuning, ADSR envelope plus limited plug in support for each note. Yes to everything.. Except that MIDI channel mapping is usually done by selecting Bank:Preset pairs which are specific instruments or banks of drums (not built into the format). Drums are traditionally only on MIDI channel 10, although SoundFont does not restrict this. Plugin support for each note? Not sure what you mean by that, but FluidSynth does have a LADSPA host for adding LADSPA plugins to the synthesis output (no GUI support for this yet though). > 4) Velocity support - maps velocity to different samples. (VERY important > for using .gig files. Not typical of sound font based tools.) Yes.. Swami has this already. Each zone can have its own velocity range which causes the sound to play, and can also layer velocity/key range zones. 
> 5) Easy to mix samples from different sets to make a new set. > Swami can open multiple patch files and easily copy samples/instruments between them, if this is what you mean. > I would really be happy if Swami might start by reading .gig files and > allowing me to export things like a kick from one set and a snare from > another, save them and load them in a Battery like tool. > I don't know much about .gig files, but I heard someone mention they are based on DLS2. If this is the case it might be easy to add support for them after DLS2 support is finished. > I hope this gives you some ideas. Sure.. A lot of this is already available though.. Have you tried Swami/FluidSynth? The current CVS is GTK1.2 based and works with FluidSynth CVS (at least last time I checked). Development is currently happening on the CVS swami-1-0 branch, but it isn't operational quite yet. > > Cheers, > Mark > Cheers. Josh Green |
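The zone behaviour Josh describes above (each zone has its own key and velocity range, and overlapping ranges layer) can be sketched as a simple range test. This is an illustrative model only, not Swami's or FluidSynth's actual data structures; the sample names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Zone:
    sample: str
    key_lo: int = 0      # MIDI note range this zone responds to
    key_hi: int = 127
    vel_lo: int = 0      # velocity range this zone responds to
    vel_hi: int = 127

def zones_for_note(zones, key, velocity):
    """Return every zone whose key AND velocity ranges contain the note.
    More than one match means the zones layer (play simultaneously)."""
    return [z for z in zones
            if z.key_lo <= key <= z.key_hi
            and z.vel_lo <= velocity <= z.vel_hi]

# Hypothetical snare: velocity splits between a soft and a hard sample,
# plus a room-mic layer that sounds at any velocity.
snare = [
    Zone("snare_soft.wav", key_lo=38, key_hi=38, vel_lo=0, vel_hi=79),
    Zone("snare_hard.wav", key_lo=38, key_hi=38, vel_lo=80, vel_hi=127),
    Zone("snare_room.wav", key_lo=38, key_hi=38),
]
```

A hard hit (note 38, velocity 100) selects the hard sample layered with the room sample; a soft hit selects the soft sample plus the room sample. This is exactly the velocity-split behaviour Mark wants for .gig-style libraries.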
|
From: David O. <da...@ol...> - 2003-01-21 01:35:23
|
On Tuesday 21 January 2003 00.48, Mark Knecht wrote: [...GUI...] > > It's not a top priority for me, since creating sounds from > > scratch is workable with a text editor, and loading WAVs is one > > trivial line of script per file. > > The value of GUI's early is marketing, and I don't think that's > important right now. That's a good point. Also, most serious users care more about quality and features, so having something that can do the job is worth a lot more than having something that looks cool, but doesn't cut it. > I actually do agree and believe that there's a lot of value in starting > with good scripts and testing the features BEFORE you invest in a > GUI. Yeah, you might actually get it *right*! :-) (And rebuilding GUIs ain't all that fun...) > Let's go after that first. I'll go look more at what's there > in Audiality. My initial worry was that it was a sequencer, and > wanted to be a master and not an instrument. It contains a MIDI player, but that's it. It was originally meant for playing SFX and music in a game - and I just can't stand MODs these days, and didn't want to rely on SoundFonts, so I decided to use MIDI + samples instead. Using the sequencer and the off-line synth is optional, although you have to use either the C API or scripting (the latter is definitely preferred most of the time) to load WAVs and tweak patch parameters, if you have to. > If it can respond to > external MIDI, then it looks like it's got many of the other > interesting things today. Check - and it even maps the new mixer to NRPNs, so you can configure routing and FXs the way you like directly from the sequencer. There's nothing I hate as much as having to fiddle with all machines to switch between songs and projects... Just making sure to play the init beat of a song is so much smoother. :-) The only "serious" issue right now is that it's all integer/fixed point processing; not float32. 
That means you have 16 bits per synth voice (if even that) and 24 bit integer mixing. Maybe sufficient for most work, but you can't make use of 24 bit samples as it is now. I intend to add full FP support soon, and I might drop the integer support altogether. It's getting more and more pointless, considering what will be "low end hardware" in a few years. Audiality can't compete with a simple MOD player on Pentium and 486 CPUs anyway, and I'm getting more and more bored with that awful integer code, special handling of compilers w/o 64 bit int support and whatnot. On modern CPUs, the choice of algorithms impacts the CPU usage much more than integer vs float, so for new games, Audiality will still be a viable option - just don't use convolution based reverbs! ;-) Use something else for Pentiums, or help maintain the integer support in Audiality. :-) > I'm not clear about whether you, David, see this as a linux-sampler > project. It's an independent project which I've been working on for a little more than a year, as part of a game. I don't know if it makes sense merging the projects (different priorities, "Choice is Good" and all that), but it's probably a good idea to share ideas, knowledge and perhaps some code. BTW, lots of ideas and even some code for XAP come from Audiality. It's entirely possible that I'll split it into a XAP host and some XAP plugins, given that it's basically structured like that internally anyway; it's just not as formal as a plugin API, and doesn't do dynamic loading. > I did, but it doesn't matter to me. Give me some > instructions and scripts and either way I'll go test it. Well, 0.1.0 lacks "waveform mapping", so you can't have more than one waveform per patch. (Right; the demo songs use one channel for each drum sound... *hehe*) You might want to wait for 0.1.1, but I'm not sure when I'll release it. 
Could make a development snapshot, though, as it's back in a functional state now. Most interesting news would be some docs, some envelope generator bug fixes, a much more powerful mixer (with a nice NRPN interface) and perhaps most interestingly to testers; a demo app that loads all sounds I've created so far, and sets the engine up as a MIDI synth. All you have to do is hack the main loader script to load your WAVs instead. > However, I > think there is value in having a linux-sampler project at this > scale. Something that could be put together rather quickly, tested > pretty easily, to get more people interested in the whole program > overall. With small samples it would run out of memory, but with > larger samples it would have to stream from disk. Getting to the > point where we could do that would be cool. I also think it would > help wring out bugs in the engine. > > As GUI's go I think that Battery's is pretty straightforward. They > have something like 50 boxes on the screen. Each box corresponds to > a specific MIDI note and channel. You just load a sound in each box > and go. Each box in Battery has an ADSR, some plugin capabilities > and other things, but to start we just make a matrix of 5x10 or so, > load some wave files and go. That would be a useful device unto > itself. The rest could come later when we've seen the underlying > sample playback engine working. Sounds like about a day's work to get the required features into Audiality - but the GUI is more work. Maybe not too far from what I'm going to implement anyway, though... The GUI editor I have in mind will basically be a hybrid between a text editor and a modular synth editor. When it sees constructs it understands, it displays them as graphical editors instead of text, so you can edit envelopes, maps and other stuff in a smoother manner. "Maps" is what rang a bell. What you're describing is basically a mapping editor of sorts. 
A map is an object that selects things from an array, based on some criteria. For example, you could hack a patch script that maps to other patches, based on pitch. Each of those patches could then map to different waveforms, based on velocity. In the most primitive form, you can just hack something that outputs a script that sets up the mappings and EGs and loads the waveforms. For something more sophisticated, make that an application with access to Audiality's C API, so you can pass the generated script directly, for instant response. > To clarify a couple of things: > > 1) It's a Jack app Planned, and shouldn't be too hard. > 2) It really should handle stereo wave files for samples from day 1 Audiality did stereo samples before it got the name. :-) It's really just because the original sound FX samples for that game were in stereo, but it's a nice feature. That said, I'll probably switch to mono internally (two voices for stereo), since mono + stereo is yet another 2x cases for the voice resamplers. (There are currently 40 of them, to support 8/16 x mono/stereo and 5 quality modes... Lots of macro magic there! *heh*) This is another relic from the days as a low end games SFX engine. > 3) Should either have a hard audio panner built into each box, or > better yet, respond to MIDI panning, Check. > volume Check. > and velocity events. Check. > Don't bother with multiple samples per box (velocity chooses them) > until we've seen it run. Well, I'll probably get that for free. I'm thinking about creating a generic "mapper" object. Just give it a number and it hands you a number back. Then use that to select patches or waveforms based on whatever parameter you like; MIDI CCs, velocity, pitch or whatever. > I completely believe this could be script driven in the beginning, > and possibly stays that way under the hood even longer term. Yeah, that's kind of handy. 
Though I wouldn't recommend inventing a custom language and implementing an interpreter for it, unless you *really* need to. I did it because I'm interested in that kind of stuff, and because I wanted a streamlined syntax that I like, and the ability to run scripts reliably in RT context. Still work to do on the latter, though; need to switch to bytecode first of all. Anyway, a big advantage with using scripts for the interface is that you can change the way things work without hacking and recompiling the GUI or the engine. That's really the main reason why I implemented it in Audiality in the first place; I could edit and test sounds and music in-game without even restarting the game. It's also cool stuff for "power users", who like to play around with things not meant for ordinary users. :-) //David Olofson - Programmer, Composer, Open Source Advocate .- The Return of Audiality! --------------------------------. | Free/Open Source Audio Engine for use in Games or Studio. | | RT and off-line synth. Scripting. Sample accurate timing. | `---------------------------> http://olofson.net/audiality -' --- http://olofson.net --- http://www.reologica.se --- |
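David's generic "mapper" idea in the message above ("give it a number and it hands you a number back") boils down to a range-to-value lookup that can be chained: pitch selects a patch, then velocity selects a waveform within it. A hypothetical sketch (the patch and file names are made up; this is not Audiality's actual API):

```python
import bisect

class Mapper:
    """Maps an input number to an output value via sorted breakpoints.
    ranges is a list of (upper_bound_inclusive, value) pairs in order."""
    def __init__(self, ranges):
        self.bounds = [b for b, _ in ranges]
        self.values = [v for _, v in ranges]

    def __call__(self, x):
        # bisect_left finds the first breakpoint >= x, making each
        # upper bound inclusive; inputs past the last bound clamp.
        i = bisect.bisect_left(self.bounds, x)
        return self.values[min(i, len(self.values) - 1)]

# Chain two mappers, as in the message: pitch -> patch, then
# velocity -> waveform (all names hypothetical).
pitch_to_patch = Mapper([(59, "low_strings"), (127, "high_strings")])
velocity_to_wave = {
    "low_strings":  Mapper([(63, "low_soft.wav"), (127, "low_hard.wav")]),
    "high_strings": Mapper([(63, "hi_soft.wav"), (127, "hi_hard.wav")]),
}

def select_waveform(pitch, velocity):
    patch = pitch_to_patch(pitch)
    return velocity_to_wave[patch](velocity)
```

The same object works for any parameter (MIDI CCs, velocity, pitch), which is what makes the single-number-in, single-value-out interface attractive.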
|
From: David O. <da...@ol...> - 2003-01-21 00:19:57
|
On Tuesday 21 January 2003 00.45, Steve Harris wrote: > On Mon, Jan 20, 2003 at 11:52:47 +0100, David Olofson wrote: > > > > - Linux still breaks JACK out-of-process latency... :-( > > > > Seems to be a scheduler issue, but I don't have first hand > > experience with this. I don't know if it's been fixed yet, but > > Paul has been complaining about it every now and then. > > Oh, I see, that's not specific to JACK, it affects all SCHED_FIFO > programs. Yes, it's a general RT issue that just happens to impact JACK more than single RT thread apps. I wasn't very clear. > The intention is that linuxsampler will be in-process > anyway. But can you do that with JACK at this point? (Haven't checked lately.) > > > - Each point in the overall graph will be a different > > > instance > > > > Yes, but that applies to JACK and applications as well. I don't > > see what you mean. > > No, because a JACK application can export ports to all other jack > applications, whereas a plugin is only available to its host app. Ah, I see. So, if the XAP plugin runs in a JACKified host, where's the difference? Also, I don't see how using multiple instances could change this. They would just be independent samplers, possibly sharing the disk butler, if that runs as a separate process. > > > - Making the threading work well with all hosts will be > > > hard > > > > Possibly. Do you have some details on this? (Conflicts with what > > the host is doing or whatever.) > > Nothing concrete, but I can imagine how hard it would be to get it > working reliably in LADSPA and this isn't really any different. Well, I don't really see any problems beyond getting the sampler/butler interaction to work right. You still have an RT part (the "official" plugin) and one or more lower priority worker threads. > > > - No direct JACK i/o > > > > Why would you need that if you're a XAP plugin? It's the host > > that decides where your audio ports are connected. 
That's one of > > the few really significant differences between JACK and LADSPA, > > XAP, VST etc. > > The JACK audio routing system is just more powerful than plugin > hosting. In what way? Isn't the only significant difference that the connections are made by clients rather than a host? It's still block based I/O. There is audio in your input buffers when you get woken up or called, and there should be audio in your output buffers before you signal back or return. You can have NULL buffers and whatnot to optimize silence with both schemes, of course. > Obviously you need that too, but for something as > potentially sophisticated as a sampler I'd really want it available > directly to JACK. More layers of overhead and shims would kinda > defeat the point. Well, I have problems seeing where the overhead is, when you're still dealing with buffers of float32 samples all over the place, but I'm probably missing something. You'll need wrappers or other solutions in one direction or another, no matter what the lowest level API is - unless you support only one API, of course. //David Olofson - Programmer, Composer, Open Source Advocate .- The Return of Audiality! --------------------------------. | Free/Open Source Audio Engine for use in Games or Studio. | | RT and off-line synth. Scripting. Sample accurate timing. | `---------------------------> http://olofson.net/audiality -' --- http://olofson.net --- http://www.reologica.se --- |
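The RT-plugin-plus-worker-thread split discussed above is the classic single-producer/single-consumer ring buffer pattern: the disk "butler" thread refills the buffer, and the RT thread only reads and never blocks. A minimal sketch of the index arithmetic only, not LinuxSampler's or JACK's actual code; a real implementation would use atomic index updates and memory barriers rather than plain Python attributes:

```python
class SPSCRing:
    """Single-producer/single-consumer ring buffer sketch: the disk
    thread writes, the RT thread reads, and neither blocks the other.
    One slot is always kept empty so full and empty are distinguishable."""
    def __init__(self, capacity):
        self.buf = [0.0] * capacity
        self.cap = capacity
        self.write_idx = 0   # advanced only by the disk (producer) thread
        self.read_idx = 0    # advanced only by the RT (consumer) thread

    def readable(self):
        return (self.write_idx - self.read_idx) % self.cap

    def writable(self):
        return self.cap - 1 - self.readable()

    def push(self, frames):                       # called by disk thread
        n = min(len(frames), self.writable())
        for f in frames[:n]:
            self.buf[self.write_idx] = f
            self.write_idx = (self.write_idx + 1) % self.cap
        return n

    def pop(self, n):                             # called by RT thread
        n = min(n, self.readable())
        out = []
        for _ in range(n):
            out.append(self.buf[self.read_idx])
            self.read_idx = (self.read_idx + 1) % self.cap
        return out  # fewer frames than asked for means an underrun
```

Because each index is written by exactly one thread, no lock is needed in the RT path, which is what makes the scheme usable from a SCHED_FIFO audio callback.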
|
From: Mark K. <mk...@co...> - 2003-01-20 23:58:40
|
> > > > The problem of GS is that it is a > [...] > > So, in short, GS is a Windows specific performance hack, while Halion > is a plugin sampler Done Right - only on the wrong OS. > > Ah, but GS has the library that the others wish they had. (And that I've already invested a couple grand in, so let's not get religious and go some other direction! There would be huge value in being able to load GSt libraries into ANYTHING we do. GSt crashes all the time when running on a PC with other apps. It's actually pretty stable on its own machine. > > MIDI sequencing and a sampler/synth engine on the same box is not a > > problem since sequencing only takes a fraction of the available > > resources. If you add HD recording to the equation, then the > > workload increases significantly but nothing speaks against > > running both the HDR and the sampler software in the same box. > > Except that they need separate disks, unless they share the disk > butler, I think... Just adding another disk would probably be > acceptable to most serious users, though. > There's a lot going on these days on the sequencer side with notation. I expect that I will run 2-3 computers to really do what I want to do, but that's me. I can already bring my disks to their knees just running Ardour. I doubt my current, sub-2GHz Athlon XP would run this sampler at the level I push GSt, which is 10-15 stereo libraries and maybe 100 voices sustained over time. Hans Zimmer, doing movie scores, has talked of pushing multiple copies of GSt to the level of 300-500 voices sustained. Those guys are using arrays of SCSI drives. It can be a lot bigger than just another drive. |
|
From: Mark K. <mk...@co...> - 2003-01-20 23:49:15
|
> -----Original Message----- > From: lin...@li... > [mailto:lin...@li...]On Behalf Of > David Olofson > Sent: Monday, January 20, 2003 3:06 PM > To: lin...@li... > Subject: Re: [Linuxsampler-devel] Hi - Very quiet list - my first post > > > My focus (in this specific conversation) really initiated from > > the idea that linux would benefit from having a simple app that > > starts from a base of audio samples, as opposed to a synth of some > > type, and knows enough about MIDI, envelopes and processing to be > > interesting and useful. > > I see. Reading your other post, it seems like most of what you've > mentioned is either implemented or high on the TODO list for > Audiality (obviously, since I'm actually going to *use* this myself > :-) - but I don't know when I'll get around to hack some form of GUI. > It's not a top priority for me, since creating sounds from scratch is > workable with a text editor, and loading WAVs is one trivial line of > script per file. The value of GUI's early is marketing, and I don't think that's important right now. I actually do agree and believe that there's a lot of value in starting with good scripts and testing the features BEFORE you invest in a GUI. Let's go after that first. I'll go look more at what's there in Audiality. My initial worry was that it was a sequencer, and wanted to be a master and not an instrument. If it can respond to external MIDI, then it looks like it's got many of the other interesting things today. I'm not clear about whether you, David, see this as a linux-sampler project. I did, but it doesn't matter to me. Give me some instructions and scripts and either way I'll go test it. However, I think there is value in having a linux-sampler project at this scale. Something that could be put together rather quickly, tested pretty easily, to get more people interested in the whole program overall. With small samples it would run out of memory, but with larger samples it would have to stream from disk. 
Getting to the point where we could do that would be cool. I also think it would help wring out bugs in the engine. As GUI's go I think that Battery's is pretty straightforward. They have something like 50 boxes on the screen. Each box corresponds to a specific MIDI note and channel. You just load a sound in each box and go. Each box in Battery has an ADSR, some plugin capabilities and other things, but to start we just make a matrix of 5x10 or so, load some wave files and go. That would be a useful device unto itself. The rest could come later when we've seen the underlying sample playback engine working. To clarify a couple of things: 1) It's a Jack app 2) It really should handle stereo wave files for samples from day 1 3) Should either have a hard audio panner built into each box, or better yet, respond to MIDI panning, volume and velocity events. Don't bother with multiple samples per box (velocity chooses them) until we've seen it run. I completely believe this could be script driven in the beginning, and possibly stays that way under the hood even longer term. |
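The Battery-style matrix Mark describes above (each box bound to one MIDI channel/note pair, responding to pan, volume and velocity) is essentially a dictionary keyed by (channel, note). A hypothetical sketch of the dispatch, using the standard MIDI controller assignments of CC 7 for channel volume and CC 10 for pan; sample names and the linear pan law are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Box:
    sample: str            # wave file loaded into this box
    volume: float = 1.0    # would be set from MIDI CC 7 (channel volume)
    pan: float = 0.5       # would be set from MIDI CC 10; 0.0 = hard left

# The 5x10 matrix: one box per (MIDI channel, note) pair.
boxes = {}
boxes[(10, 36)] = Box("kick.wav")   # GM-style drum channel, kick note
boxes[(10, 38)] = Box("snare.wav")

def note_on(channel, note, velocity):
    """Trigger the box mapped to this channel/note, scaling by velocity.
    Returns (sample, left_gain, right_gain), or None for unmapped notes."""
    box = boxes.get((channel, note))
    if box is None:
        return None                       # no box here: silence
    gain = box.volume * (velocity / 127.0)
    left = gain * (1.0 - box.pan)         # simple linear pan for the sketch
    right = gain * box.pan
    return (box.sample, left, right)
```

Per the message, velocity-split layers within a box are deliberately left out; the lookup stays a flat one-sample-per-box map until the engine underneath is proven.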
|
From: Steve H. <S.W...@ec...> - 2003-01-20 23:46:26
|
On Mon, Jan 20, 2003 at 11:52:47 +0100, David Olofson wrote: > > > - Linux still breaks JACK out-of-process latency... :-( > > Seems to be a scheduler issue, but I don't have first hand experience > with this. I don't know if it's been fixed yet, but Paul has been > complaining about it every now and then. Oh, I see, that's not specific to JACK, it affects all SCHED_FIFO programs. The intention is that linuxsampler will be in-process anyway. > > - Each point in the overall graph will be a different > > instance > > Yes, but that applies to JACK and applications as well. I don't see > what you mean. No, because a JACK application can export ports to all other jack applications, whereas a plugin is only available to its host app. > > - Making the threading work well with all hosts will be > > hard > > Possibly. Do you have some details on this? (Conflicts with what the > host is doing or whatever.) Nothing concrete, but I can imagine how hard it would be to get it working reliably in LADSPA and this isn't really any different. > > - No direct JACK i/o > > Why would you need that if you're a XAP plugin? It's the host that > decides where your audio ports are connected. That's one of the few > really significant differences between JACK and LADSPA, XAP, VST etc. The JACK audio routing system is just more powerful than plugin hosting. Obviously you need that too, but for something as potentially sophisticated as a sampler I'd really want it available directly to JACK. More layers of overhead and shims would kinda defeat the point. - Steve |
|
From: David O. <da...@ol...> - 2003-01-20 23:06:01
|
On Monday 20 January 2003 22.52, Mark Knecht wrote: [...] > I don't know much of anything about Audiality, except for I have a > hard time spelling it! ;-) Yeah - weird name, isn't it? :-) > Really, the web page looks interesting, but I've never tried to use > it. Well, if you're command line challenged, as you say, you should probably wait 'till there's a GUI for it. ;-) > My focus (in this specific conversation) really initiated from > the idea that linux would benefit from having a simple app that > starts from a base of audio samples, as opposed to a synth of some > type, and knows enough about MIDI, envelopes and processing to be > interesting and useful. I see. Reading your other post, it seems like most of what you've mentioned is either implemented or high on the TODO list for Audiality (obviously, since I'm actually going to *use* this myself :-) - but I don't know when I'll get around to hack some form of GUI. It's not a top priority for me, since creating sounds from scratch is workable with a text editor, and loading WAVs is one trivial line of script per file. > This is where I'm coming from on this specific day, which is not to > say I'm not interested in other things, but this is a > linux-sampler, after all. ;-) Yes - and it seems like "Battery emulation" would be a rather useful starting point that still isn't way beyond reach. I'm afraid most of the work is in the GUI programming domain, though. The parts that deal with what you mentioned are really the trivial parts in Audiality, and that's the case with most audio applications. Most of the complexity in Audiality is stuff that synths generally don't have; such as the fully configurable mixer, the off-line synth and the script interpreter that drives it. //David Olofson - Programmer, Composer, Open Source Advocate .- The Return of Audiality! --------------------------------. | Free/Open Source Audio Engine for use in Games or Studio. 
| | RT and off-line synth. Scripting. Sample accurate timing. | `---------------------------> http://olofson.net/audiality -' --- http://olofson.net --- http://www.reologica.se --- |
|
From: David O. <da...@ol...> - 2003-01-20 22:53:03
|
On Monday 20 January 2003 22.41, Steve Harris wrote: > On Mon, Jan 20, 2003 at 10:16:30 +0100, David Olofson wrote: > > JACK client: > > ... > > > - Linux still breaks JACK out-of-process latency... :-( Seems to be a scheduler issue, but I don't have first hand experience with this. I don't know if it's been fixed yet, but Paul has been complaining about it every now and then. > > XAP plugin: > > + You don't have to worry about audio I/O. > > + There is a comprehensive instrument control protocol. > > + Control and audio routing and transport is integrated. > > + You're running in-process, which works well on any LL kernel. > > - Doesn't exist yet ;) That's a point! :-) > - Each point in the overall graph will be a different > instance Yes, but that applies to JACK and applications as well. I don't see what you mean. > - Making the threading work well with all hosts will be > hard Possibly. Do you have some details on this? (Conflicts with what the host is doing or whatever.) > - No way of receiving arbitrary MIDI data (is that useful?) It's not useful for normal stuff, since you can map all RT MIDI events to XAP events one way or another through a standard driver/translator plugin. In fact, you can even pipe SysEx through XAP (as data controls), so there might just not be a need for MIDI at all; just a standard way of passing "non-standard" stuff to plugins, if they want it. (You'd just have input ports for SysEx if you want it.) > - No direct JACK i/o Why would you need that if you're a XAP plugin? It's the host that decides where your audio ports are connected. That's one of the few really significant differences between JACK and LADSPA, XAP, VST etc. > The obvious solution is to provide the services as both XAP and > JACK. Yes... Although I don't see why you couldn't provide your own JACKified XAP host that automatically loads and hooks up LinuxSampler. 
XAP would just be the native API of the synth - wrap as you like, using standard or custom wrappers. Or you could just make the sampler a lib with whatever interface you like, and then implement "wrappers" for JACK, XAP, VST or whatever. Not sure I see the point in designing your own private API just for that, though. I was going to do something like that with Audiality, but haven't decided how to do it yet. I might just make it a XAP plugin, as XAP and Audiality internals have lots in common anyway. There will still be an easy to use "games sound engine" style API as well, but that could be implemented as a specialized XAP host for Audiality (or maybe even a "real" XAP host), rather than Audiality itself. //David Olofson - Programmer, Composer, Open Source Advocate .- The Return of Audiality! --------------------------------. | Free/Open Source Audio Engine for use in Games or Studio. | | RT and off-line synth. Scripting. Sample accurate timing. | `---------------------------> http://olofson.net/audiality -' --- http://olofson.net --- http://www.reologica.se --- |
|
From: David O. <da...@ol...> - 2003-01-20 22:23:41
|
On Monday 20 January 2003 19.54, Benno Senoner wrote: > Well, the fact that the engine is decoupled from the GUI means that > both solutions, a standalone sampler machine and the "full virtual > studio in one machine" are doable. Exactly - and using an API that handles both audio and sample accurate control would make it easier to get right and more robust, I think. > Regarding the stress that disk based sampling puts on the machine. > Yes it is a quite stressful application, but I don't think PCI is > the main bottleneck here. > Yes its peak performance is only 133MB/sec but as we know harddisks > usually transfer only 10-25MB/sec in the real world. Yes - but if you have a phatt array of drives, PCI won't cut it. I guess that's why mid and high end servers, and even workstations come with 64 bit PCI and stuff. > This means that you can have two separate disks, one for HD > recording and the other for the sampler, running in parallel without > interfering too much with each other. BTW, there's a problem if you need HDR and direct-from-disk sampling at the same time. (Which I would all the time, if I used a disk sampler, since I never record synth stuff to disk before vocals. I like to have full control until the final mixdown.) Obviously, you can just use three disks; one for the system, one for the sampler and one for the HDR. However, if you use more than one disk sampler at a time, this starts getting out of hand... This is why I suggested a standard disk butler API on LAD - but I think we need to do a lot more thinking and coding before turning that into a standard. Maybe something useful will eventually take form in the internals of LinuxSampler, so we can design a XAP extension around that, rather than guessing. > The problem of GS is that it is a [...] So, in short, GS is a Windows specific performance hack, while Halion is a plugin sampler Done Right - only on the wrong OS. 
> On the other hand Linux's multitasking works exceptionally well
> and well designed realtime audio software can fully utilize the
> machine's resources without compromising stability.

Yes - although I think Paul has pointed out that there are still issues with the scheduler when pushing it as hard as JACK does. IIRC, it sometimes doesn't schedule the right process. It seems obvious that this should be fixed in 2.5, but until then...

> This is why, given enough horsepower, I support the idea of the
> whole virtual studio in one single Linux box.

...and with 2 or more P-4 or better CPUs, preferably utilizing SIMD, we're looking at some *serious* DSP power...

> MIDI sequencing and a sampler/synth engine on the same box is not a
> problem since sequencing only takes a fraction of the available
> resources. If you add HD recording to the equation, then the
> workload increases significantly, but nothing speaks against
> running both the HDR and the sampler software in the same box.

Except that they need separate disks, unless they share the disk butler, I think... Just adding another disk would probably be acceptable to most serious users, though.

[...]

> Regarding battery vs fully-fledged sampler: I agree, better to
> start out with something simple and elaborate later, but if we get
> this "sampler-construction-kit" done, then evolving from the simple
> to the "extended" version of the sampler will take only a small
> amount of time, since you will basically design it using your mouse
> and not your editor and C compiler.
>
> :-)
>
> For example, Juan says he prefers to first write hardcoded engines
> and then start thinking about recompilation techniques, but I see it
> as a bit of a waste of time. I prefer to take a longer
> design phase and write an engine that can later scale really high
> without the limitations of hardcoded engines.
Well, as long as you actually get through the design phase without killing the project, spending more time on the design sounds like a good idea. It saves time in the long run, provided you think straight and know what you're doing.

That said, I did it the other way around with Audiality. (Which is effectively a working test bed for my XAP ideas, BTW.) It's also the first major spare time project of mine that's really off the ground. It's been more or less fully functional right from the start (although it didn't do much :-), almost 2 MB of code ago. I've just added features as I needed them (ehrm, or just thought they would be fun to play with), and restructured when things started getting too messy.

Note that this restructuring part is *very* important, but usually rather boring. I believe the only reason why Audiality isn't a total heap of spaghetti is that I have trouble remembering complex APIs and logic. If it's too messy, I just can't work with it and must fix it.

Obviously, there's a great deal of rewriting and moving stuff around in this process, but that's not all bad. I've actually *tried* the solutions that I ruled out, and when I get a new "great idea", I can just test it and see if it actually works. When it comes to design, testing ideas in a real app is a lot more useful than testing in a fake environment. Often, you realize where you went wrong as soon as you start typing the test code.

What I'm saying is that the "hack away" approach may not be the most effective way of creating software, but it sure beats never getting off the ground. I think this all boils down to "It's more fun to hack something that sort of works." If it doesn't compile, run and do something "interesting" every 10-20 hours of hacking or so, you're in trouble...

Can't tell you what to do, but given my experience, I would say Juan has a point.
Though, it seems to me that rudimentary C code generation shouldn't have to be *that* complicated. You could probably fake most of it, and basically just have the engine output, compile and load a hand-coded voice struct, until you have the real stuff in place.

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
   --- http://olofson.net --- http://www.reologica.se ---
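The "fake it first" code generation described above can be sketched roughly like this (Python standing in for the engine here; the function names and the canned voice template are illustrative only, not anything from Audiality or LinuxSampler): emit a fixed, hand-written voice function as C source, shell out to the system compiler, and load the result - the "generator" only becomes real later.

```python
import ctypes
import os
import shutil
import subprocess
import tempfile

# Hand-written C "voice" that a real generator would eventually
# produce from patch data; for now the template is simply canned.
VOICE_C = r"""
float process_voice(float in_sample) {
    return in_sample * 0.5f;   /* trivial gain stage as a placeholder */
}
"""

def generate_voice_source():
    """Pretend code generator: just returns the canned C source."""
    return VOICE_C

def compile_and_load(source):
    """Compile the C source into a shared object and load it."""
    workdir = tempfile.mkdtemp()
    c_path = os.path.join(workdir, "voice.c")
    so_path = os.path.join(workdir, "voice.so")
    with open(c_path, "w") as f:
        f.write(source)
    subprocess.check_call(
        ["cc", "-shared", "-fPIC", "-O2", "-o", so_path, c_path])
    lib = ctypes.CDLL(so_path)
    lib.process_voice.restype = ctypes.c_float
    lib.process_voice.argtypes = [ctypes.c_float]
    return lib

if __name__ == "__main__" and shutil.which("cc"):
    voice = compile_and_load(generate_voice_source())
    print(voice.process_voice(1.0))
```

Swapping the canned template for real per-patch generation later changes only `generate_voice_source()`; the compile-and-load plumbing stays the same.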
From: Mark K. <mk...@co...> - 2003-01-20 21:54:10
> -----Original Message-----
> From: lin...@li...
> [mailto:lin...@li...] On Behalf Of David Olofson
> Sent: Monday, January 20, 2003 10:33 AM
> To: lin...@li...
> Subject: Re: [Linuxsampler-devel] Hi - Very quiet list - my first post
>
> Well, I'm not very familiar with Battery, but maybe Audiality will be
> able to do what you want rather easily? It's basically a sampler with
> an integrated mixer. Although the primary source of waveforms is a
> script driven off-line synth (modular synth style), it can load and
> process audio files as well. (Which is a feature that remains from
> Audiality's days as a games sound FX engine.)

I don't know much of anything about Audiality, except that I have a hard time spelling it! ;-)

Really, the web page looks interesting, but I've never tried to use it. My focus (in this specific conversation) really started from the idea that Linux would benefit from having a simple app that starts from a base of audio samples, as opposed to a synth of some type, and knows enough about MIDI, envelopes and processing to be interesting and useful.

This is where I'm coming from on this specific day, which is not to say I'm not interested in other things, but this is linux-sampler, after all. ;-)
From: Mark K. <mk...@co...> - 2003-01-20 21:43:16
Josh,
Hi. I'm not much of a fluid-synth user yet, but I'll outline the things I look to Battery for:

1) An up-front GUI that's pretty easy to see and understand (important for us 'command-line challenged' types!)
2) Uses 16- and 24-bit wave files easily.
3) Assigns specific samples to both specific MIDI notes AND channels. Has tuning, an ADSR envelope, plus limited plug-in support for each note.
4) Velocity support - maps velocity to different samples. (VERY important for using .gig files. Not typical of sound font based tools.)
5) Easy to mix samples from different sets to make a new set.

I would really be happy if Swami might start by reading .gig files and allowing me to export things like a kick from one set and a snare from another, save them and load them in a Battery-like tool.

I hope this gives you some ideas.

Cheers,
Mark

> -----Original Message-----
> From: Josh Green [mailto:jg...@us...]
> Sent: Monday, January 20, 2003 11:53 AM
> To: Mark Knecht
> Cc: lin...@li...; Swami Devel
> Subject: RE: [Linuxsampler-devel] Hi - Very quiet list - my first post
>
> On Mon, 2003-01-20 at 09:15, Mark Knecht wrote:
> > <cut>
> >
> > As always, I probably have too many ideas, but personally I'd really
> > like to see a sample player somewhat like Battery for Linux. I think
> > that it's a much simpler interface, requires far fewer system
> > resources to work well, and could use wave files easily. I just think
> > overall it would get a lot of use up front. The existing sample
> > players, like iiwusynth and timidity, are not, IMO, really very good
> > for dealing with drum sets.
> >
> > Has there been any discussion of doing something like that?
>
> What do you find lacking with iiwusynth and drum sets? If you are
> talking of creating drum sets, it's probably more the patch editor that
> needs to be up for this task. I have had a number of ideas in the area
> of making it easier to create drum kits with Swami. I would like to
> know what kinds of features you could envision for your own uses.
>
> <cut>
>
> > Cheers,
> > Mark
>
> Cheers,
> Josh Green
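The note/channel/velocity mapping Mark asks for in points 3 and 4 is essentially a lookup from (MIDI channel, note, velocity) to a sample plus per-note settings. A minimal data-structure sketch (all names here are hypothetical, not from Battery, Swami or any sampler):

```python
from dataclasses import dataclass

@dataclass
class Zone:
    """One sample mapped to a velocity range on a given channel/note."""
    sample: str           # e.g. a path to a 16- or 24-bit wave file
    vel_lo: int = 0
    vel_hi: int = 127
    tune_cents: int = 0   # per-note tuning, as in point 3

class DrumMap:
    """Maps (MIDI channel, note, velocity) to a Zone, as in point 4."""
    def __init__(self):
        self._zones = {}   # (channel, note) -> [Zone, ...]

    def add(self, channel, note, zone):
        self._zones.setdefault((channel, note), []).append(zone)

    def lookup(self, channel, note, velocity):
        # First zone whose velocity range contains the hit wins.
        for zone in self._zones.get((channel, note), []):
            if zone.vel_lo <= velocity <= zone.vel_hi:
                return zone
        return None

# Soft and hard hits of the same kick (note 36, channel 10)
# trigger different samples - the .gig-style behavior Mark wants.
kit = DrumMap()
kit.add(10, 36, Zone("kick_soft.wav", 0, 63))
kit.add(10, 36, Zone("kick_hard.wav", 64, 127))
```

Point 5 (mixing kits) then reduces to copying `Zone` entries between `DrumMap` instances and saving the result.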