From: Peter <ma...@ve...> - 2002-11-04 13:48:46

From: Steve H. <S.W...@ec...> - 2002-11-04 12:16:32

[Peter, I'm assuming you meant to mail this to the list, so I'm replying to the list anyway ;)]

As discussed on IRC last night, the problem is that some sample formats have features that can't easily be implemented with a disk-based generic engine. For example, the AKAI sample format allows you to vary the start point with note-on velocity (though I don't know by how much). I think some hardware samplers allow you to modulate the loop points in realtime, though the 3000-series AKAIs apparently cannot.

So I think it is better to have separate sub-engines that communicate with the main engine at a high level (e.g. to the sub-engine: "here is a bunch of event data ...", from the sub-engine: "I want 8 outputs", "here is a lump of audio data ..."). Obviously the data transfer would be callback based or something like that.

The alternative would be to normalise all the sample formats into one grand unified sample format and just handle that (I believe that is how GigaSampler works?). I suspect that is less efficient though, and it doesn't allow for specific support for different styles of sample playback.

I think it would make sense to preparse the event data rather than trying to handle raw MIDI. Maybe using something like the OSC event stream? Anyone know of other preparsed event formats?

- Steve

On Mon, Nov 04, 2002 at 09:06:08 +1000, Peter wrote:
> I personally like the idea of a sampler construction kit,
> or at least a modularised sample engine.
> My agenda is more towards loop sampling/re-sequencing;
> normal event handling in samplers (especially the AKAIs) doesn't lend
> itself to that kind of stuff, so I'll probably be more inclined to work
> towards the Yamaha style of things (ish).
> I've been playing around with some ideas over the past few months.
> I'd like the sampler's disk streaming, audio I/O and MIDI channel
> routing (e.g. note on/off, pitch and mod, but NOT CC or RPN/NRPN data)
> to be handled by the base engine, aka the I/O engine.
> Then, when a file is loaded onto a layer (MIDI channel), the base class
> calls the respective sampler extension, which handles everything on the
> channel, from sample loading to note on/off handling to audio and even
> MIDI outputs, depending on the type.
> That way you could have, say, an instrument extension which could load
> DLS files or SoundFonts, an AKAI extension that loads AKAI files,
> etc. etc.
> I guess that's enough for the time being.
> Cheers

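The preparsed event stream Steve suggests, raw MIDI decoded once by the main engine into typed, timestamped events, could look something like this minimal C++ sketch (all names here are invented for illustration, not actual LinuxSampler code):

    #include <cstdint>
    #include <vector>

    // Hypothetical preparsed event, in the spirit of an OSC-style stream:
    // sub-engines never see raw MIDI bytes, only typed events with a
    // sample-accurate timestamp.
    struct Event {
        enum class Type { NoteOn, NoteOff, PitchBend, Modulation };
        Type     type;
        uint32_t frame;    // offset into the current audio buffer
        uint8_t  channel;  // MIDI channel the event arrived on
        uint8_t  key;      // note number (NoteOn/NoteOff only)
        float    value;    // velocity, bend amount or mod depth, 0.0-1.0
    };

    using EventBuffer = std::vector<Event>;  // "a bunch of event data ..."
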
From: Benno S. <be...@ga...> - 2002-11-03 15:09:25

Hi,
I'm forwarding an excerpt of a mail Paul Kellett sent me. He said he will help us out with sample library importing (he is the guy who wrote the AKAI sample format docs you find on the web) and other DSP issues. Yay!

cheers,
Benno

----
> I discussed some issues on IRC with Steve H. and Juan L. about supporting
> multiple sample formats, and the debate is whether it is more convenient to
> implement standard engines (AKAI S1000, GigaSampler etc.) in a static way,
> or to use the audio unit compiler we have in mind and design these "standard"
> engines using a graphical editor.

Not sure I understand this... But there are 2 stages which can be separate:

- loading the program (patch) and sample information
- accessing the sample data, which could just be a list of where to find the sample data and what format the data is in, and not care whether it is an individual Akai sample or part of a .GIG file.

Maybe there could be an intermediate program format, which any foreign formats need to be converted to, while the sample data stays in the original file(s). I'm not sure how important export to foreign formats is?

It looks like lots of sample libraries for Giga are using all its triggering options, to have different "pages" of samples available by MIDI control, so this would be important to people wanting an alternative to GigaSampler.

Paul
-----

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.

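Paul's intermediate program format, normalised patch data pointing back into the untouched library files, might be modelled along these lines (a speculative sketch; every name here is made up):

    #include <cstdint>
    #include <string>
    #include <vector>

    // A region points back into the untouched source file instead of
    // copying the audio, so an Akai sample and a slice of a .GIG file
    // are addressed the same way.
    struct SampleRef {
        std::string path;        // original library file on disk
        uint64_t    dataOffset;  // byte offset of the first frame
        uint64_t    frameCount;
        uint32_t    sampleRate;
        uint8_t     channels;
        uint8_t     bitDepth;    // 16, 24, ...
    };

    // The normalised "program": key/velocity zones mapping to SampleRefs.
    struct Zone {
        uint8_t   loKey, hiKey;
        uint8_t   loVel, hiVel;
        SampleRef sample;
    };

    struct Program {
        std::string       name;
        std::vector<Zone> zones;
    };
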
From: Steve H. <S.W...@ec...> - 2002-11-02 18:43:51

On Sat, Nov 02, 2002 at 07:37:36 +0100, Benno Senoner wrote:
> The question is: do we build a single "one-size-fits-all" engine and
> write loaders for various sample formats, trying to fit the original
> sample parameters (filters, envelopes etc.) in such a way that they
> sound as close as possible to the original, or is it better to implement
> separate engines for each type of sample library (e.g. AKAI S1000, SF2,
> GIGA, etc.), each associated with the related sample loader?

I think that the best approach is to make the sample loaders mini engines; all the things like how the sampler handles note off etc. will vary a lot from sampler to sampler. If we just make the engine provide the MIDI routing and parsing, and deal with the JACK I/O stuff, then the individual sub-engines can do whatever they like*. It also means we can get up and running with a single sampler format without compromising the design, as long as the interface between the main engine and the sub-engines is general enough. If the sub-engines want to use recompilation techniques then the main engine can just export an API to handle that.

* ...although this makes me think, playing devil's advocate: maybe we should not be aiming for one giant engine that will handle every sample format known to man. Maybe we should make a "sampler construction kit" that allows people to bolt on their sample loading code and sampler emulating code and build a sampler out of that. It would encourage lots of simple, special purpose tools and avoid toolkit issues.

- Steve

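The split Steve describes, with the main engine owning MIDI routing and JACK I/O while each format lives behind a narrow interface, could be expressed as an abstract class roughly like this (a sketch only, reusing the hypothetical Event type from the earlier sketch):

    #include <cstddef>
    #include <string>

    struct Event;  // the hypothetical preparsed event from above

    // One instance per loaded sample format ("mini engine").  The main
    // engine never sees format internals; it hands the sub-engine
    // preparsed events and asks it to render audio.
    class SubEngine {
    public:
        virtual ~SubEngine() = default;

        // Can this sub-engine load the given file at all?
        virtual bool probe(const std::string& path) const = 0;
        virtual bool load(const std::string& path) = 0;

        // "I want 8 outputs": declared by the sub-engine, wired by the
        // main engine to its JACK ports.
        virtual int outputCount() const = 0;

        // "Here is a bunch of event data ..." for this audio cycle.
        virtual void postEvents(const Event* events, size_t count) = 0;

        // "Here is a lump of audio data ...": render one cycle into the
        // buffers the main engine provides (one per declared output).
        virtual void render(float** outputs, size_t frames) = 0;
    };
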
From: Benno S. <be...@ga...> - 2002-11-02 16:34:43

Hi,
Yesterday I had a discussion on IRC with Steve H. and Juan L. about some issues regarding LinuxSampler. Since our goal is to provide a sampler that can work with a large number of sample library formats, we need to implement engines that reproduce the samples so that they sound as they did when played on the original hardware/software sampler (or at least come very close).

The question is: do we build a single "one-size-fits-all" engine and write loaders for various sample formats, trying to fit the original sample parameters (filters, envelopes etc.) in such a way that they sound as close as possible to the original, or is it better to implement separate engines for each type of sample library (e.g. AKAI S1000, SF2, GIGA, etc.), each associated with the related sample loader?

Since the plan is to use compilation techniques in order to allow very flexible signal flow diagrams while providing speed that is on par with hardcoded designs, my question was whether it would be better to implement these commercial sampler designs (AKAI, SF2, GIG etc.) using our signal network builder (it will probably become a graphical editor) without writing any (or almost zero) C/C++ code. Of course we need an associated loader, and it is probably hard not to handcode the individual loaders, but for the engine I think one could avoid the implementation step once a powerful signal network builder is in place.

Juan says that we should use the signal network compiler only for future designs and experimental stuff for now, and start by providing hardcoded versions of the sampler engines mentioned above, converting them to a signal network at a later stage (when the network compiler is sufficiently evolved). While this could provide some short term advantages (faster results, perhaps more developers jumping on the bandwagon), it is IMHO a bit of a waste of time.

I don't have that much experience writing very large audio applications, but my proof-of-concept linuxsampler code (see home page), while still small and despite being organized in C++ classes, started to look unclean and hard to maintain, since every design decision is embedded deep in the code.

Generating notes from MIDI events requires performing several tasks that depend on each other. I'm thinking for example about:

handling the keymap:
- which notes are on/off
- handling multiple note-ons on the same note on a certain channel (e.g. does a note off mute the first note or the last one? we could make this configurable by using linked lists assigned to each key of the MIDI scale)
- sustain pedal
- different key/velocity/controller zones triggering different samples with different parameters

voice generation stuff:
- sample playback (from RAM/disk)
- looping (needs to work in synergy with the sample playback)
- modulation, enveloping, filters and FXes that become active based on the instrument's preset
etc.

Within the code I'd like to keep these things separate in a clean way, but I think that is not so simple with hardcoded designs, since you (or at least I) tend to optimize things and perform several tasks within the same routine, thus effectively merging things that belong to different layers. This is why I'm asking you folks what the right way to do it looks like in your opinion. Waiting for opinions from everyone!

Josh: regarding SF2/DLS importing, your help is very welcome; perhaps you could comment on how to best solve the multiple sample format importing problem.

PS: Regarding the name change of the project from EVO to LinuxSampler, I did it for several reasons: first, the name LinuxSampler makes it clear that it is a sampler for Linux; second, it will make it easier for users and developers to find us on search engines, since the term "evo" brings up lots of unrelated results. Plus, I think the name Linux in LinuxSampler should advertise Linux as a viable audio platform. But nothing stops us from porting it to, let's say, MacOS X too, since it is a Unix derivative... perhaps generating interest in Linux among pro-audio guys, since they are almost all Mac users.

cheers,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.

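Benno's per-key linked lists, which keep the note-off policy (mute the first note or the last?) configurable, might look roughly like this hypothetical sketch (a real-time engine would use a preallocated pool rather than std::list, which allocates):

    #include <array>
    #include <cstdint>
    #include <list>

    struct Voice;  // a playing sample instance, defined elsewhere

    enum class NoteOffPolicy { MuteOldest, MuteNewest };

    class KeyMap {
    public:
        void noteOn(uint8_t key, Voice* v) { keys_[key].push_back(v); }

        // Which of the stacked voices on this key does a note off release?
        Voice* noteOff(uint8_t key, NoteOffPolicy policy) {
            auto& stack = keys_[key];
            if (stack.empty()) return nullptr;
            Voice* v;
            if (policy == NoteOffPolicy::MuteOldest) {
                v = stack.front(); stack.pop_front();
            } else {
                v = stack.back();  stack.pop_back();
            }
            return v;  // caller sends it into its release phase
        }

    private:
        std::array<std::list<Voice*>, 128> keys_;  // one list per MIDI key
    };
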
From: Josh G. <jg...@us...> - 2002-11-01 20:21:51

So you are resuming development of evo? Cool. I wanted to contact Benno again in particular, since we spoke a year ago about having a SoundFont loading library. I have gone through several revisions of this idea and currently have a rather untested code base (but a good API, in my opinion) of a library called libInstPatch. It's based on GObject/glib 2.0 and has a fully multi-threaded patch object system. The low-level load/save routines still need an API rework, but you can check out the API at the Swami developers site: http://swami.sourceforge.net/devel.php

libInstPatch is rather full featured because I'm also using it as the basis of Swami (the new name for Smurf, if you didn't know already). My plans are to add other wavetable patch file formats like DLS2, etc. (right now it supports SoundFont 2.01). It might be nice to use this as a basis for linuxsampler as well, and perhaps have a plugin to use linuxsampler with Swami (is that the name, or is it evo?). So what do you think? Cheers!

Josh Green

From: Steve H. <S.W...@ec...> - 2002-11-01 19:39:17

On Fri, Nov 01, 2002 at 08:16:02 +0100, Christian Schoenebeck wrote:
> But this kind of static routing is not very user friendly, is it? It's not
> very convenient having to recompile every time a small piece of the routing
> is changed. I would definitely sacrifice some ms of latency for the sake of
> easy and intuitive usability.

My idea was that the static graphs would be built from LADSPA sources, so the user can still edit graphs in realtime, but when they are happy with a graph they can hit "compile" and get some cycles back.

- Steve

From: Christian S. <chr...@ep...> - 2002-11-01 19:15:47

On Monday, 28 October 2002 at 20:54, Benno Senoner wrote:
> I was toying with the idea of using some sort of recompilation
> techniques where the user can graphically design the sampler's signal
> flow (routing, modulation, FXes etc.) which in turn gets translated into
> C code that gets loaded as a .so file and executed within the sampler's
> main app. This would make for a very flexible engine while retaining
> most of the speed of hardcoded ones.

But this kind of static routing is not very user friendly, is it? It's not very convenient having to recompile every time a small piece of the routing is changed. I would definitely sacrifice some ms of latency for the sake of easy and intuitive usability.

From: Antti B. <ant...@mi...> - 2002-10-31 12:24:05

Benno Senoner wrote:
> Hi,
> thanks for being willing to help us. We plan to decouple the sampler
> engine and GUI completely, so you can treat the sampler engine as an
> application able to run on an embedded device :-)

I forgot to mention I've been planning an audio + small LCD display (you know, those text-only displays) superserver, with a GUI system of its own. This way it would be a lot easier to drop LinuxSampler in. More about it later.

-agb

From: Benno S. <be...@ga...> - 2002-10-31 11:50:39

Hi,
thanks for being willing to help us. We plan to decouple the sampler engine and GUI completely, so you can treat the sampler engine as an application able to run on an embedded device :-)

Regarding the development stage: almost 2 years ago I wrote some proof-of-concept code that can stream 60 voices on PII hardware from disk in real time at sub-5 ms latencies, and this is a good starting base for the new engine, which will use recompilation techniques for maximum flexibility and speed.

The sampler we have in mind is quite an evil beast, since it is basically a combination of real-time execution cores, disk streaming, efficient DSP algorithms, networking layers for remote control, possibly handling of clustered environments (you know, musicians are CPU hungry people :-) ) and GUI stuff. This means we need many experts in many areas, so your help is very welcome.

cheers,
Benno

On Thu, 2002-10-31 at 09:44, Stoll, Jake wrote:
> Hi all,
> I'm interested in helping out with the LinuxSampler development. I've
> just subscribed, so I'm not sure at what stage you're at with the
> development (very early, I'm guessing).
> I've got general C/C++ programming ability (mostly C experience with
> control systems and embedded devices), but haven't had much Linux GUI
> or audio programming experience. The amount of time I can commit varies
> due to work.
> Regards,
> Jake.

From: Stoll, J. <St...@lo...> - 2002-10-31 08:45:10

Hi all,

I'm interested in helping out with the LinuxSampler development. I've just subscribed, so I'm not sure at what stage you're at with the development (very early, I'm guessing).

I've got general C/C++ programming ability (mostly C experience with control systems and embedded devices), but haven't had much Linux GUI or audio programming experience. The amount of time I can commit varies due to work.

Regards,
Jake.

From: Juan L. <co...@re...> - 2002-10-30 01:54:17

Here's a control flow proposal for linuxsampler. It's based on my previous experiences with multitimbral/polyphonic sound synthesis applications (some background at http://freshmeat.net/~reduz).

The control flow is basically how "objects control the state of other objects" (data flow usually goes in the opposite direction). Moving this diagram to a lower level (C++), it can be taken as "which class knows/includes/uses which class" :)

The * in EDITOR means that I'd like to develop that area further in depth.

Attached is the diagram in 3 formats: DIA, EPS and PNG (only 100k overall anyway), so you can read it with whatever app fits you best.

Cheers!

Juan Linietsky

From: Frank N. <bea...@we...> - 2002-10-30 00:16:07

confirm 172218

From: Juan L. <co...@re...> - 2002-10-29 22:16:04

On Tue, 29 Oct 2002 15:55:41 -0600, "Richard A. Smith" <rs...@bi...> wrote:
> Benno:
> Good to see you back in business again. I'm looking forward to burning
> up the free time I don't have messing with linuxsampler.
> My vote on programming languages would be C++. Really, if you code
> things in a modular and decoupled way like all the books preach, then
> your C looks and acts like C++ anyway. Unless you are really doing some
> fancy template run-time typing stuff, there is little difference
> between C and C++.

I couldn't agree more! So I second that vote.

> In 90% of the code you _won't_ need the speed advantage that C offers.
> Which is largely a myth anyway; it's more the programmer than the
> language. Yes, there are some run-time issues with C++, but unless you
> are really careful about how you code your C, it probably won't be the
> language choice that slows things down. GCC is amazingly good at
> optimizing things once you tweak the parameters right.

That's true; I wouldn't worry about optimization either, since gcc does a fantastic job. I wrote my tracker in C++, and the mixing is actually faster than mikmod (which is written in C).

> I grew up learning C and believing that doing things via pointers was
> always faster than using arrays. Yeah, 10 years ago this was true, but
> GCC broke me of this. My best attempt at a stepper motor control algo I
> was working on was soundly stomped on by just using arrays and letting
> GCC optimize.

The same happened to me recently while doing ARM development. If you do things the "normal" way (i.e. using indexes and such), in most cases gcc will be able to optimize better than if you do the optimization yourself. I've even seen many cases recently of people who tried to write asm modules to optimize and ended up frustrated that gcc optimized better than their asm code.

> Oh, and I like the socket interface for the GUI-to-engine
> communication. This allows them to be on different machines or even
> different architectures, which, considering the Mac-heavy world that
> most studios have, is probably a good thing.

Yeah, I had in mind that musicians could build a Linux box dedicated to being a sampler (in other words, building a cheap sampler box, since most commercial ones are over 2k USD). This way frontends can be done natively in any OS without having to resort to more complicated things such as the X protocol/Xlib/etc., and without the sampler needing to have all those libs installed. And it could also be good if, in the future, we want to profit from this project by selling PCs specially built, configured and tuned in such a way that they can't be distinguished from real hardware samplers :)

Cheers!

Juan Linietsky

From: Juan L. <co...@re...> - 2002-10-29 22:00:57

Let's hope this time it makes it through.

From: Richard A. S. <rs...@bi...> - 2002-10-29 21:56:29

Benno:

Good to see you back in business again. I'm looking forward to burning up the free time I don't have messing with linuxsampler.

My vote on programming languages would be C++. Really, if you code things in a modular and decoupled way like all the books preach, then your C looks and acts like C++ anyway. Unless you are really doing some fancy template run-time typing stuff, there is little difference between C and C++.

In 90% of the code you _won't_ need the speed advantage that C offers. Which is largely a myth anyway; it's more the programmer than the language. Yes, there are some run-time issues with C++, but unless you are really careful about how you code your C, it probably won't be the language choice that slows things down. GCC is amazingly good at optimizing things once you tweak the parameters right.

One thing I've learned over the years is that you can't predict where you will _really_ need optimization; things are just too complex. You just have to code in the cleanest, most maintainable way and then come back and optimize. If you always try to code for performance, you usually end up with just the opposite.

I grew up learning C and believing that doing things via pointers was always faster than using arrays. Yeah, 10 years ago this was true, but GCC broke me of this. My best attempt at a stepper motor control algo I was working on was soundly stomped on by just using arrays and letting GCC optimize.

And really, if you get _that_ concerned about performance, then you should probably use arch-specific assembly and/or things like MMX/3DNow! type stuff, like all the current audio encoding/decoding libraries do. The management and maintenance benefits of C++ seem to be a clear winner for a project like this.

Oh, and I like the socket interface for the GUI-to-engine communication. This allows them to be on different machines or even different architectures, which, considering the Mac-heavy world that most studios have, is probably a good thing. If you use a shared memory setup then you also greatly add to the work needed to use other programming languages for the front-end GUI. So if, say, I wanted to use my new favorite language Python, a set of bindings would have to be written first, but if it's just a socket then it's easy.

--
Richard A. Smith Bitworks, Inc.
rs...@bi... 479.846.5777 x104
Sr. Design Engineer http://www.bitworks.com

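Richard's arrays-versus-pointers observation can be illustrated with a toy mixing loop; the plain indexed form below is exactly the style modern GCC unrolls and vectorizes well at -O2/-O3 (an illustrative sketch, not code from this project):

    #include <cstddef>

    // Plain indexed loop: easy to read, and GCC's optimizer handles it
    // at least as well as hand-rolled pointer arithmetic would.
    void mix(float* out, const float* a, const float* b,
             float gainA, float gainB, size_t frames) {
        for (size_t i = 0; i < frames; ++i)
            out[i] = gainA * a[i] + gainB * b[i];
    }
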
From: Juan L. <co...@re...> - 2002-10-29 20:36:03

Just a test, excuse me.

From: Juan L. <co...@re...> - 2002-10-29 20:24:01

On Mon, 28 Oct 2002 14:26:55 -0800, lin...@li... wrote:
> Linuxsampler-devel -- confirmation of subscription -- request 730633
> [...]

From: Steve H. <S.W...@ec...> - 2002-10-29 17:52:42

On Tue, Oct 29, 2002 at 07:50:32 +0100, Benno Senoner wrote:
> Ah, interesting, so there are problems when the inner loop gets too
> big.

Yes, mostly for the cache reasons you mentioned.

> Any ideas what the best compromise looks like? (Of course it depends on
> the cache size, but it would be cool to have a method of figuring it
> out.)

I think it would be hard/bad to try. Refactoring C(++) code is really hard; I think it's best to leave it as atomic as the DSP programmer made it. You will be surprised how efficient it is; modern CPUs are heavily optimised for small loops.

> Regarding the CV stuff: is CV the ideal (and efficient) model to use in
> a sampler? Excuse my ignorance, but I am not very familiar with the CV
> paradigm (everything is a control value). Any useful URLs that cover
> this topic?

Hard to say if it's efficient. It's a tradeoff: it handles some things very well (i.e. rapidly, unpredictably or continuously changing control values), but it handles things that are mostly constant less efficiently than event based systems. I like it because it is easy to understand, model and process, and it has good worst case performance.

I don't know of any resources, but the principle is very simple. Frequencies are represented on an exponential scale of octaves; amplitudes and time linearly. This covers everything and is very modular.

The obvious system for a sampler would be events for MIDI notes generating sample buffer activity, and CV for internal parameter control. This should get the best of both worlds. It is how SSM works, and that is very efficient and versatile.

I am biased in favour of CV, so you should maybe look for some opposing arguments; I'm not aware of any ;)

- Steve

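The convention Steve describes, pitch on an exponential octave scale with amplitude and time linear, amounts to a pair of one-line conversions (a sketch; the 440 Hz reference for CV 0 is an assumption here, real systems pick their own):

    #include <cmath>

    // One CV unit = one octave; the reference pitch at CV 0 is an
    // assumed 440 Hz for this illustration.
    constexpr float kRefHz = 440.0f;

    inline float cvToHz(float cv) { return kRefHz * std::exp2(cv); }
    inline float hzToCv(float hz) { return std::log2(hz / kRefHz); }
    // Amplitude CVs need no mapping: 0.0 is silence, 1.0 full scale.
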
From: Benno S. <be...@ga...> - 2002-10-29 17:40:58

Ah, interesting, so there are problems when the inner loop gets too big. Any ideas what the best compromise looks like? (Of course it depends on the cache size, but it would be cool to have a method of figuring it out.) We could run some heuristics that try to break a large signal processing chain into smaller loops until the best performance is achieved.

Regarding the CV stuff: is CV the ideal (and efficient) model to use in a sampler? Excuse my ignorance, but I am not very familiar with the CV paradigm (everything is a control value). Any useful URLs that cover this topic?

cheers,
Benno

Steve wrote:
> > in linuxsampler it would be better to process all in a single pass
> > eg:
> Not necessarily; once the inner loop goes over a certain size it becomes
> /very/ inefficient. In fact one of the common ways of optimising biggish
> plugins is to break them into smaller passes. Obviously you won't win
> every time, but I think inlined function calls to LADSPA run() routines
> will be faster on average. Obviously, things like mixers should be
> native, but not e.g. ringmods (multipliers); a decent ringmod has
> antialiasing code in it (though mine doesn't, yet).
> > An additional issue about LADSPA is that it is only suitable for audio
> > processing but not for midi (or event)-triggered audio generation.
> No, it's not suitable for MIDI, but it can certainly do unit generation;
> in fact AFAIK the only bandlimited oscillators for Linux are LADSPA
> plugins. You just use the CV model for triggering.
> - Steve

From: Antti B. <ant...@mi...> - 2002-10-29 17:27:56

Steve Harris wrote:
> On Tue, Oct 29, 2002 at 02:52:45 +0200, Antti Boman wrote:
> > Seems fine. If it's not a Plain Stupid Idea (tm), compiling could be
> > made in a different thread. OTOH, if you have the possibility to test
> I think it would have to be a different process. If you exec() gcc it
> will take over the current process.

Ah, of course, I didn't think it through well enough. fork(), or rather not.

-a

From: Steve H. <S.W...@ec...> - 2002-10-29 15:19:29

On Tue, Oct 29, 2002 at 04:25:41 +0100, Benno Senoner wrote:
> The process() code of LADSPA usually does
>
> for(i=0; i < numsamples; i++) {
>     sample[i] = .....
> }
...
> in linuxsampler it would be better to process all in a single pass
> eg:

Not necessarily; once the inner loop goes over a certain size it becomes /very/ inefficient. In fact, one of the common ways of optimising biggish plugins is to break them into smaller passes.

Obviously you won't win every time, but I think inlined function calls to LADSPA run() routines will be faster on average.

Obviously, things like mixers should be native, but not e.g. ringmods (multipliers); a decent ringmod has antialiasing code in it (though mine doesn't, yet).

> An additional issue about LADSPA is that it is only suitable for audio
> processing but not for midi (or event)-triggered audio generation.

No, it's not suitable for MIDI, but it can certainly do unit generation; in fact, AFAIK the only bandlimited oscillators for Linux are LADSPA plugins. You just use the CV model for triggering.

- Steve

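For reference, running a LADSPA plugin's run() over one small pass, the granularity being debated here, looks roughly like this against the real LADSPA host API (error checking trimmed; the plugin path and port indices are placeholder examples, and a real host would inspect the port descriptors rather than assume in-place processing is safe):

    #include <dlfcn.h>
    #include <ladspa.h>

    // Minimal host fragment: load one plugin and run it over a small
    // buffer.  Path and port numbers are illustrative only.
    float buf[32];  // one 32-frame pass, processed in place

    int runOnePass() {
        void* lib = dlopen("/usr/lib/ladspa/example.so", RTLD_NOW);
        if (!lib) return -1;

        auto getDesc = (LADSPA_Descriptor_Function)dlsym(lib, "ladspa_descriptor");
        const LADSPA_Descriptor* d = getDesc(0);  // first plugin in the .so

        LADSPA_Handle h = d->instantiate(d, 48000);
        d->connect_port(h, 0, buf);               // assumed: port 0 = input
        d->connect_port(h, 1, buf);               // assumed: port 1 = output
        if (d->activate) d->activate(h);

        d->run(h, 32);                            // one small pass

        if (d->deactivate) d->deactivate(h);
        d->cleanup(h);
        dlclose(lib);
        return 0;
    }
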
From: Steve H. <S.W...@ec...> - 2002-10-29 15:08:32

On Tue, Oct 29, 2002 at 02:52:45 +0200, Antti Boman wrote:
> Seems fine. If it's not a Plain Stupid Idea (tm), compiling could be
> made in a different thread. OTOH, if you have the possibility to test

I think it would have to be a different process. If you exec() gcc it will take over the current process.

- Steve

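The separate-process compile Steve means, fork() so that exec()ing gcc replaces only the child, then dlopen() the result, might look like this bare-bones sketch (file paths are placeholders):

    #include <dlfcn.h>
    #include <sys/wait.h>
    #include <unistd.h>

    // Compile generated DSP code in a child process, then load the
    // resulting shared object.
    void* compileAndLoad() {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: becomes gcc; exec() replaces this process only.
            execlp("gcc", "gcc", "-O2", "-shared", "-fPIC",
                   "-o", "/tmp/graph.so", "/tmp/graph.c", (char*)nullptr);
            _exit(127);  // exec failed
        }
        int status = 0;
        waitpid(pid, &status, 0);  // the sampler keeps running meanwhile
        if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
            return nullptr;
        return dlopen("/tmp/graph.so", RTLD_NOW);  // caller dlsym()s the entry point
    }
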
From: Phil K. <phi...@el...> - 2002-10-29 14:38:09

Hi,

UDP would probably be better than TCP for GUI control; it's easier and faster to implement. In the remote control work I've done, I've tried to keep the mappings as close to MIDI as possible, as this allows remote MIDI hardware to interface more easily, but it does mean deciding whether to stay with MIDI's 7-bit resolution or move up to a higher resolution. A lot of performers like to 'twiddle knobs', so sticking with MIDI is a plus.

Using a networked interface also lets you mix and match toolkits for the GUI. QT is nice and fast, and the signal and slot mechanism is really easy, even for a C developer. The QT Designer can rapidly put together a basic interface. GTK is another good choice.

Phil

On Tue, 2002-10-29 at 15:25, Benno Senoner wrote:
> [...]

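A minimal version of the UDP control channel Phil suggests could be as small as the following sketch (the port number and the text wire format are invented; a real protocol would also need acknowledgements and some access control):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    // Listen for tiny text datagrams like "set 7 volume 0.8" from a GUI,
    // possibly on another machine.  Port and message format are made up.
    int runControlLoop() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in addr{};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(9000);  // assumed control port
        if (bind(sock, (sockaddr*)&addr, sizeof(addr)) < 0) return -1;

        char msg[256];
        for (;;) {
            ssize_t n = recvfrom(sock, msg, sizeof(msg) - 1, 0, nullptr, nullptr);
            if (n <= 0) break;
            msg[n] = '\0';
            std::printf("control: %s\n", msg);  // hand off to the engine here
        }
        close(sock);
        return 0;
    }
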
From: Benno S. <be...@ga...> - 2002-10-29 14:16:13

Hi,

I thought about using "interpreted" LADSPA as a preview, with subsequent compilation of the code. I do not see major hurdles to doing this, but inlining LADSPA code, even if it results in faster code than calling a chain of LADSPA plugins, is still suboptimal, especially with very small buffer sizes.

The process() code of a LADSPA plugin usually does

for(i=0; i < numsamples; i++) {
    sample[i] = .....
}

This means that chaining two LADSPA plugins (at source level) results in:

for(i=0; i < numsamples; i++) {
    sample[i] = ....  (DSP algorithm 1)
}
for(i=0; i < numsamples; i++) {
    sample[i] = ..... (DSP algorithm 2)
}

In linuxsampler it would be better to process everything in a single pass, e.g.:

for(i=0; i < numsamples; i++) {
    sample[i] = ....  (DSP algorithm 1)
    sample[i] = ..... (DSP algorithm 2)
}

So while being able to chain LADSPA plugins at source level is still very useful, I would opt to also provide "native" signal processing units like adders, multipliers, filters etc., because they can be inlined without requiring another loop over the whole buffer.

I think sample-by-sample inlining increases performance quite a bit in the case of very small buffer sizes (my usual 32-sample buffer test case). What do you think? How about caching issues? From the point of view of the cache, is sample-by-sample inlining preferable to LADSPA-style chaining?

An additional issue with LADSPA is that it is only suitable for audio processing, not for MIDI (or event)-triggered audio generation. This means we need internal audio unit generators anyway. It would be cool to modularize everything and let almost all of the audio rendering code live in the audio unit source files. That way, as an advanced audio hacker you can easily fire up your favorite editor and tweak the modules (or copy and modify them) to suit your needs.

The only thing that is IMHO tied to the disk streaming engine is the sample looping part, since both the audio and the disk thread need to be aware of the looping information. I'm not a big hardware sampler expert, but since we want flexibility I thought about using a linked list of loop points in order to allow arbitrary looping, e.g. a list of (startpoint, endpoint) pairs where the individual looped parts of the sample do not need to be interconnected with each other. (With this kind of looping you could even load one single sample that contains the sounds of a whole drumset and trigger them in such a way as to form a drumloop :-) )

Regarding the enveloping, I made some tests and I think I'd opt for the same principle as the looping stuff: a list of linear or second-order segments, each with a starting point and a dx value (or, in the second-order case, an additional ddx value). This would allow applying arbitrary envelope curves to parameters (volume, filter frequencies) with low CPU and memory usage. Of course, if you only want to provide simple ADSR support, you can easily generate an appropriate envelope table for it.

Since a sample is comprised of attack, looped (in our case we can have many looping segments) and release phases, we could have different envelope curves for each phase. This would mean that the envelope tables get switched when we go from one phase to the next. This could be useful for things like vibrato effects, where the vibrato is applied only during the loop and release phases but not in the attack phase. (The same applies to filter frequencies, pitch etc. ... I think one could create pretty cool sounds with these things alone.)

Regarding GUIs: I agree with Juan that we need to decouple the sampler and the GUI completely (he even proposed using a TCP socket so that you can remote control the sampler from another machine). I'd go with shared memory/local IPC, but if you say TCP has advantages, then let's go for it.

I think over the long term the GUI is important, especially once "what you hear is what you get" editing is implemented, i.e. where you can edit samples and sounds without resorting to an external editor that saves the sample to a file which the sampler is then forced to reload into memory.

Here I guess that Josh Green (the author of smurf/swami) would be very helpful in that area.

cheers,
Benno