From: Vladimir S. <ha...@so...> - 2004-04-11 17:49:36
|
Mark,

While I agree that writing audio to the same drive the gigs reside on is not a very good idea, this does not mean that the only way out is several machines. Sending data to the other machine is work too, and may even be more work. For example, writing data to the sound or ethernet card may well be a lot more work compared to writing the same data to the hard drive, depending on how good your hardware and drivers are. (For instance, if you have a hardware RAID on 64-bit-wide PCI running at 133MHz, with several good SCSI drives hooked up to it in a RAID0 configuration, we'll probably find that writing to that is a lot faster, with a lot less overhead, than to any soundcard I know of.) Take a look at some good server motherboards: they have several PCI buses running at different speeds and widths, so you could have hard drive(s) on one disk/adapter/bus for random access and audio output on another (perhaps even with a separate CPU the audio output thread is attached to, or with IRQ affinity attaching certain IRQs to certain CPUs). In other words, there are multiple possibilities when it comes to performance, optimizations, etc., and sending data to another machine is not always the best solution.

When it really comes to optimization, there are lots of things we will need to do and lots of tools to use: profilers and analyzers of different types, etc. I've used Intel's SHG2 board in one project and it has been a pretty good experience overall. When it comes to PCI there are some pretty good PCI analyzers out there as well :)

Regards,
Vladimir.

Mark Knecht wrote:
> <SNIP>
> 1) I do not believe that the *full* potential of LS's capabilities, even if
> limited to only 16 gigs, will ever be realized running LS on the same
> machine as Ardour (or any other hard disk recording program) for any
> standard hardware configuration. The use of the hard disk is too different
> between Ardour doing essentially linear access and LS doing random access.
> Add in conflicts on the PCI bus, etc., and I think LS must be on its own
> machine to really do 16 gigs and greater than 100-note polyphony. After
> all, 100 notes is only 6-7 notes per gig when using 16 gigs. (Keep in mind
> that even GSt 2.5 handles up to 64 gigs loaded at the same time.)
> <SNIP> |
|
From: Mark K. <mar...@co...> - 2004-04-11 17:01:41
|
On Sat, 2004-04-10 at 08:12, be...@ga... wrote:
<SNIP>
> You see 16 midi channels and can assign a GIG sample to each channel.
<SNIP>
> This makes LS easy to use.
> About the audio routing: I agree with Mark some internal routing is
> needed, and for now jack port export for every sampler midi channel,
> requiring the user to do the routing by himself, is simply too complex
> and probably not that efficient.
> (ok in a later release we can add it since there will be people wanting
> to play a full performance but only record certain channels into ardour etc.)
<SNIP>

Benno and (by reference to another email) Vladimir,

Thanks for the comments. I guess we agree in many ways. I'll point out that I clearly understand that my comments are based pretty heavily on using GSt for a few years now, and I come from both its limitations and its strengths. That said, I'll make a couple of points that I didn't make before, but I think they are topical to the comments above. The point here is that if you have 16 gigs loaded, you'll need an audio mixer of some type, as most people won't have enough hardware to get all of that audio to the hard disk recorder as separate channels.

1) I do not believe that the *full* potential of LS's capabilities, even if limited to only 16 gigs, will ever be realized running LS on the same machine as Ardour (or any other hard disk recording program) for any standard hardware configuration. The use of the hard disk is too different between Ardour doing essentially linear access and LS doing random access. Add in conflicts on the PCI bus, etc., and I think LS must be on its own machine to really do 16 gigs and greater than 100-note polyphony. After all, 100 notes is only 6-7 notes per gig when using 16 gigs. (Keep in mind that even GSt 2.5 handles up to 64 gigs loaded at the same time.)

2) If we have a sampler that holds 16 gigs, and gigs are by default stereo, then we are talking about 32 individual audio channels. If LS is on a separate machine, then we are talking about 4 ADAT cables to wire the machines together if you wanted to record all of this audio in one pass. That's a lot of money to do something that you can do effectively in multiple passes later. What I do with GSt is to mix 16 or more gigs together into (generally) 4 stereo pairs (percussion, strings, synths, other) and run them into Pro Tools while I compose. I don't need all 32 audio channels separate while I write. When I'm done I then record up to 4 instruments at a time, keeping all the audio as separate as I need to. Sometimes I'll mix a bunch of strings together into a stereo group and save tracks in Pro Tools.

Hope this helps make a bit of sense out of my views.

Thanks,
Mark |
|
From: Vladimir S. <ha...@so...> - 2004-04-10 16:47:40
|
Hi,

One may argue that in some configurations sample files will be loaded on the same workstation where the GUI is running, not on the dedicated sampler machine. The sampler machine may not even have any means (CD, DVD, etc.) to load files onto its internal hard drive. So this may mean that LS should not just provide directory listing but in fact a lot more functions, making it yet another file server. All kinds of issues will come up: security, efficiency, etc. People have spent years writing file servers. At this point one might argue that file management is above and beyond what a sampler should do. I'm beginning to suspect that it might be easier to work with existing file-serving methods, meaning it is the client's responsibility to do its own directory listing. If the client (GUI) happens to be running on the same PC, then great, no problems. If it is running on another PC, let's make them mount the directory where sample files should go; whatever method they choose (samba, nfs, etc.) is not up to us. Then we don't have to manage their permissions. If several GUIs from several users on several workstations are using one (or more) LS, they'll not have to argue about who has permission to read/write/modify whose sample files. It will all be the file server's responsibility.

I am, however, not so sure about the audio output "stuff". I know that users on the GUI side would like to have a drop-down list of available audio to work with instead of having to type something like "alsa card 123 channel 1". So, should LS provide a list of all possible outputs? How will it handle stuff that changes underneath it (like jack's outputs appearing and disappearing)? I'm thinking the GUI probably sends a request to list the current "stuff" and then, if it feels like doing a refresh, does it again. It may feel like it if certain errors happen or if the user clicks a button . . .

Regards,
Vladimir.

----- Original Message -----
From: <be...@ga...>
To: <lin...@li...>
Sent: Saturday, April 10, 2004 11:12 AM
Subject: [Linuxsampler-devel] LS future, multiengine, audio channels

> Hi,
> what Christian said makes sense (both the multi-engine stuff, audio channel
> allocation, etc).
>
> The important thing is that the LS server provides all these functions to
> the client via LSCP, plus file management, e.g. the LS server exports a dir
> (and all its subdirs) and the client can browse and load samples from it
> using LSCP. Only with that feature can you provide true remote
> client/server operation.
> <SNIP> |
|
From: <be...@ga...> - 2004-04-10 15:12:02
|
Hi,

what Christian said makes sense (both the multi-engine stuff, audio channel allocation, etc).

The important thing is that the LS server provides all these functions to the client via LSCP, plus file management, e.g. the LS server exports a dir (and all its subdirs) and the client can browse and load samples from it using LSCP. Only with that feature can you provide true remote client/server operation.

Regarding the thoughts that Mark expressed: I agree with him, we need to hide the complexity of LS behind an easy-to-use GUI. Assume we provide only GIG playback for now: for the beginning, a GUI like I proposed (load & play, sent the sources to Rui some time ago) is probably the best. You see 16 midi channels and can assign a GIG sample to each channel. You can tweak some parameters like volume, pan, reverb, chorus (but LS should respond to midi CC too, and the GUI should auto-update the values/sliders via LSCP).

This makes LS easy to use. About the audio routing: I agree with Mark some internal routing is needed, and for now jack port export for every sampler midi channel, requiring the user to do the routing by himself, is simply too complex and probably not that efficient. (OK, in a later release we can add it, since there will be people wanting to play a full performance but only record certain channels into ardour etc.)

So what I propose is the following (to resemble a standard MIDI sampler or sound module): all the voices of a sample on the same MIDI channel get downmixed onto a stereo bus which allows the following parameters: volume, panorama, reverb, chorus. For the reverb and chorus only a single instance of the FXes is needed, since we will have reverb and chorus send levels for each midi channel. For reverb and chorus I propose LADSPA, which makes it easy to change the type of FX, add more, etc. As default I'd load freeverb (or gverb?) for the reverb and swh-chorus for the chorus. (swh, something that does not tax the CPU too much but sounds good; what was that plugin again? 4-voice chorus?)

cheers,
Benno
http://www.linuxsampler.org |
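To make the bus proposal above concrete, here is a minimal C++ sketch. This is not LinuxSampler code; the names are invented and the pan law is simplistic. It only illustrates one stereo bus per MIDI channel with volume/pan plus per-channel send levels feeding a single shared reverb instance.

    #include <cstddef>
    #include <vector>

    // One stereo bus per sampler MIDI channel (hypothetical interface).
    struct StereoBus {
        float volume = 1.0f;
        float pan    = 0.0f;        // -1 = hard left .. +1 = hard right
        float reverbSend = 0.2f;    // per-channel send levels, as proposed
        float chorusSend = 0.1f;
        std::vector<float> left, right;

        explicit StereoBus(std::size_t frames)
            : left(frames, 0.0f), right(frames, 0.0f) {}

        // Downmix one voice's stereo output onto the bus with volume and pan.
        void AddVoice(const float* l, const float* r, std::size_t frames) {
            const float gl = volume * (pan > 0.0f ? 1.0f - pan : 1.0f);
            const float gr = volume * (pan < 0.0f ? 1.0f + pan : 1.0f);
            for (std::size_t i = 0; i < frames; ++i) {
                left[i]  += gl * l[i];
                right[i] += gr * r[i];
            }
        }
    };

    // A single shared reverb instance: every bus contributes send-scaled
    // audio to the FX input, instead of one reverb per channel.
    void FeedSharedReverb(const std::vector<StereoBus>& buses,
                          std::vector<float>& fxInL, std::vector<float>& fxInR) {
        for (const StereoBus& b : buses)
            for (std::size_t i = 0; i < fxInL.size(); ++i) {
                fxInL[i] += b.reverbSend * b.left[i];
                fxInR[i] += b.reverbSend * b.right[i];
            }
    }

The design point is the last function: because the reverb amount is a per-channel send level, one LADSPA reverb instance can serve all sampler channels.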
|
From: Christian S. <chr...@ep...> - 2004-04-10 11:07:29
|
On Saturday, 10 April 2004 at 12:35, Christian Schoenebeck wrote:
> creates mix channels for the engine instead. So assuming the engine needs 4
> channels (Alsa offers only 2), the Alsa audio output device will then

Oh, and to avoid getting mail-bombed by you: of course Alsa supports more than two channels (depending on the actual hardware); this was just meant as a scenario. Excuse my inaccuracy!

CU
Christian |
|
From: Christian S. <chr...@ep...> - 2004-04-10 10:47:40
|
Ok, regarding audio channels my idea is the following (remember that some engines only know the number of audio channels they need once they have loaded an instrument): again, when you start LS there's nothing. You have to create a virtual audio output device (Alsa, Jack, CoreAudio, VSTi, ...). You can also create more than one audio output device of course (even one device for Alsa, one for Jack, one for VSTi, ...). You add a sampler channel, load an engine (Gig or DLS or Akai or ...) for that sampler channel, connect a (created) audio output device to that sampler channel and load an instrument.

Now that the engine knows how many audio channels it needs, it sends a message to the connected audio output device to inform it how many audio channels the engine needs. It's the responsibility of this virtual audio output device to create those output channels. For Jack this is easy: the Jack audio output device just increases or decreases the number of Jack ports. But in the case of Alsa, for example, it cannot simply create new physical output ports, so it creates mix channels for the engine instead. Assuming the engine needs 4 channels (and Alsa offers only 2), the Alsa audio output device will then create channels 3 and 4, which are actually only a copy of channels 1 and 2; that is, the engine's output on channels 3 and 4 will simply be mixed onto the Alsa device's output channels 1 and 2. Every audio output channel of an audio output device will have a description or something, so you can see in the frontend whether these channels are only mix channels or real channels.

This also means the frontend must be capable of managing those audio output devices: creating and destroying devices, and for example in the case of Alsa it would be nice if you could add further _real_ audio output channels, so you can really use more than 2 channels and they're not simply mixed onto channels 1 and 2. Of course we would have to extend LSCP for managing audio output devices.

Oh, and btw I will change the internal sample type (which is currently int16_t) to float. This is necessary for various DSP algorithms in future.

Comments? Doubts? Suggestions?

CU
Christian |
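A minimal sketch of the mix-channel idea, with an invented interface (this is not the actual LS audio output device API): the device hands out real channels while the hardware has them, and any further channel is a mix channel that is folded onto a real one after rendering.

    #include <cstddef>
    #include <vector>

    struct AudioChannel {
        std::vector<float> buffer;  // one fragment of rendered samples
        int mixTarget;              // -1 = real channel, else real channel index
        AudioChannel(std::size_t frames, int target)
            : buffer(frames, 0.0f), mixTarget(target) {}
    };

    class AudioOutputDevice {
    public:
        AudioOutputDevice(std::size_t realChannels, std::size_t frames)
            : realChannels_(realChannels), frames_(frames) {}

        // Called by an engine once it knows how many channels it needs.
        void AcquireChannels(std::size_t needed) {
            while (channels_.size() < needed) {
                std::size_t i = channels_.size();
                // With 2 real ports: index 2 folds into 0, index 3 into 1,
                // i.e. "channels 3 and 4 are a copy of channels 1 and 2".
                int target = (i < realChannels_)
                           ? -1
                           : static_cast<int>(i % realChannels_);
                channels_.emplace_back(frames_, target);
            }
        }

        // After the engine has rendered, fold every mix channel onto its
        // real channel, then hand the real buffers to the driver.
        void RenderDone() {
            for (AudioChannel& ch : channels_) {
                if (ch.mixTarget < 0) continue;  // real channel, nothing to do
                std::vector<float>& dst = channels_[ch.mixTarget].buffer;
                for (std::size_t s = 0; s < frames_; ++s)
                    dst[s] += ch.buffer[s];
            }
            // ...write channels_[0 .. realChannels_-1] to Alsa/Jack here...
        }

    private:
        std::size_t realChannels_, frames_;
        std::vector<AudioChannel> channels_;
    };

A Jack variant of the same interface would instead register a new Jack port in AcquireChannels() and never create mix channels.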
|
From: Vladimir S. <ha...@so...> - 2004-04-09 00:51:23
|
Hi guys, my 2 cents . . .

I'd like to keep the server side as simple as possible. I found that usually, if there is a config file, I need to figure out what the syntax is. If there are defaults, we need to agree on them. If there are command line options, we need to argue about them, keep them updated, etc, etc. All of it has to be documented, the documentation has to be updated, etc, etc. This is what we call a "support nightmare", or just a hassle (depending on how ugly it gets). We already have a language that is supposed to be good enough for any configuration (well, ok, maybe we don't have it yet, but we are working on it). So if the config file is nothing but those statements, I will argue that if a user really wanted to have those things in a file, he/she could just start a script that launches LS and netcats that file into it (see the sketch after this message). Plus, if we have the GUI configuring everything and perhaps even starting the backend, then why have a config file at all?

Command line options are perhaps needed for stuff that MUST happen during bootup of the backend. For example, if we wanted to limit how much memory it can allocate and lock, that should perhaps be configured via a command line option, to give administrators of samplers (and perhaps even developers who will develop and sell "hardware" samplers based on LS) some control over the resources of the system.

So I'd vote for almost no command line options (except startup stuff that would otherwise require a backend restart), no CLI, no config file and no default config. The GUI could have a "performance file" that holds everything, starting from which LS to connect to, how many of what to configure, and what to load where. If there is ever a need, we could create an autogenerated CLI interface from the same parser the server uses, complete with command completion, "?" and all that good stuff. I'm trying to create a router again?! No. NO CLI! :)

Regards,
Vladimir.

----- Original Message -----
From: "Christian Schoenebeck" <chr...@ep...>
To: <lin...@li...>
Sent: Wednesday, April 07, 2004 6:34 PM
Subject: [Linuxsampler-devel] Multi Channel & Multi Engine Support

> Hi!
>
> I'm currently working on this important feature (multiple sampler channels,
> allowing each sampler channel to use an independent engine, independent
> audio output and MIDI input, etc.). But this step means a lot of behavior
> decisions, so I wanted to hear the opinions of all of you on how LS should
> behave...
> <SNIP>
> Or do you think we should start with a default number of channels, default
> sampler channel, default audio output system, etc.? If yes, where should
> these default values come from? A config file?
>
> Another question is how we should continue with the command line interface
> to LS. Should we completely drop most command line parameters? Or should we
> spend a lot of our time developing a command line syntax for everything?
> <SNIP> |
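For illustration, a minimal C++ sketch of the "netcat a script into it" approach: a throwaway client that pushes a few LSCP lines to a running server over TCP. The port number and the exact command strings are assumptions for the example; the LSCP grammar was still being settled at this point in the thread.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        const char* script[] = {   // hypothetical LSCP startup commands
            "ADD CHANNEL\r\n",
            "LOAD ENGINE gig 0\r\n",
            "LOAD INSTRUMENT '/samples/piano.gig' 0 0\r\n",
        };

        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(8888);                    // assumed LSCP port
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  // local server

        if (connect(sock, (sockaddr*)&addr, sizeof(addr)) < 0) {
            perror("connect"); return 1;
        }
        for (const char* cmd : script)
            send(sock, cmd, strlen(cmd), 0);  // fire and forget; a real client
                                              // would read the OK/ERR replies
        close(sock);
        return 0;
    }

This is functionally the same as "nc localhost 8888 < startup.lscp", which is the point Vladimir is making: the wire protocol already is the configuration language.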
|
From: Mark K. <mar...@co...> - 2004-04-08 16:17:13
|
Christian,

Hello. It's great to hear from you, and exciting to hear that work is taking place in this area. I'll give my 2 cents worth of input, for what it's worth. Overall, I'd preface the comments below by saying that I'd most enjoy using LS if it presented itself as a GUI-based application to me. Maybe what the GUI looks like, or what the GUI does, could be defined in a config file somewhere, but when I type 'linuxsampler' at the command line I hope a GUI will present itself on my screen.

1) I have no personal interest today in any sampler engine other than one that plays gig format. I do own lots of sound fonts and some Akai samples, but I don't use them at all. For me, today, there is nothing but gig format. Nominally I start all my work with 6-8 gig files loaded at the same time and generally, by the time I'm finished with a piece of work, have anywhere from 12-16 gig files loaded. For this reason I'd prefer that at least one version of the default, GUI-based front-end of LinuxSampler handle 1 MIDI channel mapped to 1 stereo gig file, 16 channels wide to start. That's my normal workspace today. I don't want to go backward.

2) I sometimes go beyond 16 channels, so I use two MIDI interfaces for that: MIDI-A-0:15 going to one group of 16 gig files, and MIDI-B-0:15 going to a second group of 16 gig files. I admit I don't use this often, since the version of GSt I own limits me to 96 voices. Since GSt is stereo, this is only 48 notes, so it's not hard to get there with just 16 MIDI channels. I hope that LS works well enough to allow me to go far beyond that, but so far there has been no opportunity to test the code with more than one MIDI channel at a time, so it's hard to know how well it will use the hardware resources at this time.

3) I never use 2 MIDI channels to drive a single gig file. I *think* that there is no overhead to just loading the same gig file into two GSt channels. That sampler can use the same loaded samples of a piano, for instance, on both channel 1 and channel 2 without loading extra audio samples into memory. It's no different than receiving two note-on events on MIDI channel 1 for middle C, or receiving one on channel 1 and one on channel 2.

4) I do, at times, use a single MIDI channel to drive multiple gig files. This is an area that is of interest to many people using orchestral libraries. It would be really excellent to have some MIDI event mapping capabilities built within LinuxSampler to handle things like key splits, velocity curve remapping, note-on/note-off event mapping, etc. (see the sketch after this message). I lobbied on Linux-Audio-User in the last few days that this would be better placed in a single application outside of each synth, but I received not a single bit of agreement for the idea. The general Linux Audio community seems more comfortable with every synth implementing its own set of features in this area. For that reason I'll work with you later, if you're interested, to define what some of these features could look like.

5) As for the audio channel issue, it seems to me that there needs to be some sort of audio mixer similar to what's implemented in GSt. You'll need a small group of sub-mix buses for generalized reuse of common plugins, like reverb, or even just for mixing similar audio gigs, like multiple violin libraries on separate MIDI channels, into a single violin audio stream. Something in LS needs to do at least a minimal amount of audio mixing & mapping, in my opinion. I don't think it's very user friendly to have every audio channel's output be a separate Jack output that has to get messed with outside of LS. I'm also not sure how well this will work when we look at remote VSTi implementations as viewed from a Windows environment, etc.

At this point I've likely overstayed my welcome on your CRT, so I'll sign off for now. As always, I'm around.

With best regards,
Mark

On Wed, 2004-04-07 at 15:34, Christian Schoenebeck wrote:
> Hi!
>
> I'm currently working on this important feature (multiple sampler channels,
> allowing each sampler channel to use an independent engine, independent
> audio output and MIDI input, etc.). But this step means a lot of behavior
> decisions, so I wanted to hear the opinions of all of you on how LS should
> behave...
> <SNIP> |
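As a sketch of the MIDI event mapping Mark describes in point 4 (invented names, not a proposal for an actual LS interface): a key-split routing function plus a velocity-curve remap applied before events reach a sampler channel.

    #include <cmath>
    #include <cstdint>

    struct NoteOn { uint8_t channel, key, velocity; };

    // Key split: route low keys to sampler channel 0, high keys to channel 1.
    int RouteBySplit(const NoteOn& ev, uint8_t splitPoint = 60 /* middle C */) {
        return (ev.key < splitPoint) ? 0 : 1;
    }

    // Velocity curve remap through an exponential curve; gamma < 1 boosts
    // soft playing, gamma > 1 compresses it.
    uint8_t RemapVelocity(uint8_t velocity, double gamma = 0.7) {
        double x = velocity / 127.0;
        double y = std::pow(x, gamma);
        return static_cast<uint8_t>(y * 127.0 + 0.5);  // round back to 0..127
    }

Whether this layer lives inside each synth or in one external MIDI router is exactly the debate Mark mentions; the functions themselves are the same either way.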
|
From: Rui N. C. <rn...@rn...> - 2004-04-08 01:08:49
|
Hi Christian,

I've been away for some time but now I'm back to LS once again. Did you miss me? :) OK. I think I have my liblscp thing, the C API interface to LSCP, almost ready. Check it out from http://www.rncbc.org/ls . Documentation level is still low, but it's getting doxygen'ed :)

The server side has been improved; it can now operate in either of two modes, as an option: single-threaded multiplex (one thread serves all clients) or multi-threaded (one thread per client); see the sketch after this message. Think that's what came out of our discussions one month ago, Chris? However, it is the client side that has been most completed, and a working GUI frontend is on the verge. So I guess I'll reply to your questions in this regard, and pose some others in the meantime.

> I'm currently working on this important feature (multiple sampler channels,
> allowing each sampler channel to use an independent engine, independent
> audio output and MIDI input, etc.). <SNIP> Or do you think we should start
> with a default number of channels, default sampler channel, default audio
> output system, etc.? If yes, where should these default values come from?
> A config file?

My proposal, quite obviously, is that LS should behave just like a server, and LSCP should be the native script language for it. Therefore LS should: 1) have a default configuration script file, with the proper LSCP commands to set up all the initial engines, channels and corresponding parameters (e.g. "~/.lscprc" for local and/or "/etc/lscp.conf" for global default configuration); 2) accept command line arguments as settings that override the default configuration; a startup configuration script file may also be given as a command line argument, and input from stdin should also be accepted. All scripting follows the uniform LSCP syntax.

> But we could also move this "default" settings task to the frontend <SNIP>
> Would that be ok?

The frontend talks to LS via a socket interface, purely as a client. It's my idea that one will have a configuration specific to each client instance, which shall be made persistent across frontend sessions. Any current LS server configuration state is supposed to be translated and saved as a corresponding LSCP script that can be used to restore the same state again in later sessions. Think this must be a trivial feature of any frontend that deserves its name :)

> Another question is how we should continue with the command line interface
> to LS. Should we completely drop most command line parameters? Or should we
> spend a lot of our time developing a command line syntax for everything?
> <SNIP>

See above. I think you can keep the current command line spec, while adding a new option for an initial configuration lscprc file; and I'll suggest that the current --server option is made always active by default, so dropping it is probably OK :)

> Another issue is that sampler engines will have different numbers of audio
> channels <SNIP> How should we handle that?
>
> Should one sampler channel even be controllable by multiple MIDI input
> systems simultaneously? Or should we simply use a 1:1 relationship (one
> sampler channel has exactly one MIDI input)?

I suppose that depends on the MIDI subsystem you're choosing, but it comes to mind that I find it a more realistic approach that one sampler channel maps to one MIDI channel, and OTOH one MIDI input port may feed several sampler channels.

> Again, we're just talking about the user's point of view <SNIP> So tell me
> please!

Now wondering... I was thinking about the GUI frontend having the option to start the LS server itself, in case it finds that there's none currently active. Of course this only applies when the frontend is on the same local machine as the server (localhost). Nuff said.

Cheers.
--
rncbc aka Rui Nuno Capela
rn...@rn... |
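For reference, a generic sketch of the single-threaded multiplex mode Rui mentions, with one select() loop serving every connected client. This is the textbook pattern, not liblscp's actual implementation, and the port is illustrative.

    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <algorithm>
    #include <vector>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr = {};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8888);   // assumed LSCP port
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, 8);

        std::vector<int> clients;
        for (;;) {
            fd_set fds; FD_ZERO(&fds);
            FD_SET(listener, &fds);
            int maxfd = listener;
            for (int c : clients) { FD_SET(c, &fds); maxfd = std::max(maxfd, c); }

            select(maxfd + 1, &fds, nullptr, nullptr, nullptr);  // wait for any

            if (FD_ISSET(listener, &fds))   // new client connected
                clients.push_back(accept(listener, nullptr, nullptr));

            for (auto it = clients.begin(); it != clients.end();) {
                if (FD_ISSET(*it, &fds)) {
                    char buf[512];
                    ssize_t n = recv(*it, buf, sizeof buf, 0);
                    if (n <= 0) { close(*it); it = clients.erase(it); continue; }
                    // ...parse buf as an LSCP command and send a reply here...
                }
                ++it;
            }
        }
    }

The multi-threaded variant replaces the select() loop with one blocking recv() loop per accepted client; the trade-off is simpler per-client code against thread-management overhead.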
|
From: Christian S. <chr...@ep...> - 2004-04-07 22:43:08
|
Hi!

I'm currently working on this important feature (multiple sampler channels, allowing each sampler channel to use an independent engine, independent audio output and MIDI input, etc.). But this step means a lot of behavior decisions, so I wanted to hear the opinions of all of you on how LS should behave...

My idea is that when you launch LS, there will be no sampler channel at all. You first have to add a sampler channel, then choose one of the available sampler engines for the new channel, choose an audio output system (Alsa, Jack, CoreAudio) for the new channel, choose a MIDI input system (Alsa Seq, MIDI over Jack, CoreMIDI) and finally load an instrument for that channel. You would of course have to repeat these steps when you add further sampler channels. Consider these steps to be done with a frontend. Or do you think we should start with a default number of channels, default sampler channel, default audio output system, etc.? If yes, where should these default values come from? A config file?

But we could also move this "default" settings task to the frontend; meaning LS will launch as I told above with no channel at all, and in your graphical frontend you will be able to define and load profiles with all your default settings, and the frontend will then send the corresponding LSCP commands to LS to initialize LS with your default settings (e.g. 16 channels, first four channels with the Akai engine, next seven channels with the Gig engine, using audio output system XYZ, etc.). Would that be ok?

Another question is how we should continue with the command line interface to LS. Should we completely drop most command line parameters? Or should we spend a lot of our time developing a command line syntax for everything? We may need to develop a parser for this; the simple getopt C lib (which we're currently using) won't be sufficient for that. Even if we decide to completely drop the command line parameters, you will still be able to control LS completely by script: you could e.g. use netcat to send an LSCP script to LS (via network / socket).

Another issue is that sampler engines will have different numbers of audio channels (e.g. Gig is always stereo, another engine might be only mono, and another engine might even have an arbitrary number of audio channels (DLS for example), depending on the currently loaded instrument). How should we handle that?

Should one sampler channel even be controllable by multiple MIDI input systems simultaneously? Or should we simply use a 1:1 relationship (one sampler channel has exactly one MIDI input)?

Again, we're just talking about the user's point of view: how everything should be controllable, behave, etc. So I hope to hear the opinions of ALL OF YOU! Maybe you even have a good idea of a feature we should consider at this point. So tell me please!

CU
Christian |
|
From: Christian S. <chr...@ep...> - 2004-03-30 14:22:41
|
On Monday, 29 March 2004 at 00:27, Vladimir Senkov wrote:
> After slacking for a few weeks I finally found some time to investigate
> intermittent crashes I was getting . . .
> It seems like they are more easily reproducible with more voices than
> the default of 64.
> Anyway, I think I was able to track it down to
> DiskThread::AskForCreatedStream() returning a SLOT_RESERVED Stream* in
> an error condition.
> So, I'm attaching a proposed patch for that.
diff -u -r linuxsampler/src/diskthread.cpp linuxsampler.vsenkov_mar28_2004/src/diskthread.cpp
--- linuxsampler/src/diskthread.cpp 2004-03-05 08:46:15.000000000 -0500
+++ linuxsampler.vsenkov_mar28_2004/src/diskthread.cpp 2004-03-28 17:12:58.355078656 -0500
@@ -164,9 +164,10 @@
if (pStream && pStream != SLOT_RESERVED) {
dmsg(4,("(yes created)\n"));
pCreatedStreams[StreamOrderID] = NULL; // free the slot for a new order
+ return pStream;
}
- else dmsg(4,("(no not yet created)\n"));
- return pStream;
+ dmsg(4,("(no not yet created)\n"));
+ return NULL;
}
Yes, this is actually a bug in the disk thread: AskForCreatedStream() could hand the SLOT_RESERVED sentinel back to its caller as if it were a valid Stream pointer. Hope this eliminates our segfault problem.
> I think I had one other crash a while ago but I haven't seen it on the
> latest LS from cvs. If I ever see it again I'll try to track it down.
Ok, good job Vladimir!
CU
Christian
|
|
From: Christian S. <chr...@ep...> - 2004-03-30 13:48:30
|
Changes:

* added Envelope Generator 2 and 3 (filter cutoff EG and pitch EG) for accurate .gig playback
* fixed accuracy of pitch calculation
* changed filter cutoff range to 100Hz..10kHz with an exponential curve; this value range can also be adjusted at compile time by setting FILTER_CUTOFF_MIN and FILTER_CUTOFF_MAX in src/voice.h to the desired frequencies
* src/lfo.h: the LFO is now generalized to a C++ template, which will be especially useful when we implement further engines

Note: filter code is still disabled by default; you have to explicitly enable it in src/voice.h by setting ENABLE_FILTER to 1, otherwise the filter code will be ignored at compile time.

CU
Christian |
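As an aside, an exponential mapping over the range above typically looks like the following sketch. The constant names mirror FILTER_CUTOFF_MIN/MAX from src/voice.h, but the exact formula LS uses is an assumption here; this is just the standard way to make equal controller steps correspond to equal pitch ratios.

    #include <cmath>

    const float FILTER_CUTOFF_MIN = 100.0f;    // Hz, per the changelog above
    const float FILTER_CUTOFF_MAX = 10000.0f;  // Hz

    // cc: MIDI controller value 0..127, mapped to a cutoff frequency in Hz.
    float CutoffFrequency(int cc) {
        float x = cc / 127.0f;  // normalize to 0..1
        return FILTER_CUTOFF_MIN *
               std::pow(FILTER_CUTOFF_MAX / FILTER_CUTOFF_MIN, x);
    }
    // e.g. cc=0 gives 100 Hz, cc=64 gives about 1 kHz, cc=127 gives 10 kHz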
|
From: Christian S. <chr...@ep...> - 2004-03-26 14:18:34
|
Hi! The root path of the LinuxSampler CVS repository has changed to /var/cvs/linuxsampler, thus: cvs -z3 -d:pserver:ano...@cv...:/var/cvs/linuxsampler co linuxsampler CU Christian |
|
From: Marek P. <ma...@na...> - 2004-03-23 21:42:13
|
On Tue, 2004-03-23 at 18:49, Christian Schoenebeck wrote:
> Hi!
>
> I just updated the features section on the LinuxSampler website with the
> current state of the CVS version of LS, including all planned features.
> If you are missing a feature, let me know!
>
> http://www.linuxsampler.org/features.html

http://www.linuxsampler.org/samplerformat.html is missing. :)

Marek

> CU
> Christian |
|
From: Christian S. <chr...@ep...> - 2004-03-23 17:53:45
|
Hi! I just updated the features section on the LinuxSampler website with the current state of the CVS version of LS, including all planned features. If you are missing a feature, let me know! http://www.linuxsampler.org/features.html CU Christian |
|
From: Christian S. <chr...@ep...> - 2004-03-21 16:30:37
|
On Tuesday, 16 March 2004 at 05:04, Daniel Macks wrote:
> On Mon, Mar 15, 2004 at 11:43:00PM +0100, Christian Schoenebeck wrote:
> > On Monday, 15 March 2004 at 05:31, Daniel Macks wrote:
> > > I just compiled libgig on OS X 10.3.2 (darwin-ppc/7.2.0, gcc 3.3).
> >
> > Good to hear! Just out of curiosity: does darwin provide Alsa?
>
> I doubt it (the "l" in Alsa is "Linux" :) But technically, Darwin uses
> CoreAudio and things built on top of it, not the usual Unix audio
> device /dev/audio or OSS soundcard.h.

But how did you compile LS? You currently need the Alsa headers and library at least for MIDI input when you compile LS. Or does darwin provide such a wrapper library that maps Alsa calls to the CoreAudio lib?

CU
Christian |
|
From: Christian S. <chr...@ep...> - 2004-03-21 16:19:10
|
Changes:

* Implemented all three low frequency oscillators (LFO1 = volume, LFO2 = filter cutoff frequency, LFO3 = pitch) for accurate .gig playback.

The only thing left regarding Giga's LFOs is the "LFO sync" feature.

CU
Christian |
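A rough sketch of what a templated LFO (as mentioned for src/lfo.h in the earlier changelog) could look like, with one template parameter selecting the modulation destination. The parameterization is invented for illustration; it is not the actual src/lfo.h design.

    #include <cmath>

    template <typename Destination>
    class LFO {
    public:
        LFO(float frequencyHz, float depth, float sampleRate)
            : phase_(0.0f), depth_(depth),
              increment_(kTwoPi * frequencyHz / sampleRate) {}

        // Advance one sample and apply the oscillation to a base value.
        float Render(float base) {
            phase_ += increment_;
            if (phase_ > kTwoPi) phase_ -= kTwoPi;
            return Destination::Apply(base, depth_ * std::sin(phase_));
        }

    private:
        static constexpr float kTwoPi = 6.283185307f;
        float phase_, depth_, increment_;
    };

    // Example destinations: LFO1 scales volume, LFO3 offsets pitch.
    struct VolumeDest { static float Apply(float v, float m) { return v * (1.0f + m); } };
    struct PitchDest  { static float Apply(float p, float m) { return p + m; } };

    // Usage: LFO<VolumeDest> tremolo(5.0f /*Hz*/, 0.1f /*depth*/, 44100.0f);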
|
From: Mark K. <mar...@co...> - 2004-03-16 18:16:23
|
On Tue, 2004-03-16 at 09:07, Steve Harris wrote:
> On Tue, Mar 16, 2004 at 07:49:58 -0800, Mark Knecht wrote:
> > Did I take white noise data on all the filter types? Did you somehow
> > make the impulse data work out OK?
>
> I extrapolated from the lowpass data; the bandpass and highpasses are
> guesses, but I think they will be pretty close. I also left the code quite
> open to tuning to make it match the GS filters sonically. That needs to be
> done by hand really.
>
> I could guess from the noise recording what kind of filters make GS go
> (looks like biquads), and that's the most important thing really.
>
> - Steve

Hey, if you're happy then I'm happy!

Thanks,
Mark |
|
From: Steve H. <S.W...@ec...> - 2004-03-16 17:07:29
|
On Tue, Mar 16, 2004 at 07:49:58 -0800, Mark Knecht wrote:
> Did I take white noise data on all the filter types? Did you somehow
> make the impulse data work out OK?

I extrapolated from the lowpass data; the bandpass and highpasses are guesses, but I think they will be pretty close. I also left the code quite open to tuning to make it match the GS filters sonically. That needs to be done by hand really.

I could guess from the noise recording what kind of filters make GS go (looks like biquads), and that's the most important thing really.

- Steve |
|
From: Mark K. <mar...@co...> - 2004-03-16 15:50:06
|
On Tue, 2004-03-16 at 06:31, Steve Harris wrote:
> > And finally all filter settings have to be fine, ehm coarse tuned, e.g.
> > I just have set the filter cutoff freq range to 20Hz..5kHz (linear).
>
> I /think/ I posted a rough mapping from GS MIDI values to frequency based
> on Mark's recordings, but I'm not sure when or what the subject might have
> been.
>
> - Steve

Boy, that was quite a while ago! In the back of my mind I was not sure we ever really finished the task of taking data. I sort of remember:

1) I got a bunch of impulse data for all the filter types. I recorded this over ADAT using Pro Tools.
2) You had some problems with that data, so you suggested I do it by passing white noise through the filter.
3) I *think* I only did that for one filter type, as I didn't want to take lots of data again and have it be useless.
4) My mind fell into another state of disrepair and I don't remember anything happening after that.

Did I take white noise data on all the filter types? Did you somehow make the impulse data work out OK? Dunno...

- Mark |
|
From: Steve H. <S.W...@ec...> - 2004-03-16 14:31:06
|
On Tue, Mar 16, 2004 at 02:55:10 +0100, Christian Schoenebeck wrote:
> I decided to disable the filter code by default currently, because the
> algorithm currently depends on heavy functions (sin, cos, sinh).
> We have to fix that. Steve suggested to replace those standard
> libm calls by more efficient ones e.g. from ladspa.h, but I'm still
> wondering if we can reconstruct the algorithm by using some
> discretisation (numerical solution), because cutoff and resonance
> usually changes only slightly.

Actually ladspa-util.h: it's included with my plugins. I doubt that you'll find you can calculate coefficient differentials any more efficiently than the discrete values, but I'd like you to prove otherwise :)

> And finally all filter settings have to be fine, ehm coarse tuned, e.g.
> I just have set the filter cutoff freq range to 20Hz..5kHz (linear).

I /think/ I posted a rough mapping from GS MIDI values to frequency based on Mark's recordings, but I'm not sure when or what the subject might have been.

- Steve |
|
From: Christian S. <chr...@ep...> - 2004-03-16 14:02:37
|
Changes:

* added filters (lowpass, bandpass and highpass); note that filter code is currently disabled by default, you have to explicitly enable it in src/voice.h by setting the define ENABLE_FILTER to 1

There are some other defines in voice.h regarding the filter. E.g. you can force the filter always to be used, override the filter type, override the external resonance MIDI controller, override the external cutoff frequency MIDI controller, and set the frequency with which the filter parameters (cutoff / resonance) will be updated.

I decided to disable the filter code by default for now, because the algorithm currently depends on heavy functions (sin, cos, sinh). We have to fix that. Steve suggested replacing those standard libm calls with more efficient ones, e.g. from ladspa.h, but I'm still wondering if we can reconstruct the algorithm by using some discretisation (numerical solution), because cutoff and resonance usually change only slightly.

And finally all filter settings have to be fine, ehm coarse tuned; e.g. I just have set the filter cutoff freq range to 20Hz..5kHz (linear).

* src/eg_vca.cpp: Decay_1 stage now using an exponential curve

http://www.linuxsampler.org/doc/engines/gig/eg1.pdf
http://www.linuxsampler.org/doc/engines/gig/eg1.scd

CU
Christian |
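For context, the kind of per-update cost being discussed: a lowpass biquad coefficient computation in the style of the RBJ "Audio EQ Cookbook" (sin/cos appear here; sinh enters when resonance is specified as a bandwidth). This is a standard formulation shown as an assumption, not the code from src/voice.h.

    #include <cmath>

    struct BiquadCoeffs { float b0, b1, b2, a1, a2; };

    BiquadCoeffs LowpassCoeffs(float cutoffHz, float q, float sampleRate) {
        float w0    = 2.0f * 3.14159265f * cutoffHz / sampleRate;
        float cosw0 = std::cos(w0);               // the transcendental calls
        float alpha = std::sin(w0) / (2.0f * q);  // whose cost is at issue
        float a0    = 1.0f + alpha;

        BiquadCoeffs c;
        c.b0 = (1.0f - cosw0) * 0.5f / a0;
        c.b1 = (1.0f - cosw0) / a0;
        c.b2 = c.b0;
        c.a1 = -2.0f * cosw0 / a0;
        c.a2 = (1.0f - alpha) / a0;
        return c;
    }
    // Recomputing coefficients every N samples, rather than every sample, is
    // one way to amortize these calls, since cutoff and resonance change only
    // slightly between updates.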
|
From: Daniel M. <dm...@ne...> - 2004-03-16 04:04:52
|
On Mon, Mar 15, 2004 at 11:43:00PM +0100, Christian Schoenebeck wrote: > Es geschah am Monday 15 March 2004 05:31 als Daniel Macks schrieb: > > I just compiled libgig on OS X 10.3.2 (darwin-ppc/7.2.0, gcc 3.3). > > Good to hear! Just out of curiosity: does darwin provide Alsa? I doubt it (the "l" in Alsa is "Linux":) But technically, Darwin uses CoreAudio and things built on top of it, not the usual Unix audio device /dev/audio or OSS soundcard.h. > > Two things: > > > > First, even when I './configure --enable-shared=yes' and the output > > says: > > > > checking whether to build shared libraries... yes > > > > only static (.a) libraries are produced. There's no .la file produced > > either, suggesting libtool isn't getting called? > > Sorry, I don't have much time currently. I hope to find some time for your > problem tomorrow. Which part of LS do you want to compile / link dynamically > and why? Not urgent at all. I just made the libgig library available via the Fink package manager and noticed these things in the process. If one were writing a program to use libgig, having a dynamic library means one doesn't have to recompile the program every time one recompiles libgig, and it makes these program binaries smaller since each does not need to include the whole libgig code. Autoconf (and related programs) make it easy to set this up in a platform-independent manner. > > Second, why doesn't 'make install' install the .h headers? Having a .a > > library is pretty useless without a defined interface. > > Sorry I haven't taken care yet about placing the header files correctly. Many > are just placed in noinst. But thats trivial and quick to fix. > > Perhaps somebody else can help you in the meantime... As I said, no hurry. I just manually install the .h. And unix compilers don't care what kind of lib is available (as long as *some* kind of library is available:). dan -- Daniel Macks dm...@ne... http://www.netspace.org/~dmacks |
|
From: Christian S. <chr...@ep...> - 2004-03-15 23:00:42
|
On Monday, 15 March 2004 at 23:18, Christian Schoenebeck wrote:
> LinuxSampler, just replace "co linuxsampler" by "co libgig". Unfortunately
> cvs.linuxsampler.org is currently down due to construction works, but I

Correcting: cvs.linuxsampler.org is available again.

CU
Christian |