Message archive (posts per month):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2002 | | | | | | | | | | 27 | 120 | 16 |
| 2003 | 65 | 2 | 53 | 15 | | 19 | 8 | 35 | 17 | 70 | 87 | 94 |
| 2004 | 133 | 28 | 45 | 30 | 113 | 132 | 33 | 29 | 26 | 11 | 21 | 60 |
| 2005 | 108 | 153 | 108 | 44 | 72 | 90 | 99 | 67 | 117 | 38 | 40 | 27 |
| 2006 | 16 | 18 | 21 | 71 | 26 | 48 | 27 | 40 | 20 | 118 | 69 | 35 |
| 2007 | 76 | 98 | 26 | 126 | 94 | 46 | 9 | 89 | 18 | 27 | | 49 |
| 2008 | 117 | 40 | 18 | 30 | 40 | 10 | 30 | 13 | 29 | 23 | 22 | 35 |
| 2009 | 19 | 39 | 17 | 2 | 6 | 6 | 8 | 11 | 1 | 46 | 13 | 5 |
| 2010 | 21 | 3 | 2 | 7 | 1 | 26 | 3 | 10 | 13 | 35 | 10 | 17 |
| 2011 | 26 | 27 | 14 | 32 | 8 | 11 | 4 | 7 | 27 | 25 | 7 | 2 |
| 2012 | 20 | 17 | 59 | 31 | | 6 | 7 | 10 | 11 | 2 | 4 | 17 |
| 2013 | 17 | 2 | 3 | 4 | 8 | 3 | 2 | | 3 | | | 1 |
| 2014 | 6 | 26 | 12 | 14 | 8 | 7 | 6 | 6 | 3 | | | |
| 2015 | 9 | 5 | 4 | 9 | 3 | 2 | 4 | | | 1 | | 3 |
| 2016 | 2 | 4 | 5 | 4 | 14 | 31 | 18 | | 10 | 3 | | |
| 2017 | 39 | 5 | 2 | | 52 | 11 | 36 | 1 | 7 | 4 | 10 | 8 |
| 2018 | 3 | 4 | | 8 | 28 | 11 | 2 | 2 | | 1 | 2 | 25 |
| 2019 | 12 | 50 | 14 | 3 | 8 | 17 | 10 | 2 | 21 | 10 | | 28 |
| 2020 | 4 | 10 | 7 | 16 | 10 | 7 | 2 | 5 | 3 | 3 | 2 | 1 |
| 2021 | | 5 | 13 | 13 | 7 | | 1 | 11 | 12 | 7 | 26 | 41 |
| 2022 | 23 | | 8 | 1 | | | | 2 | | 3 | 1 | 1 |
| 2023 | | 5 | 2 | | | 1 | | 11 | 5 | 1 | | |
| 2024 | 2 | 4 | 1 | 1 | 1 | 1 | | | | | 10 | |
| 2025 | | 4 | 1 | 2 | | 17 | 1 | 4 | 7 | 1 | 7 | |
From: Robert J. <rob...@da...> - 2003-11-12 21:32:03

Hi,

> The solution proposed by Robert is IMHO too messy because it requires
> lots of inputs and doing it yourself is a bit of a mess and you risk
> to get not the same good trigger to MIDI-velocity quality and trigger
> speed as with this device.

I'm confused: which solution have I proposed that is risky? The usb-midi stuff, perhaps?

> Robert, start buying the hardware pieces, the LS site is waiting for
> the pics, videos and audio demos you promised :-)
>
> PS: To Robert, Mark and others, when you press reply all on your mail
> application please remove the name of the sender (in particular mine :-)

Duly noted. Actually, most of the time I leave the name there on purpose... I guess that doesn't serve much of a purpose...

/Robert

> ) because the sender always gets two mails if you don't do that.
> Some say the reply-to settings can be set on SF.net but I do not have
> the LS SF.net account's password handy.
>
> cheers,
> Benno
>
> -------------------------------------------------
> This mail sent through http://www.gardena.net
>
> -------------------------------------------------------
> This SF.Net email sponsored by: ApacheCon 2003,
> 16-19 November in Las Vegas. Learn firsthand the latest
> developments in Apache, PHP, Perl, XML, Java, MySQL,
> WebDAV, and more! http://www.apachecon.com/
> _______________________________________________
> Linuxsampler-devel mailing list
> Lin...@li...
> https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel
From: <be...@ga...> - 2003-11-12 17:56:12

Robert Jonsson <rob...@da...> writes:

> > > This signal is connected to what normally is called the 'brain'. A
> > > bunch of analogue inputs and a sampleplayer engine.
> > > Normally you use the brain's built-in sounds; they do however
> > > (normally) suck :-/... at least for the cheaper brains.
> > > The brain also has MIDI out; this is what we were going to
> > > try and utilize for this "experiment".
> >
> > Are there any sanely priced "brains" without some crappy sampleplayer?
> > Just a basic trigger->MIDI converter...
>
> I actually have no idea, I'll ask my brother...

I think this is what you are looking for:

Roland TMC6 Trigger to MIDI Converter, 6 drumpad trigger inputs, one MIDI out
http://www.zzounds.com/item--ROLTMC6
Cost: $239

The solution proposed by Robert is IMHO too messy because it requires lots of inputs, and doing it yourself is a bit of a mess: you risk not getting the same trigger-to-MIDI-velocity quality and trigger speed as with this device.

Robert, start buying the hardware pieces; the LS site is waiting for the pics, videos and audio demos you promised :-)

PS: To Robert, Mark and others, when you press reply-all in your mail application please remove the name of the sender (in particular mine :-) ), because the sender always gets two mails if you don't do that. Some say the reply-to settings can be set on SF.net, but I do not have the LS SF.net account's password handy.

cheers,
Benno
From: Mark K. <mar...@co...> - 2003-11-12 15:55:39

On Wed, 2003-11-12 at 05:16, David Olofson wrote:
> On Wednesday 12 November 2003 13.52, Robert Jonsson wrote:
> [...]
> > This signal is connected to what normally is called the 'brain'. A
> > bunch of analogue inputs and a sampleplayer engine.
> > Normally you use the brain's built-in sounds; they do however
> > (normally) suck :-/... at least for the cheaper brains.
> > The brain also has MIDI out; this is what we were going to
> > try and utilize for this "experiment".
>
> Are there any sanely priced "brains" without some crappy sampleplayer?
> Just a basic trigger->MIDI converter...

I bought my Alesis DM-5 a couple of years ago on eBay for about $200. It has 12 drum pad inputs and produces MIDI output. It has a lot of built-in sounds which I seldom use.

I got a used 6-pad drumKat, an older funky square design, for about $150 I think. One of the guys on the Pro Tools site at DigiDesign offered it to me privately.

It's a gas to play around with this stuff, and even though I'm no drummer I can usually get much better patterns this way than programming MIDI by hand in Pro Tools. The downside is I've got these big cables lying around whenever I hook this stuff up, so it's ugly and I trip a lot, but it's fun.

- Mark
From: Robert J. <rob...@da...> - 2003-11-12 13:28:18

On Wednesday 12 November 2003 14.16, David Olofson wrote:
> On Wednesday 12 November 2003 13.52, Robert Jonsson wrote:
> [...]
> > This signal is connected to what normally is called the 'brain'. A
> > bunch of analogue inputs and a sampleplayer engine.
> > Normally you use the brain's built-in sounds; they do however
> > (normally) suck :-/... at least for the cheaper brains.
> > The brain also has MIDI out; this is what we were going to
> > try and utilize for this "experiment".
>
> Are there any sanely priced "brains" without some crappy sampleplayer?
> Just a basic trigger->MIDI converter...

I actually have no idea, I'll ask my brother...

> > Another way would be to use a soundcard with multiple inputs and
> > write a trigger application, but size and price would go up quite a
> > lot :)
>
> OTOH, do you really need full audio quality ADCs for this? MIDI has
> only 7 bits of resolution for velocity, and even an 8 bit ADC can
> deliver a lot more than that when analysing an audio rate signal.
> Also, a sample rate that's sufficient to achieve the desired
> transient response time should be enough - which probably means
> significantly less than 48 kHz.
>
> Not sure about how much and what kind of information is useful, but I
> suspect one might get away with even lower sample rates if part of
> the job is done in the analog domain. Half-wave rectifier (or full
> wave, for faster and more accurate response - but that's 3 more
> diodes! ;-) + simple RC LPF, followed by an analog MUX and a single
> ADC? You could easily drive that off the parallel port. (Or the ISA
> bus, if you manage to find one these days. ;-)

Though all of the above is true, it's significantly more advanced than what we were considering at the moment. Version 3.0 can perhaps omit the brain entirely, preferably by hardwiring an ISA bus (muhahahaha), or not. :)

/Robert
From: David O. <da...@ol...> - 2003-11-12 13:16:23

On Wednesday 12 November 2003 13.52, Robert Jonsson wrote:
[...]
> This signal is connected to what normally is called the 'brain'. A
> bunch of analogue inputs and a sampleplayer engine.
> Normally you use the brain's built-in sounds; they do however
> (normally) suck :-/... at least for the cheaper brains.
> The brain also has MIDI out; this is what we were going to
> try and utilize for this "experiment".

Are there any sanely priced "brains" without some crappy sampleplayer? Just a basic trigger->MIDI converter...

> Another way would be to use a soundcard with multiple inputs and
> write a trigger application, but size and price would go up quite a
> lot :)

OTOH, do you really need full audio quality ADCs for this? MIDI has only 7 bits of resolution for velocity, and even an 8 bit ADC can deliver a lot more than that when analysing an audio rate signal. Also, a sample rate that's sufficient to achieve the desired transient response time should be enough - which probably means significantly less than 48 kHz.

Not sure about how much and what kind of information is useful, but I suspect one might get away with even lower sample rates if part of the job is done in the analog domain. Half-wave rectifier (or full wave, for faster and more accurate response - but that's 3 more diodes! ;-) + simple RC LPF, followed by an analog MUX and a single ADC? You could easily drive that off the parallel port. (Or the ISA bus, if you manage to find one these days. ;-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---
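David's velocity-resolution argument (a plain 8-bit ADC already exceeds MIDI's 7-bit velocity range) can be sketched in a few lines. This is a hypothetical illustration, not code from any project discussed here: it scans a short attack window of rectified trigger samples for its peak and halves it to get a full-range MIDI velocity.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Map the peak of a rectified 8-bit trigger signal (0-255) to a 7-bit MIDI
// velocity (0-127). The window is assumed to cover only the attack transient,
// which is why even a modest sample rate is sufficient.
uint8_t peakToVelocity(const std::vector<uint8_t>& window) {
    if (window.empty()) return 0;
    uint8_t peak = *std::max_element(window.begin(), window.end());
    return peak >> 1;  // drop one bit: 8-bit ADC range -> 7-bit velocity
}
```

A real trigger-to-MIDI converter would additionally apply a configurable velocity curve and a scan-time/retrigger-suppression window; the linear mapping here is the bare minimum.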
From: David O. <da...@ol...> - 2003-11-12 12:59:20

On Wednesday 12 November 2003 12.33, be...@ga... wrote:
> A question to Robert (and others):
> you said you want to build this LS drumkit expander.
> Nice, but the question is: what's the price of MIDI drumpads?
> Does it pay off to buy MIDI drumpads (how much do they cost?)

Whether it pays off depends on whether you need a MIDI kit or not. ;-)

> compared to traditional analogue drumkits?

You mean acoustic, or pads with analogue trigger outputs? :-)

> What are the advantages of using a MIDI drumkit?

Wider choice of sounds, including totally weird ones. Easy tuning. (No screws; unlimited range, settings can be stored and recalled, etc.) Doesn't change tuning and sound as humidity and temperature change. Superior to keyboards for recording percussion into a MIDI sequencer. You can practice all night without driving the whole block crazy. :-)

You mentioned some other possibilities. There are probably more, and some disadvantages as well. Obviously, you need power, and to play in a band (even in very small places) you need a power amplifier and speakers.

> Price?

I've seen kits in the $1500-$2500 range, but I haven't looked that carefully...

I'm actually quite interested in getting one, since recording drums on the keyboard is boring and doesn't feel right. Hammer action actually seems to make it worse. (You press piano keys, whereas you hit percussive instruments. Hammer action keys just hurt your fingers if you try to hit them, and "pressing" drums feels a bit odd. :-)

Oh well... I don't have the space or the cash right now, but it's never too early to start planning. ;-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---
From: Robert J. <rob...@da...> - 2003-11-12 12:55:23

Yo,

Thanks for the links, I'll take a peek.

What was especially interesting about the mini-box, however, was that it was a complete solution in a very small box :)... but if it's not good enough, it's not good enough...

On Wednesday 12 November 2003 12.33, be...@ga... wrote:
> A question to Robert (and others):
> you said you want to build this LS drumkit expander.
> Nice, but the question is: what's the price of MIDI drumpads?
>
> Does it pay off to buy MIDI drumpads (how much do they cost?) compared to
> traditional analogue drumkits?

My brother is the expert, so I might be wrong, but I think it works like this: the pads are all analogue. They just emit a signal when you poke at them. At least in theory you could use any kind of microphone; trigger microphones are of course the best choice.

This signal is connected to what normally is called the 'brain': a bunch of analogue inputs and a sampleplayer engine. Normally you use the brain's built-in sounds; they do however (normally) suck :-/... at least for the cheaper brains. The brain also has MIDI out; this is what we were going to try and utilize for this "experiment".

Another way would be to use a soundcard with multiple inputs and write a trigger application, but size and price would go up quite a lot :)

> I'm not an expert here, so I might be talking nonsense; that's why I'm
> asking :-)
> What are the advantages of using a MIDI drumkit?

As mentioned above, the word MIDI isn't really right here... MIDI is an important option though.

When you have a real drumkit you have lots of drawbacks: loud, hard to amplify, and so on... when they sound right they SOUND RIGHT though :)

An artificial drumkit is much easier to handle; we use it to play live. Amplification does not require any extra work, it's easy to handle, and almost nothing needs to be calibrated. It's a bit "plastic" though; sometimes that _is_ what you want ;). Sometimes the sounds just suck... but there are "canned" drumkits, like the renowned Drumkit From Hell; that is the option we are thinking of right now.

> Price?

In the same range as a normal drumkit... starting a bit higher, I suppose. Say 1000 EUR -> 10000 EUR.

> Easy switching of sounds (eg switching from a rock drumkit to a jazz
> drumkit, to an electronic drumkit etc), possibility of recording the
> events? Possibility to use the sampler module to provide not only drumkit
> voices but an additional instrument like eg. piano too (given enough CPU
> power)?

Yes, those are definitely benefits, they are important reasons for choosing such a system, and definitely something we will experiment with :).

> Just curious :-)
>
> If you build the LS expander we want pics/videos/audio demos of a band
> playing it live to put on the LS site :-)

Sure thing :)

/Robert
From: David O. <da...@ol...> - 2003-11-12 12:34:10

On Wednesday 12 November 2003 12.09, be...@ga... wrote:
[...]
> > > > Though we were wondering how RT capable the platform was. If
> > > > it's a mini-itx board you are using I guess that it works :)
> > >
> > > There's no particular reason to suspect problems with the
> > > architecture, and I've seen reports of VIA Eden working well
> > > with RTAI...
> >
> > Great.
>
> No need for RT Linux etc. I have a VIA C3 CPU (although in a
> different kind of mainboard, not soldered on the board) and the
> latency with a cmedia PCI (with the right lowlatency kernel) is as
> low as 3 msec. (It's where I currently test LS :-) )

Meanwhile, some mainboards come with f*cked up BIOSes and/or chipset design mistakes that cause latencies of tens or even hundreds of ms *even* with RTLinux or RTAI. (I have one in my prototype rheometer, though I can get around it by disabling BIOS text mode emulation and power management.)

My point is just that if a system runs fine with RTAI and/or RTLinux, one can conclude that there are no hardware issues that are likely to cause trouble with any other real-time solutions, including Linux + preempt and/or lowlatency. If it *doesn't* run well even with a "real" RTOS, you're screwed, regardless of OS.

[...]
> > The problem with the mini-box solution is that there is no
> > possibility of connecting any other soundcard, it won't fit in
> > the box ;)
>
> See my solution :-)

I would really rather have a 1U full width rack mount box, and they have one of those as well:

	http://linitx.com/products/1u_case/

But what about noise levels? Do you actually need those fans, considering that the Mini-boxes have no fans at all, except the CPU fan on 800+ MHz models? Why not replace the CPU fan with a heatpipe + passive radiator solution? (There should be room for one, I think...)

Actually, for a sound module, I'd really prefer a full width case: more space (most importantly, room for a PCI riser and a proper audio interface), and I don't have any other half width units to pair a Mini-box with anyway.

[...]
> Yes, the case is a bit
> more expensive than a traditional case but it's the price you pay
> for reduced size, fanless power supply unit; the C3 is a bit slow
> but I think it is more than enough for your purpose.

I don't mind the prices (those cases are actually quite inexpensive for this kind of stuff), but I do mind the noise levels of all rack mount cases I've found so far. (Both cases and pre-built systems.) Insane. If you put them in a server room (which is what they're built for), you'll have to sound-proof the server room to avoid annoying people nearby... ;-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---
From: <be...@ga...> - 2003-11-12 11:34:12

A question to Robert (and others): you said you want to build this LS drumkit expander. Nice, but the question is: what's the price of MIDI drumpads?

Does it pay off to buy MIDI drumpads (how much do they cost?) compared to traditional analogue drumkits? I'm not an expert here, so I might be talking nonsense; that's why I'm asking :-)

What are the advantages of using a MIDI drumkit? Price? Easy switching of sounds (e.g. switching from a rock drumkit to a jazz drumkit, to an electronic drumkit, etc.), possibility of recording the events? Possibility to use the sampler module to provide not only drumkit voices but an additional instrument like e.g. piano too (given enough CPU power)?

Just curious :-)

If you build the LS expander we want pics/videos/audio demos of a band playing it live to put on the LS site :-)

cheers,
Benno
http://www.linuxsampler.org
From: <be...@ga...> - 2003-11-12 11:09:52

Hi,

Yesterday I committed sustain pedal support to CVS (notes simply add up, so you can max out the polyphony if you keep hitting the same key with the sustain pedal pressed). If you max out the polyphony a crash is almost certain (due to some missing checks in the LS code, probably Christian's fault :-) ). Now on the console you see a voice and stream counter that is updated in real time.

Below, responding to Robert's C3 questions. Robert Jonsson <rob...@da...> writes:

> The use case is actually very specific in our case: to build a sound module
> to connect to the brain of a digital drumkit.
> MiniBox+LS+Drumkit_from_hell is the most likely combination at the moment.

That would be cool. For mini-itx hardware I suggest you use the cases from Cubid, in particular the Cubid 2699R:
http://linitx.com/shop/product_info.php/cPath/8/products_id/155
(83 EUR + VAT)

The fastest VIA C3 mainboard you can get is the 1 GHz Nehemiah. Use this and not the slower ones: even the 1 GHz is slow, so a 600-800 MHz box will probably be too slow to achieve decent polyphony.
http://linitx.com/shop/product_info.php/cPath/12/products_id/207
(137 EUR + VAT)

With these two things all you need is a DDR266+ DIMM RAM module (the mainboard has only one RAM slot, so choose the right size; I would say go with 512MB, about 100 EUR, to allow loading large samples since RAM is not that expensive these days) and a standard IDE HD. (I'd suggest a Maxtor :-) )

The cool thing about the Cubid case is that it has room for one PCI expansion card. This means you can use a different audio card if the internal VIA audio sucks too much (it does indeed :-) ). I'd suggest a cheap C-Media PCI card with dual stereo out and a built-in MPU MIDI port (so for your MIDI in you just need a cable, no USB MIDI needed).

Alternatively, use a 24-bit card like those from Terratec, about 100 EUR AFAIK. For example, according to the ALSA soundcards page (http://www.alsa-project.org/alsa-doc/) the Terratec Aureon 7.1 Space supports:

Aureon 7.1 Space - The ultimate quality in sound
* 8 (7.1) separate speaker outputs
* uncompromising 24 bit / 96 kHz playback and recording
* digital in/out, optical 24 bit / 96 kHz
* A3D, EAX 1.0 and EAX 2.0 support

http://productsen.terratec.net/modules.php?op=modload&name=News&file=article&sid=149&mode=thread&order=0&thold=0

It is based on the Envy24HT. Not sure if this card works well, but I have M-Audio Delta cards based on the Envy24 and the ALSA support is excellent (low latency works well).

> > > Though we were wondering how RT capable the platform was. If it's
> > > a mini-itx board you are using I guess that it works :)
> >
> > There's no particular reason to suspect problems with the
> > architecture, and I've seen reports of VIA Eden working well with
> > RTAI...
>
> Great.

No need for RT Linux etc. I have a VIA C3 CPU (although in a different kind of mainboard, not soldered on the board) and the latency with a C-Media PCI card (with the right lowlatency kernel) is as low as 3 msec. (It's where I currently test LS :-) )

> > > What about the built-in soundcard, know anything about it?
> >
> > It's an AC'97 chip with 18 bit full duplex I/O, but I can't seem to
> > find any detailed specs. Anyway, the actual quality depends a great
> > deal on the analog circuitry on the board,
>
> Yes, I think the sound quality would be good enough at the moment. I was
> more thinking along the lines of RT capabilities. I've heard discouraging
> stories of badly functioning ac97 cards/drivers with ALSA.

Put in a PCI soundcard as described above and forget about bad VIA audio problems (and solve the MIDI in problem too).

> The problem with the mini-box solution is that there is no possibility of
> connecting any other soundcard, it won't fit in the box ;)

See my solution :-)

> USB is a possibility, but I'd rather not go there... MIDI would have to be
> handled that way (I'm hoping the latency for that isn't too bad); it would
> not be good for latency or reliability to connect audio there also...

Better avoid USB if you want good note-on MIDI response. See above :-)

> > unless you use the digital
> > I/O and external converters.
>
> Yes, that's true, a way out if the analog outs suck. :)

BTW: I already ordered stuff from linitx.com: fast delivery, good prices, and a good selection of VIA embedded stuff (for example, they sell cheap CompactFlash-to-IDE adapters, so you can put in a CF card and boot off it as if it were an IDE disk). At the ISP I work for we ordered a few (Nehemiah C3 + Cubid 2699R cases) because we are building some special network appliances for internal hotel TVs.

I think your LS drumkit expander would be quite a nice case study to show the general audience that LS can actually be used in a professional and live environment (of course once it is stable and somewhat finished, not this developer version).

Robert, as you have seen, the above prices are not high compared to traditional hardware (mainboards, CPUs). Yes, the case is a bit more expensive than a traditional case, but that's the price you pay for reduced size and a fanless power supply unit; the C3 is a bit slow, but I think it is more than enough for your purpose.

cheers,
Benno
http://www.linuxsampler.org
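The "3 msec" figure quoted above follows directly from audio buffer (period) size and sample rate. As a rough sketch of the arithmetic (an illustration, not LinuxSampler or ALSA code):

```cpp
// One-way audio latency contributed by a single buffer (period), in
// milliseconds. For example, 128 frames at 44.1 kHz is roughly 2.9 ms,
// which is the ballpark of the low-latency figures discussed in the thread.
double periodLatencyMs(int frames, int sampleRate) {
    return 1000.0 * frames / sampleRate;
}
```

Note that total round-trip latency is typically several periods (driver ring buffer) plus converter and MIDI-input delays, so the single-period number is a lower bound.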
From: Robert J. <rob...@da...> - 2003-11-10 16:13:28

> I've been thinking about doing the same, although with Audiality and
> perhaps FluidSynth. What I'm after is basically a compact, modular
> and affordable solution that still gives me full control, including
> support for custom code.

The use case is actually very specific in our case: to build a sound module to connect to the brain of a digital drumkit. MiniBox+LS+Drumkit_from_hell is the most likely combination at the moment.

> Oh, and it would be cool if I can hook it up to the TV and use it as a
> gaming console as well. ;-)

:)

> > Though we were wondering how RT capable the platform was. If it's
> > a mini-itx board you are using I guess that it works :)
>
> There's no particular reason to suspect problems with the
> architecture, and I've seen reports of VIA Eden working well with
> RTAI...

Great.

> > What about the built-in soundcard, know anything about it?
>
> It's an AC'97 chip with 18 bit full duplex I/O, but I can't seem to
> find any detailed specs. Anyway, the actual quality depends a great
> deal on the analog circuitry on the board,

Yes, I think the sound quality would be good enough at the moment. I was more thinking along the lines of RT capabilities. I've heard discouraging stories of badly functioning ac97 cards/drivers with ALSA.

The problem with the mini-box solution is that there is no possibility of connecting any other soundcard, it won't fit in the box ;)

USB is a possibility, but I'd rather not go there... MIDI would have to be handled that way (I'm hoping the latency for that isn't too bad); it would not be good for latency or reliability to connect audio there also...

> unless you use the digital
> I/O and external converters.

Yes, that's true, a way out if the analog outs suck. :)

/Robert
From: David O. <da...@ol...> - 2003-11-10 14:54:31

On Monday 10 November 2003 09.48, Robert Jonsson wrote:
> Hey Benno,
>
> I noticed that you were using one of those. Is that a mini-itx
> board in any case?

Dunno what Benno is using, but I have yet to see C3s or C3 mainboards sold separately... That said, I've hardly even been keeping track of Intel and AMD CPUs lately.

> Me and my brother have been thinking about making a soundmodule
> with LinuxSampler and a mini-itx board.

I've been thinking about doing the same, although with Audiality and perhaps FluidSynth. What I'm after is basically a compact, modular and affordable solution that still gives me full control, including support for custom code.

Oh, and it would be cool if I can hook it up to the TV and use it as a gaming console as well. ;-)

> Though we were wondering how RT capable the platform was. If it's
> a mini-itx board you are using I guess that it works :)

There's no particular reason to suspect problems with the architecture, and I've seen reports of VIA Eden working well with RTAI...

> What about the built-in soundcard, know anything about it?

It's an AC'97 chip with 18 bit full duplex I/O, but I can't seem to find any detailed specs. Anyway, the actual quality depends a great deal on the analog circuitry on the board, unless you use the digital I/O and external converters.

> Thinking about using one of those: http://www.mini-box.com/m100.htm

That's *very* interesting stuff! Thanks for the link. :-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---
From: Robert J. <rob...@da...> - 2003-11-10 08:50:08

Hey Benno,

I noticed that you were using one of those. Is that a mini-itx board in any case?

Me and my brother have been thinking about making a soundmodule with LinuxSampler and a mini-itx board. Though we were wondering how RT capable the platform was. If it's a mini-itx board you are using I guess that it works :)

What about the built-in soundcard, know anything about it?

Thinking about using one of those: http://www.mini-box.com/m100.htm

/Robert
From: <be...@ga...> - 2003-11-10 00:13:36
|
Scrive Mark Knecht <mar...@co...>: > Benno, > You play quite well! ;-) yes with the help of pmidi and this file: http://www.pgmusic.com/p021.mid I'm ineed a good piano player :-) > > So, how high did did the voice count get during this recording? The max I've seen in this midi file is stereo 32 voices (in GSt terms 2*32=64 voices). But when we will implement envelopes the number of voices will go a bit higher due to notes needing longer to fade out. > My sense is that the engine is acting quite nicely. thanks :-) On my VIA C3 (1Ghz with crappy FPU) playing the piece takes about 15% of CPU with short peaks of 25%. I guess an athlon 2000 uses about 1/3 or 1/4 of the CPU (due to faster FPU and higher clock). >One problem you hear in > GSt at times is that it will jam up a note or two, either delaying them, > or sustaining them for longer periods of time. I don't hear that very > much if at all. (Although it's sort of hard to tell on some of the very > fastest parts, like around 39-52 seconds. > > There are a couple of points where the audio in the ogg stops. (like > at 55 seconds) What caused that, or do you know? That's because the short pause in the MID file and the missing release envelope thus the note stops abruptly. No error here. > > I'll try out the new version when you've committed it. > > Great work! Thanks it's the force of the team that makes the difference :-) Anyway LS still has some bugs eg use up all streams and it will crash, sometimes a voice's stream is not played thus the note gets cut off after the RAM part is played. But we will iron them out. Let's see what we agree on about the event system so that we can implement both the event system and the enveloping stuff in one rush. :-) Regarding the event system I agree with David that it makes sense to queue only events for the current fragment. 
This eases things and is used by the majority of plugin standards, so if LS gets a VSTi or AudioUnit interface in the future (never say never, who knows what kinds of beasts LS will run on :-) ) then it will be easy to adapt it to that kind of framework. cheers, Benno http://www.linuxsampler.org > > Cheers, > Mark > > On Sat, 2003-11-08 at 09:28, be...@ga... wrote: > > Hi, > > > > Sustain pedal support (voices add up) is implemented, but > > envelopes and pedal down samples are still missing. > > You hear clicks, the sound is a bit undynamic, but it's already showing > > what LS is capable of. :) > > > > The gig file is a 900MB (compressed, 1.5GB uncompressed) piano sample, > > 3 layers pedal up, 3 layers pedal down (not used yet). > > > > here is the audio file: (3MB) > > http://www.linuxsampler.org/examplesustain.ogg > > > > Will commit the sustain stuff to CVS tomorrow (Monday). > > > > cheers, > > Benno > > > > > > > > > > ------------------------------------------------- > > This mail sent through http://www.gardena.net > > > > > > ------------------------------------------------------- > > This SF.Net email sponsored by: ApacheCon 2003, > > 16-19 November in Las Vegas. Learn firsthand the latest > > developments in Apache, PHP, Perl, XML, Java, MySQL, > > WebDAV, and more! http://www.apachecon.com/ > > _______________________________________________ > > Linuxsampler-devel mailing list > > Lin...@li... > > https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel > > > > ------------------------------------------------------- > This SF.Net email sponsored by: ApacheCon 2003, > 16-19 November in Las Vegas. Learn firsthand the latest > developments in Apache, PHP, Perl, XML, Java, MySQL, > WebDAV, and more! http://www.apachecon.com/ > _______________________________________________ > Linuxsampler-devel mailing list > Lin...@li... 
> https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel > ------------------------------------------------- This mail sent through http://www.gardena.net |
|
From: David O. <da...@ol...> - 2003-11-09 17:52:16
|
On Sunday 09 November 2003 01.31, Christian Schoenebeck wrote:
[...]
> Ok, for things like EGs where nondeterministic factors are involved
> I share your opinion that it's better not to let them prequeue their
> events, but I'm thinking about the sequencer scenario where the
> sequencer might want to prequeue events somewhere past the scope of
> the current frame. I'm not sure if there's already something like
> that (sequencer / protocol). Don't you think that would be a pro for
> not-frame-relative events?
Well, in the case of Audiality, normal plugins just won't care about
input events past the end of the buffer cycle, so you *could*
prequeue without causing any trouble, that far.
(Actually, there is a minor issue with big gaps and wrapping
timestamps, but that's just because I use 16 bit wrapping timestamps.
No need for more, as long as the "one buffer at a time" rule is
obeyed.)
However, if you really want the prequeueing to be of any real use, you
have to allow some plugins to look ahead on the input as well - or
you're still limited by the same issue as always: You do not know
about the future (which is defined by future input events), and thus,
you cannot prequeue output events.
For example, if you let the sequencer prequeue, you've effectively
forwarded the timing part of sequencing to whatever receives the
events, instead of just processing one buffer cycle at a time. (Note
that this can potentially mean that the sequencer causes CPU load
peaks and potential event pool drain, when it occasionally prequeues
new events!) That *could* perhaps allow EGs to save some cycles by not
running every buffer cycle - but then the EGs have to *know* whether
or not they're allowed to work outside the current buffer cycle, or
they will not function properly with "live" input.
> > (In Audiality, I do have a sorting "insert" operation, though
> > it's not used, and probably never will be. XAP wasn't even meant
> > to have such a thing. Events are always required to be sent in
> > timestamp order.
>
> But what if you're using a UDP-based protocol? That might mix the
> events.
That's something the "gateway" to the outside world will have to deal
with. This kind of event system is based on the idea that events are
just a form of structured audio rate data, which differs quite a bit
from QNX style message passing, UDP, GUI toolkit event systems and
the like.
Basically, if you can maintain a fixed rate audio stream, you can also
transmit an Audiality/XAP style event stream. If you have drop-outs
and similar issues, both audio and event streams will need to be
"repaired" one way or another before going into a plugin net.
This may seem restrictive if you consider only plain events, but if
you consider ramp events, you'll realize that anything else would
make life quite a bit harder for event receivers. Overlapping ramp
sections, gaps etc., and you'd have to take special measures to avoid
clicks that are not necessary otherwise. An extra layer of zipper
noise protection that belongs in the soft/hard RT gateway, not
inside every plugin.
> > The addressing and routing system ensures that multiple event
> > streams to the same input get sort/merged properly, and without
> > cost for 1->1 connections.)
>
> You mean no need for sorting events that are dedicated for
> different purposes anyway, right?
Actually, it doesn't matter what the events are for, since they still
have to be in timestamp order if they go to the same physical input
queue. Decoding is normally driven by the event stream, like this:
	while(samples_left_to_process())
	{
		while(time_to_next_event() == 0)
		{
			process_event();
		}
		samples = time_to_next_event();
		if(samples > samples_left_to_process())
			samples = samples_left_to_process();
		process(samples);
	}
> Would it make sense to have
> individual queues for some special purposes (to avoid mixing things
> and thus reduce time complexity)?
Definitely, and that was designed into the XAP event system, by means
of "context IDs" of control ins and outs.
A plugin with multiple "inner" loops (like a mixer processing one
channel at a time) would have separate event queues (thus controls
with different context IDs), and the only ordering that matters is
that within a queue. No sort/merging needed if there's one sender
(i.e. one loop generating ordered events) for each queue, or if one
sender sends to multiple queues.
A host detects the need for a sort/merge object by checking for
multiple output contexts sending to a single input context. That is,
sort/merge is *only* done if two "inner loops" are sending events to
the same physical event queue.
A plugin developer can organize things any way he/she sees fit, and
then just slap context IDs onto controls according to their relations
to the internal loops of the plugin.
> > Now, those of you who actually read all the way here are probably
> > at least as obsessed with event systems as I am. We should all
> > consider getting a life! ;-)
>
> Just a matter of multi tasking... [switch]
Still, there are only so many cycles in a day. :-)
> > `-----------------------------------> http://audiality.org -'
> > --- http://olofson.net --- http://www.reologica.se ---
>
> I like the Requirements for Audiality ("Reasonably new C compiler",
> "An operating system", "Sound card with drivers"). :)
Keeps the volume of email from people trying to compile Kobo Deluxe
(which uses Audiality) really rather low. There was some gcc 3.x type
casting issue somewhere, and I discovered that you need a macro to
copy a va_list on PPC, but that's about it. :-)
//David Olofson - Programmer, Composer, Open Source Advocate
.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
|
|
From: Christian S. <chr...@ep...> - 2003-11-09 00:31:37
|
On Friday, 7 November 2003 at 22:26, David Olofson wrote: > Anyway, there's one important design decision I'm thinking about: > > To queue or not to queue... :-) > > > In Audiality, I'm never queueing events beyond the end of the current > audio buffer. The obvious disadvantage is that envelope generators > and the like cannot be "smart" and prequeue events and then go to > sleep - but OTOH, with prequeueing, input events may force prequeued > events to be taken back or modified. Not doing any prequeueing keeps > things simple, since event queues *are* actually simple queues, as > opposed to random access sequencers. Ok, for things like EGs where nondeterministic factors are involved I share your opinion that it's better not to let them prequeue their events, but I'm thinking about the sequencer scenario where the sequencer might want to prequeue events somewhere past the scope of the current frame. I'm not sure if there's already something like that (sequencer / protocol). Don't you think that would be a pro for not-frame-relative events? > (In Audiality, I do have a sorting "insert" operation, though it's not > used, and probably never will be. XAP wasn't even meant to have such > a thing. Events are always required to be sent in timestamp order. But what if you're using a UDP-based protocol? That might mix the events. > The addressing and routing system ensures that multiple event streams > to the same input get sort/merged properly, and without cost for 1->1 > connections.) You mean no need for sorting events that are dedicated for different purposes anyway, right? Would it make sense to have individual queues for some special purposes (to avoid mixing things and thus reduce time complexity)? > Now, those of you who actually read all the way here are probably at > least as obsessed with event systems as I am. We should all consider > getting a life! ;-) Just a matter of multi tasking... 
[switch] > `-----------------------------------> http://audiality.org -' > --- http://olofson.net --- http://www.reologica.se --- I like the Requirements for Audiality ("Reasonably new C compiler", "An operating system", "Sound card with drivers"). :) CU Christian |
|
From: Mark K. <mar...@co...> - 2003-11-08 19:44:20
|
Benno, You play quite well! ;-) So, how high did the voice count get during this recording? My sense is that the engine is acting quite nicely. One problem you hear in GSt at times is that it will jam up a note or two, either delaying them, or sustaining them for longer periods of time. I don't hear that very much if at all. (Although it's sort of hard to tell on some of the very fastest parts, like around 39-52 seconds.) There are a couple of points where the audio in the ogg stops. (like at 55 seconds) What caused that, or do you know? I'll try out the new version when you've committed it. Great work! Cheers, Mark On Sat, 2003-11-08 at 09:28, be...@ga... wrote: > Hi, > > Sustain pedal support (voices add up) is implemented, but > envelopes and pedal down samples are still missing. > You hear clicks, the sound is a bit undynamic, but it's already showing > what LS is capable of. :) > > The gig file is a 900MB (compressed, 1.5GB uncompressed) piano sample, > 3 layers pedal up, 3 layers pedal down (not used yet). > > here is the audio file: (3MB) > http://www.linuxsampler.org/examplesustain.ogg > > Will commit the sustain stuff to CVS tomorrow (Monday). > > cheers, > Benno > > > > > ------------------------------------------------- > This mail sent through http://www.gardena.net > > > ------------------------------------------------------- > This SF.Net email sponsored by: ApacheCon 2003, > 16-19 November in Las Vegas. Learn firsthand the latest > developments in Apache, PHP, Perl, XML, Java, MySQL, > WebDAV, and more! http://www.apachecon.com/ > _______________________________________________ > Linuxsampler-devel mailing list > Lin...@li... > https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel |
|
From: <be...@ga...> - 2003-11-08 17:28:15
|
Hi, Sustain pedal support (voices add up) is implemented, but envelopes and pedal down samples are still missing. You hear clicks, the sound is a bit undynamic, but it's already showing what LS is capable of. :) The gig file is a 900MB (compressed, 1.5GB uncompressed) piano sample, 3 layers pedal up, 3 layers pedal down (not used yet). here is the audio file: (3MB) http://www.linuxsampler.org/examplesustain.ogg Will commit the sustain stuff to CVS tomorrow (Monday). cheers, Benno ------------------------------------------------- This mail sent through http://www.gardena.net |
|
From: David O. <da...@ol...> - 2003-11-07 21:26:07
|
Speaking of timestamped event systems (in another thread); I have some code lying around; one working implementation inside Audiality, and one very similar prototype for XAP, based on the Audiality code. I never got around to making the latter available to the public, but OTOH, it doesn't really differ from the Audiality event system in any interesting way, if you just want to have a look at a working design. Anyway, there's one important design decision I'm thinking about: To queue or not to queue... :-) In Audiality, I'm never queueing events beyond the end of the current audio buffer. The obvious disadvantage is that envelope generators and the like cannot be "smart" and prequeue events and then go to sleep - but OTOH, with prequeueing, input events may force prequeued events to be taken back or modified. Not doing any prequeueing keeps things simple, since event queues *are* actually simple queues, as opposed to random access sequencers. (In Audiality, I do have a sorting "insert" operation, though it's not used, and probably never will be. XAP wasn't even meant to have such a thing. Events are always required to be sent in timestamp order. The addressing and routing system ensures that multiple event streams to the same input get sort/merged properly, and without cost for 1->1 connections.) Another advantage with avoiding prequeueing is that "now" is the time span of the current audio buffer, and that's all we worry about for the current buffer cycle. Event processors (such as envelope generators) only need to worry about one buffer period at a time, and never need to take back or modify their output. Unfortunately, there's no way of avoiding the fact that *something*, somewhere, has to schedule things in event processors that deal with timing. An envelope generator has to keep track of when to switch to the next section, even if it's just generating long linear ramps every now and then. 
One might think that an event system that supports prequeueing will help here, but the more I think about it, the more convinced I get that that's the wrong tool. It complicates things (the event system as well as most of the code that uses it) and just moves the problem to another place in the system. The current Audiality DAHDSR EG generates only one ramp event for each envelope stage. Each EG is a state machine that is called once per buffer cycle, like a plugin in your average VST/LADSPA style system. First, input events (such as "note-off") are evaluated, since they may affect the state of the EG. Then a downcounting timer keeps track of when the next state change is to occur. No state change means the EG just returns. So, the EG never really goes to sleep - but it cannot anyway, since it *has* to poll for input events once per buffer cycle. Although this probably isn't much of a performance issue at this point (there's just one EG per voice ATM! *hehe*), I'm considering some ways of optimizing it, mostly because I want more EGs, LFOs and other stuff, and I don't want all those objects to burn my DSP cycles even when not doing anything. Scalability, that is. (Audiality is meant to run on Pentium class hardware and up.) Considering the above, my conclusion has to be that the answer must be on the other side of the event processor code, so to speak. A scheduler that allows objects to go to sleep, blocking on event input ports and/or timers, making event processors a bit more like processes in an RTOS with message passing. My Audiality EGs, for example, would block, waiting for input events *or* timer events. That is, instead of making a guess about the future and sending an event that might have to be taken back, or polling all the time, the EG just says "if no events come in, wake me up in N frames". 
If no events come in, the block will time out, and the EG will be called again, switch to the next envelope stage, generating the corresponding event(s). Before it returns, it'll decide how long to sleep until the next state change, in case no events come in. (One might use something as simple as the return code from the event processor to pass 'N' to the scheduler. Just say 0 all the time, if you want to run every buffer cycle.) Indeed, this is yet another way of moving a problem somewhere else. However, if the audio thread has a single scheduler that keeps track of all objects, various optimizations become possible. If objects don't have to check input and timers for themselves while "sleeping", some function call overhead (and some event processor complexity) can be eliminated. If that doesn't cut it, arrange all sleeping objects in a priority queue, and only keep track of when to wake up the next object. Note that I was mostly thinking about Audiality's low end scaling when thinking this up, so it might not be viable for LinuxSampler. OTOH, it could probably make event processing slightly simpler, whether you care to implement an optimized scheduler or not. Either way, a good discussion might benefit both projects, even if we end up using different approaches. Now, those of you who actually read all the way here are probably at least as obsessed with event systems as I am. We should all consider getting a life! ;-) //David Olofson - Programmer, Composer, Open Source Advocate .- Audiality -----------------------------------------------. | Free/Open Source audio engine for games and multimedia. | | MIDI, modular synthesis, real time effects, scripting,... | `-----------------------------------> http://audiality.org -' --- http://olofson.net --- http://www.reologica.se --- |
|
From: Mark K. <mar...@co...> - 2003-11-07 18:59:25
|
Hey, you're alive! On Fri, 2003-11-07 at 10:45, Christian Schoenebeck wrote: <SNIP> > We have a CVS log running but it's not included in the LS webpage yet. > Perhaps we'll add that tomorrow, so you can instantly see the progress of LS > on the webpage. That would be nice, but don't do it just for me. > > We decided to implement a time stamp event system next; it seems Benno > already has a pretty accurate idea of what it should look like, but we have to > discuss some issues before starting to code it. The event system should be > compliant with the most important input standards (e.g. VSTi) and be prepared > for things that might come in the future. Maybe we'll discuss that on the list. > So articulation handling has to wait until we've finished the event system. OK. > > > As always, I'll be here to help run tests when you need me. > > Appreciate that, but for the next couple of days you're free! Use it before we > come and grab you! :) > > Enough infos? > > CU > Christian > > P.S. no fear - the project won't freeze again! Great! |
|
From: Christian S. <chr...@ep...> - 2003-11-07 18:45:39
|
On Friday, 7 November 2003 at 18:01, Mark Knecht wrote: > Hi, > It's too quiet here. > > Maybe the developers can fill us user types in on what they are > working on and when we non-developers will see something new? :) Ok, Benno is currently enabling multiple voices per MIDI key. I expect him to commit perhaps tomorrow evening. I replaced the plain Makefile with autotools files (that is already in CVS). We have a CVS log running but it's not included in the LS webpage yet. Perhaps we'll add that tomorrow, so you can instantly see the progress of LS on the webpage. We decided to implement a time stamp event system next; it seems Benno already has a pretty accurate idea of what it should look like, but we have to discuss some issues before starting to code it. The event system should be compliant with the most important input standards (e.g. VSTi) and be prepared for things that might come in the future. Maybe we'll discuss that on the list. So articulation handling has to wait until we've finished the event system. > As always, I'll be here to help run tests when you need me. Appreciate that, but for the next couple of days you're free! Use it before we come and grab you! :) Enough infos? CU Christian P.S. no fear - the project won't freeze again! |
|
From: Mark K. <mar...@co...> - 2003-11-07 17:01:26
|
Hi, It's too quiet here. Maybe the developers can fill us user types in on what they are working on and when we non-developers will see something new? As always, I'll be here to help run tests when you need me. Cheers, Mark |
|
From: <be...@ga...> - 2003-11-06 15:57:00
|
Hi Mark, thanks for the screenshots you sent to Christian. At this point I'd like Christian to analyze your results and write a nice document that explains how to shape GSt's velocity curves and envelopes (what kind of ADSR they use), and how to extract the envelope parameters (ADSR times, levels) and curve types from libgig. I have a nice sample accurate event system in mind that permits arbitrary modulation of pitch and volume with very low CPU overhead. Of course it will be a bit underutilized when rendering GSt style envelopes/modulation, but the event system is simple; the amount of work to implement it is comparable to writing a hardcoded enveloping system. The event system will be a solid foundation for future very flexible modulation schemes. So I propose that Christian writes his document first; based on this I'll get more insight into how the velocity/enveloping/modulation stuff works and will quickly be able to figure out how to fit the event system into our GIG playback engine. This document will be useful for other developers too, since the more people understand the inner workings of the modulation stuff, the better an engine we will be able to design. cheers, Benno http://www.linuxsampler.org Mark Knecht <mar...@co...> writes: > FYI to the list - I have sent Christian a 1 page pdf screen shot of a > range of curves the GSt produces for different MIDI velocity scaling > values. The file is 211K (178K zipped) and hence too large to send to > the list. If anyone else would specifically like a copy, drop me a note > and I'll send it on to you. > > Cheers, > Mark > > > ------------------------------------------------- This mail sent through http://www.gardena.net |
|
From: Mark K. <mar...@co...> - 2003-11-05 15:34:19
|
FYI to the list - I have sent Christian a 1 page pdf screen shot of a range of curves the GSt produces for different MIDI velocity scaling values. The file is 211K (178K zipped) and hence too large to send to the list. If anyone else would specifically like a copy, drop me a note and I'll send it on to you. Cheers, Mark |
|
From: Robert J. <rob...@da...> - 2003-11-04 12:15:43
|
Hi, > > yes GMPI will be very cool, finally a free VSTi? :-) > I heard Steinberg is not participating (boycotting it?), I guess > because VST is the de-facto standard and it is advantageous for them > to control the format, possibly giving them an edge over the competition. > Sad...., I always heard "opensource leads to fragmentation".... but > to me the Windows audio world seems much more fragmented. > VST, DirectX, RTAS, EASI, GSIF, Rewire.... > on linux things do look much better: jack and LADSPA. > I think an important API that is still missing is a sort of VSTi; > would it be better to extend LADSPA to support MIDI in too, or is it better > to wait for GMPI? Getting off topic but this is fun :) MusE actually has an internal plugin interface like that: LADSPA + ALSA-sequencer. It's a lightweight solution that works quite well. It has one major drawback though: it can't support non-realtime rendering, not to my knowledge at least. > I think if we had a VSTi-like API it would lead to > a big proliferation of soft-synth and sampler plugins. > But for now we have to write standalone apps that output the audio via jack > and perform MIDI input via the ALSA interface. I hardly know all about this so I'm probably missing a lot of intricate problems... I do however have another view of this... For the long term (say 2-3 years from now) I think the Jack solution with separate apps that talk to jackd could actually be even more powerful than a plugin API. What is needed however is a working session handling system that can launch the connection graph. LADCCA is a step in the right direction; I think it needs to do more though... > (which is nice but does not offer synchronous transfer, sample accuracy > etc) I'm not 100% sure what this means; are you saying it wouldn't be possible to solve this within the realms of jack? Assuming it's not possible now... 
What I see as the major reason for using plugins as opposed to jack apps is that they require less CPU overhead. (That they are in-proc is just a technicality in my opinion.) The drawback is that there is all kinds of painful stuff that needs to be taken into account: stability, gui-toolkits, memory handling perhaps, possibly others... Extrapolating into the future, CPU overhead (for ipc) is bound to become a lesser and lesser problem... when CPU is no longer a problem we invent new abstractions, and in my mind Jack is such an abstraction. An interesting question would be how much of the work that goes into GMPI (which I understand has major brain power behind it ;) would be applicable to a jack-like api? > > > On Monday 03 November 2003 18.57, Mark Knecht wrote: > > [...] > > > > > If the conversion is not part of LS, then what's the additional > > > latency incurred when playing a keyboard live? How long are MIDI > > > events delayed going through a completely separate app? Is it > > > deterministic, or will it vary from event to event? > > > > If you have an additional process that must be scheduled to process > > every event, there is a significant risk of increased worst case > > latency. I'm not sure how likely it is that an event is hit by two > > "slow" reschedules (one in the MIDI processor and one in LS), but > > it's most probably possible. > > The same applies to JACK, but I think with a good lowlatency kernel > the additional latency is measured in 50-100usec max. > This means writing an ALSA user space midi router practically does not > degrade the MIDI timing. Interesting. I got intrigued by this and started poking at some code last night; we'll see if I can make something useful out of it. 
I was thinking along the lines of a general purpose midi-filter that could do lots of stupid stuff: - velocity expand - velocity compand - delay - arpeggio - randomize - split - whatever /Robert - <who is putting up his blast shield> > I've seen keyboards that have builtin sequencers that run with a > 500Hz (2msec period) timer and the MIDI files sound very good. > So the ALSA user space router is absolutely not a problem. > > > RT-Dave, the control engineer, would assume this *will* happen > > occasionally, effectively doubling the worst case latency, until > > proven wrong. ;-) > > I'll do some latency graphs with jack + jack client when adding > jack support to LS so we can measure direct ALSA output vs > jack output. > I think with the right lowlatency kernel jack output at 3msec can > be done reliably, and that is what LS needs for tight note-on response. > > > There most certainly will be an increase in minimum and average > > latency, but as long as event processing is done "instantly" (by > > blocking on input and sending the resulting events instantly; no > > timers and stuff in the MIDI processor), that effect should be > > insignificant. (Microseconds...) > > Exactly, ALSA userspace MIDI routers are implemented using > poll() so they block until a MIDI event arrives. > This means they respond instantly (minus the context switch time). > > > [...] > > > > > On the other hand, since almost all of my MIDI goes through the > > > Alsa stack somehow, and I view connections with kaconnect, could > > > that be a place to put these velocity modifiers? > > > > Well, that was my first thought when I started following this thread - > > but unfortunately, ALSA runs only on Linux. (And QNX, though I have > > no idea what state that stuff is in nowadays.) > > Don't worry, ALSA's MIDI timing is excellent, no QNX needed. 
> > It would be nicer IMHO, if things like the "MIDI corrector" could use > some portable plugin API - but OTOH, it can't be all that hard to > port it to various APIs. No big deal. What's important is that it > runs at the right place in the chain, and that it doesn't add > significant latency. > > Of course a builtin MIDI corrector is better (e.g. the table lookup) but > the ALSA userspace router is ok too. > > Anyway it is just a waste of time discussing the midi corrector > stuff; we have much bigger problems: getting looping, enveloping, LFOs and > articulation working. > > David: I told Christian we should implement a simple sample accurate event > system in LS right from the start because it will avoid us many troubles > later. For example we can use the event system to do fast enveloping > (lists of linear segments); this means sample accurate ramping, and it is very fast > because you only need to increment the pitch value (pitch modulation) > and/or volume value (volume modulation) by a delta amount. > Exponential curves can be approximated by a succession of linear segments, > and you could even modulate the pitch/volume in an arbitrary way by sending > events with a frequency of e.g. samplerate/4, which would still be very fast. > > With the current ALSA MIDI IN we cannot yet exploit the sample accurate event > system fully, but if something like VSTi for Linux comes out LS will be > ready for sample accurate MIDI events. > Not to mention that we can lower the current realtime MIDI IN jitter when > LS is played live thanks to delaying the midi in events based on the > timestamp (we run the MIDI IN thread with higher priority than the audio > thread, thus MIDI in timing can have sub fragment-time precision). 
> > When some event code is available in LS I'd like you, David, to review > it a bit since you are the timestamped-events master :-) > > BTW: the switch() statement seems faster than function pointers since > it constructs a jump table and does not need to save the return > address on the stack. > I think switch() will be ideal in the audio rendering routine to select > various rendering functions, e.g. sample playback with linear interpolation, > cubic, with and without filter etc. > > cheers, > Benno > http://www.linuxsampler.org > > > ------------------------------------------------- > This mail sent through http://www.gardena.net > > > ------------------------------------------------------- > This SF.net email is sponsored by: SF.net Giveback Program. > Does SourceForge.net help you be more productive? Does it > help you create better code? SHARE THE LOVE, and help us help > YOU! Click Here: http://sourceforge.net/donate/ > _______________________________________________ > Linuxsampler-devel mailing list > Lin...@li... > https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel |