From: Marc H. <mha...@un...> - 2003-03-22 13:49:25
|
I would like it better if you had a base class that I could derive from. Did you define the internal APIs? |
|
From: Josh G. <jg...@us...> - 2003-03-22 00:49:40
|
On Fri, 2003-03-21 at 05:20, Marc Halbruegge wrote: > I'd like to implement some code for Kurzweil file import/export for > your project. > > visit > http://kurzfiler.sf.net and > http://kcdread.sf.net > > to see things I've already done for the machine. > > Greetings > Marc Halbrügge > Perhaps you would be interested in libInstPatch then? It's part of the Swami project (http://swami.sourceforge.net). You will find some API docs on it on the Swami developers page: http://swami.sourceforge.net/devel.php libInstPatch is an object-oriented instrument patch library with support for SoundFont2, and DLS2 load support was just added. The programming architecture is C using glib/GObject for the object-oriented stuff. Interest has been expressed in using it in LinuxSampler. Let me know if you are interested and I can help you add Kurzweil support. Do you know of any docs on the file format? Cheers. Josh Green |
|
From: Josh G. <jg...@us...> - 2003-03-22 00:13:22
|
On Fri, 2003-03-21 at 02:05, Paul Kellett wrote:
> A long time ago, Benno, me and Ruben van Royen (I think) did some
> investigation. I've sent the resulting files to Josh in case there is
> information there that isn't buried in the "Evo" source code somewhere.
>
I thank you for that material. Looks like there are some very useful
findings in there, and it will save me a lot of work. I'm curious about
how .gig does regions. From what I can tell it looks like note regions
can't overlap, therefore no layering?
> A chunk size of 3 is allowed in RIFF, but the chunk spacing has to be even
> so the 4th byte after the "nam0" is just padding.
>
Yeah, I know this now. I'm just so accustomed to seeing even byte chunk
sizes and thought that if a block was padded it would be reflected in
the chunk size as well.
>
> > rather than file extension. Although the DLS standard is designed to
> > allow custom chunks. I'm curious if these chunks are registered with the
> > MIDI Manufacturers Association (ones who designed DLS)?
>
> I always assumed they used .GIG instead of .DLS even though it is a DLS file
> so they didn't have to pay the MMA to register the chunks, or so they didn't
> have to provide details on what they contained.
>
>
Yeah, I guess they don't consider that the data of a file is a sort of
namespace and they are messing it up :) I'm trying to think of the best
way of handling the loading of .gig files. Since libInstPatch is going
to be supporting multiple formats, I'll have to do some trickery when it
comes to gig files. I guess the DLS parser will be going along doing its
thing and if it runs into a .gig specific chunk, switch to .gig mode or
something. Its really too bad there wasn't some sort of identifier at
the top of the DLS file. A GigaSampler DLS2 conditional DLSID would have
been sufficient.
>
> Paul.
>
Cheers.
Josh
|
|
From: Marc H. <mha...@un...> - 2003-03-21 13:20:20
|
I'd like to implement some code for Kurzweil file import/export for your project. Visit http://kurzfiler.sf.net and http://kcdread.sf.net to see things I've already done for the machine. Greetings Marc Halbrügge |
|
From: Paul K. <pau...@md...> - 2003-03-21 10:25:01
|
Josh Green <jg...@us...> wrote:
>
> custom chunks that aren't part of the DLS standard. I think it will take
> a little bit of reverse engineering to figure out what they contain.
A long time ago, Benno, me and Ruben van Royen (I think) did some
investigation. I've sent the resulting files to Josh in case there is
information there that isn't buried in the "Evo" source code somewhere.
> The weirdest thing I saw was a "nam0" chunk in one .gig file that had a
> size of 3. This apparently breaks the RIFF spec I believe, because chunk
> sizes are supposed to be even. Also it appears to be actually 4 bytes in
> length, since the next chunk starts 4 bytes later (maybe its just a
> broken .gig file?).
A chunk size of 3 is allowed in RIFF, but the chunk spacing has to be even
so the 4th byte after the "nam0" is just padding.
> rather than file extension. Although the DLS standard is designed to
> allow custom chunks. I'm curious if these chunks are registered with the
> MIDI Manufacturers Association (ones who designed DLS)?
I always assumed they used .GIG instead of .DLS even though it is a DLS file
so they didn't have to pay the MMA to register the chunks, or so they didn't
have to provide details on what they contained.
Paul.
_____________________________
m a x i m | digital audio
http://mda-vst.com
_____________________________
|
|
From: Mark K. <mk...@co...> - 2003-03-20 15:44:46
|
> > > Some screen shots might be helpful :) If you could put them somewhere > were I could passively download them, probably not appropriate to post > to the list (I'm on dialup anyways, so I don't like getting big emails). > The most important stuff is where effects and parameters are set. > > > I think there are two things in GSt to keep in mind. They sort of > > interact, but we'll have to decide what to focus on: > > > > 1) The gig files themselves - what's in them, etc. This is the basic > > sound. In GSt these have some ADSRs, LFO (I think) and most important, > > layering of samples across a number of parameters like MIDI velocity.. > > > > Okay. Those are the parameters I am talking about. What I am looking for > are specifics on them. How many ADSRs, LFOs. What effects can be > modulated by these ADSRs/LFOs (pitch, filter, volume, etc). I need to > get an idea of what the constraints are to be able to determine whats in > the files. It may end up that I have to have access to the program to > try changing parameters and seeing what effect they have on the > resulting file. > > > 2) Performance files. These set up how a user wants to use the gig files > > in GSt. This sets up the mixer, the effects, and other things. I don't > > actually do to much with performance files other than make sure I get > > the right instruments loaded on the right MIDI channel, but they do much > > more. > > > > Performance files, huh.. That sounds like more to do with how the > instruments are routed and effects applied on top. Is a peformance file > something you would setup and use on your own system for multiple gig > files and not tend to distribute to others? > Josh, I found a Windows Compiled Help file online at Tascam support for GigaStudio. Understanding that you may or may not be able to view that (based on whether you have Windows available) I've printed it's main sections as PDF files. However, I really recommend using the help file itself as it's all cross-linked and you'll learn much faster. You can find all of this online at http://www.controlnet.com/~makeMusic/Linux/GigaStudio Good luck, Mark |
|
From: Josh G. <jg...@us...> - 2003-03-20 01:37:36
|
On Wed, 2003-03-19 at 16:31, Mark Knecht wrote: > Josh, > Exciting news here, or at least for me. > > PLEASE - use me and my Gig library as a resource. If I can be of help > somehow, please let me know and I'll try. > I may write a test program to mine the .gig specific stuff so it can be analyzed to try and figure out whats inside those custom chunks. I'll be sure to give you a copy to run on a directory of .gig files if it comes to that. > On Wed, 2003-03-19 at 15:45, Josh Green wrote: <cut> > > would probably help if I'd used GigaStudio before. Can anyone comment on > > what the synthesis model is? Is it fixed like DLS2/SF2 is (2 LFOs, 2 > > Envelopes, low pass filter, etc) or is it modular? > > I'm not sure I understand this question exactly, but I think it's a > fixed model, or at least relatively fixed. I could post some screen > shots or something like that if it would be helpful. > Some screen shots might be helpful :) If you could put them somewhere were I could passively download them, probably not appropriate to post to the list (I'm on dialup anyways, so I don't like getting big emails). The most important stuff is where effects and parameters are set. > I think there are two things in GSt to keep in mind. They sort of > interact, but we'll have to decide what to focus on: > > 1) The gig files themselves - what's in them, etc. This is the basic > sound. In GSt these have some ADSRs, LFO (I think) and most important, > layering of samples across a number of parameters like MIDI velocity.. > Okay. Those are the parameters I am talking about. What I am looking for are specifics on them. How many ADSRs, LFOs. What effects can be modulated by these ADSRs/LFOs (pitch, filter, volume, etc). I need to get an idea of what the constraints are to be able to determine whats in the files. It may end up that I have to have access to the program to try changing parameters and seeing what effect they have on the resulting file. > 2) Performance files. These set up how a user wants to use the gig files > in GSt. This sets up the mixer, the effects, and other things. I don't > actually do to much with performance files other than make sure I get > the right instruments loaded on the right MIDI channel, but they do much > more. > Performance files, huh.. That sounds like more to do with how the instruments are routed and effects applied on top. Is a peformance file something you would setup and use on your own system for multiple gig files and not tend to distribute to others? > > > > > Since I haven't seen any DLS articulation (effect parameter) data in > > .gig files I'm assuming most of these chunks (especially in regions) is > > custom .gig effect parameters. The content of many of the chunks seem to > > be the same across different files, so perhaps some of them can be > > ignored. > > I don't understand 'DLS'. Sorry. However, I 'think' much of what you are > thinking about might be in the Performance file and not in the Gig file. I don't think so, but it would be good to make sure. What kind of effects are defined in the performance file? Is it mainly for adding effects on top of audio channels? > > Please understand that I know absolutely nothing about the internals, > but I hope to be helpful, without being a hinderance I hope. You've already been a help :) Any info I can get on how GigaSampler works from the user interface will help me determine the format internals. Regards. Josh |
|
From: Mark K. <mar...@at...> - 2003-03-20 00:31:47
|
Josh, Exciting news here, or at least for me. PLEASE - use me and my Gig library as a resource. If I can be of help somehow, please let me know and I'll try. On Wed, 2003-03-19 at 15:45, Josh Green wrote: > The DLS parser in libInstPatch is now functioning to the point of > loading a DLS file (I'm sure there are some bugs still). I threw a > couple of .gig files at it and they seem to load fine too. To help debug > things I wrote a test program to output the RIFF chunks and I found many > custom chunks that aren't part of the DLS standard. I think it will take > a little bit of reverse engineering to figure out what they contain. It > would probably help if I'd used GigaStudio before. Can anyone comment on > what the synthesis model is? Is it fixed like DLS2/SF2 is (2 LFOs, 2 > Envelopes, low pass filter, etc) or is it modular? I'm not sure I understand this question exactly, but I think it's a fixed model, or at least relatively fixed. I could post some screen shots or something like that if it would be helpful. I think there are two things in GSt to keep in mind. They sort of interact, but we'll have to decide what to focus on: 1) The gig files themselves - what's in them, etc. This is the basic sound. In GSt these have some ADSRs, LFO (I think) and most important, layering of samples across a number of parameters like MIDI velocity.. 2) Performance files. These set up how a user wants to use the gig files in GSt. This sets up the mixer, the effects, and other things. I don't actually do to much with performance files other than make sure I get the right instruments loaded on the right MIDI channel, but they do much more. > > Since I haven't seen any DLS articulation (effect parameter) data in > .gig files I'm assuming most of these chunks (especially in regions) is > custom .gig effect parameters. The content of many of the chunks seem to > be the same across different files, so perhaps some of them can be > ignored. I don't understand 'DLS'. Sorry. However, I 'think' much of what you are thinking about might be in the Performance file and not in the Gig file. > > The weirdest thing I saw was a "nam0" chunk in one .gig file that had a > size of 3. This apparently breaks the RIFF spec I believe, because chunk > sizes are supposed to be even. Also it appears to be actually 4 bytes in > length, since the next chunk starts 4 bytes later (maybe its just a > broken .gig file?). > > I think its also quite lame that there doesn't seem to be a way to > identify a .gig file from a DLS file besides its file extension (and > running across those custom chunks). I think this is kind of stupid and > makes things difficult when determining the file type from its data > rather than file extension. Although the DLS standard is designed to > allow custom chunks. I'm curious if these chunks are registered with the > MIDI Manufacturers Association (ones who designed DLS)? Please understand that I know absolutely nothing about the internals, but I hope to be helpful, without being a hinderance I hope. > > Cheers. > Josh > > P.S. here are some details on the custom chunks that I found. > Indentation indicates embedded chunks, the toplevel chunk listed for > each of these blocks is the last standard DLS chunk in the ancestry. 
> > 'DLS '<LIST> (Standard DLS RIFF chunk) > '3gri'<LIST> (size = 88) > '3gnl'<LIST> (size = 76) > '3gnm' (size = 64) (values seen: "Default Sample Group") > 'einf' (size = 98) > > 'rgn '<LIST> (Standard DLS region, effect parameters, key splits, etc) > '3lnk' (size = 172) > '3prg'<LIST> (varies) > '3ewl'<LIST> (size = 196) > 'wsmp' (size = 36) (seems identical to Standard DLS wsmp chunk) > '3ewa' (size = 140) > '3dnl'<LIST> (varies) > 'nam0' (size = 3!!!) > '3ddp' (size = 10) > > 'lart'<LIST> (DLS articulation data [effect parameters] list) > '3ewg' (size = 12) > > 'wave'<LIST> (DLS sample chunk) > '3gix' (size = 4) > > > > > ------------------------------------------------------- > This SF.net email is sponsored by: Does your code think in ink? > You could win a Tablet PC. Get a free Tablet PC hat just for playing. > What are you waiting for? > http://ads.sourceforge.net/cgi-bin/redirect.pl?micr5043en > _______________________________________________ > Linuxsampler-devel mailing list > Lin...@li... > https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel |
|
From: Josh G. <jg...@us...> - 2003-03-19 23:46:26
|
The DLS parser in libInstPatch is now functioning to the point of
loading a DLS file (I'm sure there are some bugs still). I threw a
couple of .gig files at it and they seem to load fine too. To help debug
things I wrote a test program to output the RIFF chunks and I found many
custom chunks that aren't part of the DLS standard. I think it will take
a little bit of reverse engineering to figure out what they contain. It
would probably help if I'd used GigaStudio before. Can anyone comment on
what the synthesis model is? Is it fixed like DLS2/SF2 is (2 LFOs, 2
Envelopes, low pass filter, etc) or is it modular?
Since I haven't seen any DLS articulation (effect parameter) data in
.gig files, I'm assuming most of these chunks (especially in regions) are
custom .gig effect parameters. The contents of many of the chunks seem to
be the same across different files, so perhaps some of them can be
ignored.
The weirdest thing I saw was a "nam0" chunk in one .gig file that had a
size of 3. This apparently breaks the RIFF spec I believe, because chunk
sizes are supposed to be even. Also it appears to be actually 4 bytes in
length, since the next chunk starts 4 bytes later (maybe it's just a
broken .gig file?).
I think it's also quite lame that there doesn't seem to be a way to
identify a .gig file from a DLS file besides its file extension (and
running across those custom chunks). I think this is kind of stupid and
makes things difficult when determining the file type from its data
rather than file extension. Although the DLS standard is designed to
allow custom chunks. I'm curious if these chunks are registered with the
MIDI Manufacturers Association (ones who designed DLS)?
Cheers.
Josh
P.S. here are some details on the custom chunks that I found.
Indentation indicates embedded chunks, the toplevel chunk listed for
each of these blocks is the last standard DLS chunk in the ancestry.
'DLS '<LIST> (Standard DLS RIFF chunk)
'3gri'<LIST> (size = 88)
'3gnl'<LIST> (size = 76)
'3gnm' (size = 64) (values seen: "Default Sample Group")
'einf' (size = 98)
'rgn '<LIST> (Standard DLS region, effect parameters, key splits, etc)
'3lnk' (size = 172)
'3prg'<LIST> (varies)
'3ewl'<LIST> (size = 196)
'wsmp' (size = 36) (seems identical to Standard DLS wsmp chunk)
'3ewa' (size = 140)
'3dnl'<LIST> (varies)
'nam0' (size = 3!!!)
'3ddp' (size = 10)
'lart'<LIST> (DLS articulation data [effect parameters] list)
'3ewg' (size = 12)
'wave'<LIST> (DLS sample chunk)
'3gix' (size = 4)
|
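To make the RIFF padding rule discussed above concrete (an odd chunk size such as the 3-byte "nam0" still advances the file position by 4 bytes), here is a minimal, self-contained C sketch of a top-level chunk walker. It is illustrative only, not libInstPatch code; it skips the contents of LIST chunks rather than descending into them, and all names in it are invented.

```c
#include <stdio.h>
#include <stdint.h>

/* Read a 32-bit little-endian value. Returns 0 on EOF/short read. */
static int read_u32_le(FILE *f, uint32_t *out)
{
    unsigned char b[4];
    if (fread(b, 1, 4, f) != 4)
        return 0;
    *out = (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
           ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    return 1;
}

/* Walk the top-level chunks of a RIFF file (e.g. a DLS/.gig file) and print
 * their IDs and sizes.  The detail from the thread: a chunk's declared size
 * may be odd (like the "nam0" chunk of size 3), but chunks start on even
 * byte offsets, so an odd-sized chunk is followed by one pad byte that is
 * not counted in the size field.  LIST chunks are skipped whole here, not
 * descended into. */
static void dump_riff_chunks(const char *path)
{
    FILE *f = fopen(path, "rb");
    char id[5] = { 0 };
    uint32_t size;

    if (!f) { perror(path); return; }

    /* Skip the 12-byte header: "RIFF" <file size> <form type, e.g. "DLS "> */
    if (fseek(f, 12, SEEK_SET) != 0) { fclose(f); return; }

    while (fread(id, 1, 4, f) == 4 && read_u32_le(f, &size)) {
        printf("chunk '%s' size %u%s\n", id, (unsigned)size,
               (size & 1) ? " (odd size: 1 pad byte follows)" : "");

        /* Advance past the chunk body plus the pad byte if size is odd. */
        if (fseek(f, (long)(size + (size & 1)), SEEK_CUR) != 0)
            break;
    }
    fclose(f);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        dump_riff_chunks(argv[1]);
    return 0;
}
```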
|
From: Josh G. <jg...@us...> - 2003-03-19 00:38:09
|
On Tue, 2003-03-18 at 15:16, Peter Hanappe wrote:
>
> Streamed sample playback is on my todo list for quite a while now.
> However, the first goal was to develop a synthesizer that is MIDI
> and SoundFont compatible. Version 1.0 pretty much satisfies that goal.
> As soon as we split of the development branch (version 1.1.x) we can
> start adding sample streaming.
>
Thanks for clarifying that.
> Maybe the better thing to do is to improve the current
> sfloader and sample API and add the intelligence of the sample
> streaming (i.e. caching, preloading) in libInstPatch rather than in
> FluidSynth.
>
It would seem like something that could be implemented with the sfloader
sample API with some callback functions. As far as Swami, I think the
streaming functionality would be part of the libswami FluidSynth plugin
rather than libInstPatch, whose scope is currently patch file specific.
Sample streaming functionality is not likely to be something very
re-usable at any rate.
Preloading is probably possible now using the preset notify function
that you just added. So when a preset first gets selected (bank:program
change) the application (in this case Swami) could preload a chunk of
all samples of that preset. The only thing I think missing is a way to
stream the actual sample data from the app to FluidSynth. I guess there
are probably a couple ways to go about this though, both using callback
methods to fetch the audio?
If it was a continuous audio stream then it would be up to the app to do
its own looping and would have to continuously feed audio through a
callback (even for small looped samples), although perhaps the callback
could return with a code to stop audio for, say, the end of a single-shot
sample. Prototype (naming convention probably wrong):
/**
* @sample: Sample structure allocated by app
* @size: Number of samples requested
* @buffer: Buffer to store the samples
*
* Returns: Number of samples transferred to buffer. Any size less than
* @size will cause the stream to stop.
*/
typedef int (*FluidSFLoaderStreamContinous)(iiwu_sample_t *sample,
guint size, fluid_float_t *buffer);
The other method would be that FluidSynth would manage the sample and
looping parameters and would pass a sample position along with the
number of samples to the callback function when fetching audio.
typedef void (*FluidSFLoaderStreamSample)(iiwu_sample_t *sample,
guint offset, guint size, fluid_float_t *buffer);
Does that make any sense? Perhaps both methods should be implemented? As
far as sample caching goes: is it necessary to implement one's own caching
mechanism (rather than rely on the OS)? I'm sure the answer to this is
probably yes.
>
> > From what I currently know of FluidSynth goals, it currently falls short
> > of the linuxsampler goal of multi patch format support. FluidSynth is
> > likely to remain SoundFont based, but its sfloader API that I use to
> > interface Swami's instruments to it is generic enough to allow any
> > format to be synthesized (within the constraints of SoundFont
> > synthesis).
>
> I'm of the opinion that it's better that FluidSynth implements the
> SoundFont synthesis model as efficiently as possible instead of trying
> to become a swiss army knife for wavetable synthesis. I think the users
> benefit more from having the choice between several small but optimized
> synthesizers rather than one big, buggy synthesizer. As you said,
> additional patch types can be supported through libInstPatch, if they
> map reasonably well to the SoundFont synthesis model.
>
> Peter
>
Sounds good to me :) Cheers.
Josh
|
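As a sketch of the second (position-based) callback idea in the message above, here is what the application side might look like in plain C. The struct and function names are invented for illustration and are not FluidSynth API; a real implementation would preload and cache sample data rather than seek the disk from the audio thread, which is exactly the caching question raised above.

```c
#include <stdio.h>
#include <stdint.h>

/* Stand-in for the sample object the synth would pass to the callback
 * (iiwu_sample_t in the hypothetical prototypes above); just enough state
 * to locate raw 16-bit mono sample data on disk. */
typedef struct {
    FILE *file;          /* open file containing the sample data */
    long  data_offset;   /* byte offset of the first sample frame */
    unsigned int length; /* total length in sample frames */
} stream_sample_t;

/* Position-based fetch, shaped like the second prototype above: the synth
 * tracks looping and playback position, the app only supplies data.  Fills
 * 'buffer' with up to 'size' frames starting at frame 'offset', converted
 * to float.  Returns the number of frames actually delivered. */
static unsigned int stream_fetch(stream_sample_t *sample, unsigned int offset,
                                 unsigned int size, float *buffer)
{
    int16_t tmp[256];
    unsigned int done = 0;

    if (offset >= sample->length)
        return 0;
    if (offset + size > sample->length)
        size = sample->length - offset;

    if (fseek(sample->file, sample->data_offset + (long)offset * 2, SEEK_SET))
        return 0;

    while (done < size) {
        unsigned int want = size - done;
        size_t got;
        if (want > 256)
            want = 256;
        got = fread(tmp, sizeof(int16_t), want, sample->file);
        if (got == 0)
            break;
        for (size_t i = 0; i < got; i++)
            buffer[done + i] = tmp[i] / 32768.0f;  /* 16-bit PCM -> float */
        done += (unsigned int)got;
    }
    return done;
}
```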
|
From: Peter H. <pe...@ha...> - 2003-03-18 23:11:58
|
Josh Green wrote: > On Tue, 2003-03-18 at 01:12, Steve Harris wrote: > >>Givent that one of the main goals of linuxsampler was to have disk >>streamed sample playback, how much effort is required to add this, and >>would it fit with the currect architecture - I guess it would go into fluid? >> >>- Steve >> > > > I have suggested streamed sample playback on the FluidSynth list a few > times. It sounds like it is on the todo list for next major release (who > knows when that will be). <chop> > The current view of some of > the FluidSynth developers is that it isn't really necessary. Streamed sample playback is on my todo list for quite a while now. However, the first goal was to develop a synthesizer that is MIDI and SoundFont compatible. Version 1.0 pretty much satisfies that goal. As soon as we split of the development branch (version 1.1.x) we can start adding sample streaming. Maybe the better thing to do is to improve the current sfloader and sample API and add the intelligence of the sample streaming (i.e. caching, preloading) in libInstPatch rather than in FluidSynth. >>From what I currently know of FluidSynth goals, it currently falls short > of the linuxsampler goal of multi patch format support. FluidSynth is > likely to remain SoundFont based, but its sfloader API that I use to > interface Swami's instruments to it is generic enough to allow any > format to be synthesized (within the constraints of SoundFont > synthesis). I'm of the opinion that it's better that FluidSynth implements the SoundFont synthesis model as efficiently as possible instead of trying to become a swiss army knife for wavetable synthesis. I think the users benefit more from having the choice between several small but optimized synthesizers rather than one big, buggy synthesizer. As you said, additional patch types can be supported through libInstPatch, if they map reasonably well to the SoundFont synthesis model. Peter > Josh > |
|
From: Mark K. <mar...@at...> - 2003-03-18 13:00:18
|
On Tue, 2003-03-18 at 01:08, Steve Harris wrote: > > But, yeah. All the developers on htis list should try swami before > discussing development any further. > > - Steve I agree. Josh has done a lot, and even if they chose not to do this, there is a lot to learn from his work. |
|
From: Josh G. <jg...@us...> - 2003-03-18 11:59:24
|
Just updated libInstPatch API docs. They can be found from the development page on the Swami web site; here's a direct link: http://swami.sourceforge.net/api/libinstpatch/ Also put directions for checking out the swami-1-0 CVS branch on the download page. Please note that all development is occurring on the new CVS branch and that the current swami-0.9.1 release is hardly related. Cheers. Josh Green |
|
From: Josh G. <jg...@us...> - 2003-03-18 11:25:54
|
On Tue, 2003-03-18 at 01:12, Steve Harris wrote: > On Tue, Mar 18, 2003 at 12:49:30AM -0800, Josh Green wrote: > > I now feel it appropriate to divulge in a more in depth description of > > the architecture here, forgive me if it seems lengthy or overwhelming. > > Although I am quite proud of this project, please try not to take my > > enthusiasm as overly bragging :) > > Thanks. That was helpful. > > Givent that one of the main goals of linuxsampler was to have disk > streamed sample playback, how much effort is required to add this, and > would it fit with the currect architecture - I guess it would go into fluid? > > - Steve > I have suggested streamed sample playback on the FluidSynth list a few times. It sounds like it is on the todo list for next major release (who knows when that will be). The main reason I wanted it was for being able to use the same audio device, without conflict, for doing things like previewing samples off disk and the like. The current view of some of the FluidSynth developers is that it isn't really necessary. I almost tend to agree, since memory is quite cheap these days and when someone talks about several gig patch files, I think they are usually referring to a collection of many instruments rather than a single instrument (how long does one want those samples to be, holding a key down for more than a minute would seem absurd ~10mb CD quality audio). Okay, I'm sure there are those grand piano patches with every single key sampled.. At any rate, I'm sure it probably wouldn't be that hard to add this type of support. I'm no expert on FluidSynth internals though. It would fit in fine with the Swami architecture though, as the wavetable object is fairly abstract in how a wavetable device handles itself. From what I currently know of FluidSynth goals, it currently falls short of the linuxsampler goal of multi patch format support. FluidSynth is likely to remain SoundFont based, but its sfloader API that I use to interface Swami's instruments to it is generic enough to allow any format to be synthesized (within the constraints of SoundFont synthesis). Of note is that DLS2 would seem to fit fairly well into the SoundFont model (seems to be a rip off in some respects), although I do think that DLS2 is much more extendable than SoundFont 2. Things like self compiling code and that kind of stuff (that was being discussed previously), may also be out of the scope of what is planned (CCing FluidSynth list so they can correct me if I'm wrong). I do feel that FluidSynth is quite an excellent synthesizer though, and is fairly complete in its support of SoundFont 2. The modulator support is something of great beauty :) The FluidSynth developers have done a nice job of making it more optimized as well, although I'm sure there is always room for more. Cheers. Josh |
|
From: Steve H. <S.W...@ec...> - 2003-03-18 09:14:01
|
On Tue, Mar 18, 2003 at 12:49:30AM -0800, Josh Green wrote: > I now feel it appropriate to divulge in a more in depth description of > the architecture here, forgive me if it seems lengthy or overwhelming. > Although I am quite proud of this project, please try not to take my > enthusiasm as overly bragging :) Thanks. That was helpful. Given that one of the main goals of linuxsampler was to have disk-streamed sample playback, how much effort is required to add this, and would it fit with the current architecture - I guess it would go into fluid? - Steve |
|
From: Steve H. <S.W...@ec...> - 2003-03-18 09:09:30
|
On Mon, Mar 17, 2003 at 04:49:05PM -0800, Mark Knecht wrote: > On Mon, 2003-03-17 at 09:02, Steve Harris wrote: > > > > I've had a minor epiphany(sp?) having breifly used some subset of the > > swami/fluidsynth combo I'd say we are stupid to go it out own way, we > > should integrate with the existing stuff as far as possible. > > > > Using the swami UI made me realise just how much work has to go into it > > and it appears (from a cursory glance) to be basicly generic, rather than > > soundfont specific as I thought. > > > > - Steve > > Steve, > Hi. You've been a bit more quiet. Good to see you. Yes, I'm worried that the linuxsampler is one more thing than I can handle. I have a bunch of minor projects on the go that all require coding, and that I find more inspirational than a sampler. I'm happy to supply DSP code and so on of course, but I don't think I can contribute to the bulk of the coding. Especially if it's in C++. But, yeah. All the developers on this list should try swami before discussing development any further. - Steve |
|
From: Josh G. <jg...@us...> - 2003-03-18 08:50:01
|
On Mon, 2003-03-17 at 09:02, Steve Harris wrote: > > I've had a minor epiphany(sp?) having breifly used some subset of the > swami/fluidsynth combo I'd say we are stupid to go it out own way, we > should integrate with the existing stuff as far as possible. > > Using the swami UI made me realise just how much work has to go into it > and it appears (from a cursory glance) to be basicly generic, rather than > soundfont specific as I thought. > > - Steve > My thoughts as well :) Of course I'm a little biased in the whole thing, as Swami has been a hobby of mine for almost 4 years now (if including the Smurf SoundFont Editor). In that time I've re-written the API probably somewhere in between 2.5 to 3 times. I am starting to feel like the underlying architecture is quite good now and am looking forward to implementing more functionality on top of it. Keep in mind that I am speaking of the swami-1-0 CVS branch which has not yet been released (as in packaged, hasn't quite come together yet). I really think Swami could use some more developers, so if anyone is interested, please contact me or join the Swami mailing list. I now feel it appropriate to divulge in a more in depth description of the architecture here, forgive me if it seems lengthy or overwhelming. Although I am quite proud of this project, please try not to take my enthusiasm as overly bragging :) Swami is based on 2 support libraries libInstPatch and libswami with the GUI currently being built on top of these (although a good portion of it may turn into a library as well). These libraries use glib/GObject extensively (for those who aren't familiar with GObject its an object oriented library for C, it provides an extendable type system, inheritable classed objects with signals and properties). This means that the underlying libraries and the functionality they bring do not rely on GUI related libs (X, GTK, etc). The GUI uses GTK2 and is also very object oriented (well, its getting there). I chose this architecture because I wanted the underlying libraries to use C which I feel is the least common multiple when it comes to programming languages, and therefore would increase the scope of use, but also wanted the benefit of an object oriented system. There are also C++ bindings for GTK2 (which I'm sure includes GObject, although I have not researched this). libInstPatch - Instrument patch library --------------------------------------- The library that interfaces to patch files. SoundFont files are currently supported and DLS2/gig are just now being added. Patch files are loaded/saved from a child/parent tree structure of objects which can be easily manipulated using the GObject properties mechanism. The current focus is the object system, there is a little work that needs to be done to make the file parsers usable for those who don't want to use the runtime objects (say if just scanning a patch file for some specific info). This library also transparently manages sample data sources (file, RAM, ROM, swap file, etc) and different bit widths (some work will need to be done for bit width conversion). 
libswami - Everything else not directly part of the GUI or patch files --------------------------------------- - Maintains a master patch tree utilizing signals for change notification - GObject type based plugin system (GStreamer inspired) - XML save/restore state interface (also GStreamer inspired) - Wavetable and MIDI objects - event queues - Undo/Redo state history (tree based so no loss of actions will occur after an undo/do something else), also utilizes action dependency so arbitrary items can be undone (provided they don't depend on later actions). The GUI uses a master event queue (the same events that are logged to the state history) to track patch object changes and will update itself on regular intervals (necessary since GTK is not multi-threaded). This also means that changing an object property will cause any other GUI views to also be updated. Whats ahead (lots of things to do) --------------------------------------- Python support - pygtk has some nice scripts to generate python interfaces for GObject code. Using this, much of the API is generated, only some functions need to be manually written. With python I can see a powerful web based instrument database being created. Also a GUI embedded shell for doing complex editing tasks or creating scripts. Flexible object oriented GUI and session load/save state (utilizing XML). Much of this is already underway. Generic control interface (using GValue, a container for different data types, int, float, string, etc). This will be used for MIDI controls, instrument effects, any object property, and all other things that can be thought of as a control. The GUI will utilize this for flexible control layouts, etc. Currently in the works. Flexible canvas widget. Move sample waveform display (loop setting) to use canvas so effects (envelopes, LFOs etc) can be overlayed on top. Also allow any generic control to be controlled on a canvas. A segment based waveform modeller (also canvas based). Use lines, curves and freeform segments to create sample waveforms with loop support. A nice side effect is multiple samples can be created at different tones with high precision. Instrument patch conversion GUI hooks for DLS2/gig files Additional patch formats (Virtual banks, Akai, GUS, etc) Auto sample tuning with fftw (plugin already written, no GUI yet) Free compression standard created for patch files using FLAC (a simple proof of concept SoundFont compressor has already been written, I'd like to extend the format to include other patch formats like DLS2). I feel like the architecture is ready for all these features. Just needs to be coded :) If you are still reading this, are you interested? I keep feeling like there really isn't that much left for all this to be realized, but sometimes it seems like its taking so long too. If I get some expressed interest in helping with development, I'll be quick to update the API docs on the site, post directions for checking out the swami-1-0 branch, todo stuff, etc. Cheers. Josh Green |
|
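For readers unfamiliar with GObject, here is a small self-contained example of the property/signal pattern the message above describes, written with current GLib conventions rather than the 2003-era API. The DemoZone type is invented for illustration and is not libInstPatch code; the point is that an object exposes properties and a view can subscribe to notify signals to hear about changes.

```c
/* Build with: gcc gobject-demo.c $(pkg-config --cflags --libs gobject-2.0) */
#include <glib-object.h>

/* A toy "zone" object with one property, standing in for the kind of patch
 * tree items described above.  Plain GObject boilerplate, not libInstPatch. */
#define DEMO_TYPE_ZONE (demo_zone_get_type())
G_DECLARE_FINAL_TYPE(DemoZone, demo_zone, DEMO, ZONE, GObject)

struct _DemoZone {
    GObject parent_instance;
    int root_note;
};

G_DEFINE_TYPE(DemoZone, demo_zone, G_TYPE_OBJECT)

enum { PROP_0, PROP_ROOT_NOTE };

static void demo_zone_set_property(GObject *obj, guint prop_id,
                                   const GValue *value, GParamSpec *pspec)
{
    DemoZone *self = DEMO_ZONE(obj);
    if (prop_id == PROP_ROOT_NOTE)
        self->root_note = g_value_get_int(value);
    else
        G_OBJECT_WARN_INVALID_PROPERTY_ID(obj, prop_id, pspec);
}

static void demo_zone_get_property(GObject *obj, guint prop_id,
                                   GValue *value, GParamSpec *pspec)
{
    DemoZone *self = DEMO_ZONE(obj);
    if (prop_id == PROP_ROOT_NOTE)
        g_value_set_int(value, self->root_note);
    else
        G_OBJECT_WARN_INVALID_PROPERTY_ID(obj, prop_id, pspec);
}

static void demo_zone_class_init(DemoZoneClass *klass)
{
    GObjectClass *oc = G_OBJECT_CLASS(klass);
    oc->set_property = demo_zone_set_property;
    oc->get_property = demo_zone_get_property;
    g_object_class_install_property(oc, PROP_ROOT_NOTE,
        g_param_spec_int("root-note", "Root note", "MIDI root note",
                         0, 127, 60, G_PARAM_READWRITE));
}

static void demo_zone_init(DemoZone *self) { self->root_note = 60; }

/* "notify" fires whenever a property changes; a GUI view can subscribe to
 * refresh itself, which is the change-notification idea described above. */
static void on_notify(GObject *obj, GParamSpec *pspec, gpointer user_data)
{
    g_print("property '%s' changed\n", g_param_spec_get_name(pspec));
}

int main(void)
{
    DemoZone *zone = g_object_new(DEMO_TYPE_ZONE, NULL);
    int note = 0;

    g_signal_connect(zone, "notify::root-note", G_CALLBACK(on_notify), NULL);
    g_object_set(zone, "root-note", 64, NULL);   /* triggers notify */
    g_object_get(zone, "root-note", &note, NULL);
    g_print("root-note = %d\n", note);

    g_object_unref(zone);
    return 0;
}
```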
From: Mark K. <mar...@at...> - 2003-03-18 00:49:14
|
On Mon, 2003-03-17 at 09:02, Steve Harris wrote: > > I've had a minor epiphany(sp?) having breifly used some subset of the > swami/fluidsynth combo I'd say we are stupid to go it out own way, we > should integrate with the existing stuff as far as possible. > > Using the swami UI made me realise just how much work has to go into it > and it appears (from a cursory glance) to be basicly generic, rather than > soundfont specific as I thought. > > - Steve Steve, Hi. You've been a bit more quiet. Good to see you. At the risk of creating some sort of mega-app, I generally agree. There's no need to create everything new. A sampler could leverage the work Josh has done and reuse a lot of that interface. It could even integrate into Swami, where Swami becomes the front-end for any number of soft synth apps... Cheers, Mark |
|
From: Steve H. <S.W...@ec...> - 2003-03-17 17:03:41
|
On Thu, Mar 13, 2003 at 06:44:55 -0800, Mark Knecht wrote: > Christian, > Impatience is cool. It makes things happen. > > There hasn't been much going on here. I'm just a user, not a developer, > but GigaStudio is my main sample player and I'd love to move my libraries > over to Linux one day. so... > > 1) If someone wrote a gig file reader, I'd test my 100+ libraries and make > sure they worked. > > 2) There is some very limited code out there for this project. I do not know > what it does, but if you accomplished #1, I'd help see if we could load them > into #2. I've had a minor epiphany(sp?): having briefly used some subset of the swami/fluidsynth combo, I'd say we are stupid to go it our own way, we should integrate with the existing stuff as far as possible. Using the swami UI made me realise just how much work has to go into it and it appears (from a cursory glance) to be basically generic, rather than soundfont specific as I thought. - Steve |
|
From: Marek P. <ma...@na...> - 2003-03-15 17:11:24
|
test |
|
From: Josh G. <jg...@us...> - 2003-03-15 08:08:51
|
There's a plug that readers might find interesting for the release of Swami 0.9.1 at the end of this email, if someone gets tired of reading it :) On Fri, 2003-03-14 at 12:46, Christian Schoenebeck wrote: > On 13 Mar 2003 10:55:02 -0800 > Josh Green <jg...@us...> wrote: > > > > I'm working on it right now actually with libinstpatch. I've already > > written the DLS runtime objects and RIFF parser and DLS loader. All of > > Oh, I remember you mentioned the Gig format would be based on DLS, are > you sure it is and if yes where did you get your information? Have you > found some specs? And what about the Akai formats? > Well, actually it was someone else that mentioned it was based on DLS2. When I look at a gig file in a hex editor, it does indeed identify itself as such. There is a lot of flexibility with DLS2 files, including conditional chunks, so I'm not sure if anything custom is added for .gig files or not. I'll find out pretty soon though, once I start actually testing the new DLS code. In regards to the Akai formats.. This is currently secondary on my list. There is still a bit of work that needs to be done in the area of generalizing Swami in the area of patch formats. Up until recently much of the code was still SoundFont centric. DLS2 will be the first additional format to be added. After this it should be easier to add additional formats to the libInstPatch/Swami API, with the result being an editable patch format that can be software synthesized (within the constraints of SoundFont synthesis in the case of using FluidSynth as the synthesizer). > > new features being added). If you would like to see this code for some > > reason, its in the swami-1-0 branch in Swami CVS > > Hmmm... I could not locate anything DLS related, but maybe I did not > look thoroughly enough. Have you already checked it in? > Yes it is checked in, just not in the main CVS "head" branch. If you are browsing CVS online with viewcvs, you'll want to select the "swami-1-0" branch in the selector at the bottom of the page. Here is the command you should use to checkout the swami-1-0 branch: cvs -z3 -d:pserver:ano...@cv...:/cvsroot/swami checkout -r swami-1-0 swami The only addition being "-r swami-1-0" from a regular checkout command. The swami-1-0 branch is quite different from the current "head" branch. libInstPatch is entirely GObject based, the GUI requires GTK2, and lots of other stuff. Like I mentioned before, its also not quite usable yet (too many new things, not quite integrated yet). > Regards, > Christian > If you have any interest in helping with Swami please let me know. I'm looking for developers. The learning curve might be a bit steep with the glib/GObject/GTK2 architecture, but the code documentation is pretty thorough and I'm willing to help fill in the gaps for someone who was dedicated to the task :) Cheers. Josh Green P.S. I just released Swami 0.9.1 to coincide with the recent FluidSynth 1.0.0 release. FluidSynth is much improved to the last iiwusynth version (its old name). JACK is now usable with Swami as well (FluidSynth does the real work). I use it myself with an Edirol PCR-50 USB keyboard which comes with lots of MIDI controls for modulating effects in real time. I hacked in a global modulators feature into Swami for this purpose, so a set of real time effect controls can be defined for all instruments. Swami: http://swami.sourceforge.net FluidSynth: http://www.fluidsynth.org |
|
From: Florian B. <fl...@ar...> - 2003-03-14 21:28:23
|
Hi there,
> From: "Mark Knecht" <mk...@co...>:
>
> Anyway, I'd encourage you to take you energy and get going
> on something you want to do. There are people waiting out here,
> like me, who just want to make music.
I agree to this statement.
It's funny, it seems that at most Linux music software projects half of
the guys are "musicians" (that do not know too much about programming)
who are waiting for the other half (the "tech guys", who normally care
more about efficient code than scales, harmonies or artistic use of FX)
to come up with something ;-).
Personally I'd love to build a (simple) stand-alone hardware sampler
based on Linuxsampler, without screen + mouse. Just to take it along
anywhere, plug in a MIDI-keyboard and play. Well, it may take some time
until then. :-) Unfortunately, I'm one of the musician guys - so my
only contribution could be to test something... :o).
Best regards,
Florian Berger, Leipzig, Germany
|
|
From: Christian S. <chr...@ep...> - 2003-03-14 20:46:31
|
On Thu, 13 Mar 2003 10:57:27 -0800 Mitchell <mgjohn@u.washington.edu> wrote: > Actually I have been experiencing similar frustrations. Also, I > don't really like the framework of the current distribution of > linuxsampler, since I would rather work in C (just a personal > preference), Why? Is it just because you're more used to C? Personally I think the OO way is cleaner and easier to read than having thousands of functions with cryptic names and data structures, especially as we will have a lot of data structures and functions working on them with actually similar tasks. That's just predestined for inheritance. But I would not insist on it, at least if it will help to bring some progress. > and also I don't understand where they want to go with > the code so I haven't contributed anything or don't realy know how. I guess you already browsed the CVS repository (http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/linuxsampler/). If you want to be able to check in, you of course have to register at sf.net first and Benno, Steve or Juan must then add you to the developers of the project. > I am on the very edge of writing my own sampler framework, so maybe > we could start our own homebrew and then later see what could be > incorporated back into linuxsampler. My thoughts, though I'm still hoping someone will reply. Regards, Christian |
|
From: Christian S. <chr...@ep...> - 2003-03-14 20:46:19
|
On 13 Mar 2003 10:55:02 -0800 Josh Green <jg...@us...> wrote: > On Thu, 2003-03-13 at 04:56, Christian Schoenebeck wrote: > > > > As nobody answered if and how I could participate in this project, I > > just can make a suggestion to e.g. write an Akai and / or > > Gigasampler import module, as I haven't seen that anyone's working > > on it yet. > > > > I'm working on it right now actually with libinstpatch. I've already > written the DLS runtime objects and RIFF parser and DLS loader. All of Oh, I remember you mentioned the Gig format would be based on DLS, are you sure it is and if yes where did you get your information? Have you found some specs? And what about the Akai formats? > new features being added). If you would like to see this code for some > reason, its in the swami-1-0 branch in Swami CVS Hmmm... I could not locate anything DLS related, but maybe I did not look thoroughly enough. Have you already checked it in? Regards, Christian |
|
From: Mark K. <mk...@co...> - 2003-03-13 19:40:10
|
> Actually I have been experiencing similar frustrations. Also, I > don't really like the framework of the current distribution of > linuxsampler, since I would rather work in C (just a personal > preference), and also I don't understand where they want to go with > the code so I haven't contributed anything or don't realy know how. > > I am on the very edge of writing my own sampler framework, so maybe > we could start our own homebrew and then later see what could be > incorporated back into linuxsampler. > > Otherwise, if someone would explain to me where and how I could > contribute, I could work on linuxsampler and contribute some code. > However, from reading the source that's there so far it seems like > most of the organization is still in peoples' heads :) > > -Mitchell Mitchell, et al., Making the presumption that I even know what the 'preferred' interface for LinuxSampler might be (which I don't), there are other sample player motifs that could be of big value in my studio. One would be something like Native Instruments' Battery, which is more specifically for playing one-shot samples vs. loop-based samples, and gets used for drums a lot. There was a newer app that popped up on the radar recently called hydrogen, which is a Linux-based drum pattern editor. If that was combined with something Battery-like, then we'd have a cool combination for making noise in the neighborhood. Anyway, I'd encourage you to take your energy and get going on something you want to do. There are people waiting out here, like me, who just want to make music. Good luck, Mark |