From: Vesa <di...@nb...> - 2014-03-22 11:17:34
On 03/22/2014 12:30 PM, Johannes Lorenz wrote:
> I am not sure if this is true. Actually, I have tracks with 30 ZASF
> entities, and lmms takes only 6 % CPU power (no matter if played, or not).

Yes, and that's ZASF as it is now. Then imagine adding all the functionality of all the existing instruments into the mix. CPU power isn't the only consideration; memory usage would go up substantially as well.

> Also, I guess, you can always select in a synth which parts of it should
> be used, and which not. A good example might be ZASF (it has 3 basic
> synths, you can select those that you need).

So then, instead of having several synths which you can select in the instrument plugins tab and add to the project, we'd have one instrument in the tab which you add to the project, then open it and select a synth inside it. How is that any better in practice? We'd still have all the different synths, just with a much worse GUI.

>> - code complexity: this would be a nightmare to maintain
> Probably it would be better, and not worse: we could handle all subparts
> modularly. So with N subparts and 1 synth, we get N modules. With p
> synths of N subparts each, we get pN modules.

No, it wouldn't be any better. Trust me. And here's the thing: lots of the parts of the instruments are ALREADY modular. We already reuse lots of code in instruments. Firstly, all of the instruments are based on the Instrument class, which handles things like volume, pitch, etc. and provides function calls for getting per-note buffer output. Many of the instruments themselves use objects and functions provided by LMMS: the Oscillator class for rendering, which comes with built-in modulators and waveforms; the SampleBuffer class for playing sample data or custom waveforms; and so on. Then each native instrument also gets to use the built-in sound-shaping class, which is visible to the user as the ENV/LFO tab.
It's a module that all native instruments use for sound shaping, and this is again very good code reuse: all the native (non-MIDI-based) instruments use the same code for filtering. That's the thing: LMMS already has what you hope to achieve with this. LMMS comes with great ready-made building blocks for creating synths, effects, etc. This is great for code reuse, but also because it makes it easy even for less experienced people to write great new plugins for LMMS. Look at GIMP: there are thousands of plugins, scripts, etc. available for it. GIMP doesn't ship with all of them and doesn't try to mash them all together into one multitool; instead, there's a thriving community of users developing and sharing plugins and scripts. I'm hoping LMMS will one day be in the same situation, with so many native LMMS plugins that it wouldn't be convenient to ship them all by default; instead, the bulk of them could be downloadable on the sharing platform or elsewhere for users to share.

>> - compatibility: implementing new features in a backwards compatible
>> way would become a nightmare
> Why?

Well, right now, if you want to add new functionality for producing sound, you can just add a new instrument plugin without breaking anything in existing instruments; they keep working in backwards-compatible ways. But think about one huge monolithic beast: if you have to change something, it affects all of it, and it quickly gets out of hand when we have to write compat code for a zillion old versions of this multi-instrument. It's safer to keep things modular. Think of the UNIX philosophy: do one thing and do it well.

> See my point about code complexity. If we still want different synths,
> they can easily share their modules, so we get more modularity, not less.

We don't get more modularity by merging things together. That's not how modularity works.

>> Vibed actually has a wavegraph, and allows loading samples as waveforms.
>> You can use any waveform you want in Vibed.
> Yes, but this is not useful.

It is useful for using custom waveforms in Vibed. You can draw any waveform with the mouse in Vibed, or you can load a sample file as a waveform, just like you can in BitInvader. You complained about not being able to use custom waveforms in Vibed; I just pointed out that you can indeed do this.

> In ZASF, I can change a waveform's parameter, then hear, then change a
> bit, hear, etc. If I want to load waveforms, I'll need to first generate
> them (ZASF would be the best choice, I guess), then save, and then load
> them. This is a lot of work to find a suitable waveform.

So? That's the way ZASF works. It's an additive synth; you're not supposed to think of it in terms of "waveforms". You need to think in terms of fundamentals and harmonics when you do additive synthesis.

> It gets worse if I want to modulate the waveform while playing. This is
> impossible, since the WAV file is static.

If you want to modulate a custom waveform, you can load it up in triple osc and modulate it there (or use my new synth, once it's released). It doesn't matter that it came from a WAV file; once you load it up as a waveform, it becomes just another oscillator waveform.

> With wavetable I just meant that the basic oscillator (which is the
> first thing in an ad synth) switches between different waves. So this
> could be part of an adsynth. These things are, AFAIK, not different;
> they are independent.

That's not what wavetable means. Wavetable means, simply put, a digital oscillator that plays back a custom waveform stored in the oscillator's memory. The graph is the "wavetable" and is used as the waveform. BitInvader is a one-oscillator wavetable synth. An oscillator switching between different waveforms would be, I don't know, waveform modulation or something. Or just plain stream mixing, depending on how it's implemented, I guess.
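Since the term keeps coming up, here's roughly what I mean, as a toy sketch in Python (illustrative names only, not actual LMMS classes): the stored table IS the waveform, and playback is just a cheap phase-accumulator lookup.

```python
import math

class WavetableOsc:
    """Toy wavetable oscillator: the stored table IS the waveform."""

    def __init__(self, table, sample_rate=44100):
        self.table = table            # one cycle, drawn by hand or sampled
        self.sample_rate = sample_rate
        self.phase = 0.0              # position in the table, fractional

    def render(self, freq, nframes):
        """Produce nframes samples at the given frequency via table lookup."""
        out = []
        # How far to step through the table per output sample:
        step = freq * len(self.table) / self.sample_rate
        for _ in range(nframes):
            out.append(self.table[int(self.phase)])  # truncating lookup
            self.phase = (self.phase + step) % len(self.table)
        return out

# A hand-drawn "graph" of 16 points (here: one sine cycle):
table = [math.sin(2 * math.pi * i / 16) for i in range(16)]
osc = WavetableOsc(table)
buf = osc.render(freq=440.0, nframes=64)
```

BitInvader's wavegraph works on the same principle: whatever you draw into the graph becomes the table the oscillator reads from.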
Additive synthesis uses multiple sub-oscillators and *adds* their output together to form a single audio stream. It's essentially how analog synths with multiple oscillators work, too. You need to understand that all sound is formed of sine waves, and you can create literally any waveform by adding together enough sine waves of different frequencies. The downside of additive synthesis is that it tends to take more CPU power than simple wavetable synthesis, because it uses many oscillators to create one voice, whereas a wavetable oscillator just looks values up from its table. So they're different things; neither is "better" or "worse" than the other. They're just used for different purposes, to achieve different types of sound.

> There are many things that probably can not go into FX. Some use
> properties of single keys pressed, like frequencies. An effect does not
> know when a key was pressed, nor does it know its base frequency. I
> think you can not put WT synthesis, pulse/frequency modulation,
> portamento etc. into an effect.

No, you can't. But so what? That's why we have the ENV/LFO tab. The ENV/LFO works on a per-note basis and can be aware of note frequency, velocity, etc. We can put pitch modulation, portamento, etc. there, and this has been discussed earlier. For the rest, we can just write different instruments for different purposes. We can use controllers to control almost any parameter of every instrument. Here's a challenge: show me anything that can be done with Massive or any such monolithic synth that couldn't be achieved with current LMMS instruments and effects.

> The "small" synths are good for learning,

They are good for a lot of things, even advanced things. By combining, mixing and matching them, you can do almost anything imaginable.

> and removing them violates backward compatibility. I just think we
> should start re-using modules until we get close to having only one
> synth.
Well then you've already got your wish; like I said above, we already re-use a lot of modules under the hood. But by all means, code your beast of a synth. I'd love to see what you come up with. There can never be too many instruments to choose from.
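P.S. To make the additive-vs-wavetable comparison above concrete, here's a toy sketch (plain Python, nothing LMMS-specific) that approximates a square wave by summing odd sine harmonics; every extra harmonic brings the sum closer to the target shape, at the cost of running one more oscillator per voice, which is exactly the CPU trade-off I mentioned.

```python
import math

def additive_square(nharm, npoints=256):
    """Approximate one cycle of a square wave with nharm odd sine harmonics.

    Fourier series of a square wave: (4/pi) * sum of sin(2*pi*k*t)/k
    over odd k. Each term is one sub-oscillator in additive terms.
    """
    out = []
    for n in range(npoints):
        t = n / npoints
        s = sum(math.sin(2 * math.pi * k * t) / k
                for k in range(1, 2 * nharm, 2))   # k = 1, 3, 5, ...
        out.append(4 / math.pi * s)
    return out

coarse = additive_square(3)    # 3 harmonics: audibly "rounded" square
fine = additive_square(50)     # 50 harmonics: much closer to a true square
```

A wavetable oscillator would get the final shape with a single table lookup per sample; the additive approach re-evaluates every sine every sample, which is why it costs more CPU per voice.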