
#102 ADDsynth modulation overhaul

Milestone: Unaccepted
Status: open
Owner: nobody
Priority: 5
Updated: 2021-06-20
Created: 2016-12-03
Creator: unfa
Private: No

I think I've figured out a relatively simple way to lift ADDsynth modulation to another level and make it directly competitive with commercial synths like FM8 or Sytrus in terms of functionality and ease of use.

From what Mark McCurry told me, the audio buffers for the voices in ADDsynth are computed in order, from the 1st to the last.

Once the 1st voice is computed, the 2nd voice can access its buffer and use it for modulation.

The 8th voice has access to all previous voices - the first voice has access only to its internal modulation oscillator.
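
To illustrate the ordering (a minimal, hypothetical sketch - the names and structure are mine, not the actual ZynAddSubFX code): because voices are rendered one after another, a voice can only read the buffers of voices that were already computed earlier in the same block.

    #include <cmath>
    #include <vector>

    // Hypothetical sketch: voices are rendered in order, so voice `self`
    // may read the buffers that earlier voices already filled this block.
    struct Voice {
        std::vector<float> out;    // output buffer for the current block
        int   mod_source = -1;     // index of the modulating voice, -1 = none
        float mod_depth  = 0.0f;   // modulation amount
        float phase      = 0.0f;   // oscillator phase
        float phase_inc  = 0.01f;  // set from the note frequency elsewhere

        void render(const std::vector<Voice> &voices, int self, int nframes) {
            out.assign(nframes, 0.0f);
            for (int n = 0; n < nframes; ++n) {
                float mod = 0.0f;
                // only voices computed earlier in this block are usable sources
                if (mod_source >= 0 && mod_source < self)
                    mod = mod_depth * voices[mod_source].out[n];
                out[n] = std::sin(phase + mod);  // phase modulation
                phase += phase_inc;
            }
        }
    };

    // usage: for (int i = 0; i < (int)voices.size(); ++i)
    //            voices[i].render(voices, i, nframes);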

I wonder how FM synths like the Yamaha DX7 (or rather its open-source implementation, Dexed), Oxe FM Synth, or NI FM8 perform modulation that goes in all directions or even feeds back into the same operator?

Let's make a thought experiment:
(I'll call voice 1 "V1", voice 2 "V2" etc.)

I want to use V1 to modulate V2, and at the same time use V2 to modulate V1:

V1 --> (FM) V2
V2 --> (FM) V1

Once the buffer for V1 is computed, I can use that to modulate V2.

V1 --> (FM) V2

But if I want to modulate V1 with V2, I have no data - V2 hasn't been computed yet for this buffer!
However - I can get the previous buffer that V2 generated and use that instead!

V2 (previous) --> (FM) V1 (current)

This creates a delayed feedback loop, and the delay is as long as the buffer. If the modulation between the two voices is too strong, a catastrophic noise explosion will probably occur.
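
A sketch of what the back buffers could look like (hypothetical code, just to show the idea): each voice keeps a copy of its last finished block, and whenever the wanted modulation source hasn't been computed yet in the current block, the previous block is used instead, at the cost of one buffer of delay.

    #include <vector>

    // Hypothetical sketch of per-voice back buffers.
    struct VoiceBuffers {
        std::vector<float> current;   // being filled this block
        std::vector<float> previous;  // finished last block
    };

    // Choose the modulation source for target voice `t` reading from voice `s`.
    const std::vector<float> &mod_source(const std::vector<VoiceBuffers> &v,
                                         int t, int s)
    {
        // s < t: already rendered this block -> no extra delay
        // s >= t (including s == t, i.e. self-modulation): use the last
        // finished block -> one buffer of delay
        return (s < t) ? v[s].current : v[s].previous;
    }

    // After all voices are rendered, each voice swaps `current` into
    // `previous` so the next block sees it as the back buffer.

The same selection rule would also cover the self-modulation case in experiment no. 2 below (s == t always falls back to the previous block).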

My question for the developers:

Could all-directions cross-modulation in ADDsynth be implemented simply by adding a set of backbuffers for voices? Do real FM synths work this way? I guess so, as you can't run the same feedback loop for an infinite time inside one buffer - it makes sense to add a delay, otherwise the synths would never produce any sound. The small delay is unnoticeable anyway, unless you play the modulating operator and also use it for modulation elsewhere - but it's still a huge functionality gain for such a small drawback (which is inevitable, and no one has ever complained about it anyway).

Experiment no. 2:

I want to modulate V1 with itself, creating feedback modulation.

V1 (previous) --> (FM) V1 (current)

Unless the modulation is so strong that it causes the signal to keep gaining every buffer, it should be fine, right? Will this work?

Another thing is allowing multiple sources to be used for modulating a single voice. (Using one voice to modulate a few others already works right now.)

V1, V2 --> V3

This can be done with the MORPH modulation - it can mix two modulators - but it's very cumbersome, and it costs you an extra voice just to mix the two.
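
For illustration, summing multiple modulators could be as simple as this (hypothetical code; the averaging keeps the overall modulation depth comparable no matter how many sources are mixed):

    #include <vector>

    // Hypothetical sketch: sum several modulator buffers into one
    // modulation signal and average them so the depth stays comparable.
    std::vector<float> mix_mod_sources(const std::vector<std::vector<float>> &srcs,
                                       size_t nframes)
    {
        std::vector<float> mod(nframes, 0.0f);
        if (srcs.empty())
            return mod;
        for (const auto &src : srcs)
            for (size_t n = 0; n < nframes && n < src.size(); ++n)
                mod[n] += src[n];
        const float norm = 1.0f / srcs.size();
        for (float &x : mod)
            x *= norm;
        return mod;
    }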

This leads me to the idea that actually all modulation should be performed with previous buffers. This way every voice has access to every other voice, the modulation can be done in all directions, the needed voices can be summed to perform multi-operator modulation, and feedback modulation isn't a problem.

I've made a mockup of the Mod Matrix layout (attached) showing the (simplified) difference between what we currently have and what the backbuffers could add. The current matrix shows "x" where the modulation cannot occur without the backbuffers.

The mod matrix allows the user to dial in how much of each voice is used to modulate each other voice or fed into the ADDsynth global section (the audible output).

Every voice can use a different type of modulation, but all modulators would be mixed together and applied with that selected modulation type - because otherwise we'd need 5 matrices (or 5 tabs in our matrix), one per modulation type, and it'd require 5 times more processing:

Every voice * every voice * every modulation type.
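
A rough sketch of the data behind such a single shared matrix (hypothetical code - the names and layout are only illustrative): an 8x8 table of modulation gains plus one output gain per voice, with every source read from its previous-block buffer.

    // Hypothetical sketch of the proposed mod matrix: gains[src][dst] says
    // how much of voice `src` (read from its previous-block buffer) feeds
    // the modulator of voice `dst`; out_gain[src] feeds the audible output.
    constexpr int NUM_VOICES = 8;

    struct ModMatrix {
        float gains[NUM_VOICES][NUM_VOICES] = {};  // modulation routing amounts
        float out_gain[NUM_VOICES]          = {};  // send to ADDsynth global part

        // Build the modulation signal for one destination voice from the
        // previous-block buffers of all voices.
        void build_modulator(const float *const prev[NUM_VOICES], int dst,
                             int nframes, float *mod) const
        {
            for (int n = 0; n < nframes; ++n)
                mod[n] = 0.0f;
            for (int src = 0; src < NUM_VOICES; ++src) {
                const float g = gains[src][dst];
                if (g == 0.0f)
                    continue;
                for (int n = 0; n < nframes; ++n)
                    mod[n] += g * prev[src][n];
            }
        }
    };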

Image-Line Sytrus has two matrices, one for FM and the other for RM - but it only has 5 operators, not 8 like Zyn. FM8 also has 8 operators, but it only does FM (which actually behaves like PM in Zyn - the same applies to Sytrus).

So tell me - how crazy is this idea?

Should there be feedback protection to avoid wreaking havoc on the user's ears when they dial in too much feedback somewhere? Could the limits on feedback modulation be switchable off, to allow experienced (or risk-taking) users to go beyond sane values? With automation this could produce interesting results.
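
One possible (purely illustrative) form of such protection would be a soft limiter on the fed-back signal that can be switched off:

    #include <cmath>

    // Hypothetical feedback protection: tanh keeps the fed-back signal
    // bounded in [-1, 1] without a hard clip; bypassable for risk-takers.
    float limit_feedback(float x, bool protection_on) {
        return protection_on ? std::tanh(x) : x;
    }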

1 Attachment

Discussion

  • unfa

    unfa - 2016-12-03

    Another thing is: what to do with the internal Modulation Oscillators? Ditch them? Keep them? I think they could sometimes be useful for a very quick two-operator modulation, but then they would have to be completely omitted from the MOD matrix, as it would become a monster and a pain to operate with them included.

    Removing the internal modulation oscillators could be a cleanup, but it would also drastically alter the workflow and inevitably break compatibility with old patches (unless they can be converted to use external instead of internal modulators).

     
    • Mark McCurry

      Mark McCurry - 2016-12-07

      On 12-03, unfa wrote:

      > Another thing is: what to do with the internal Modulation Oscillators?
      > Ditch them? Keep them? I think they could sometimes be useful for a
      > very quick two-operator modulation, but then they would have to be
      > completely omitted from the MOD matrix, as it would become a monster
      > and a pain to operate with them included.
      >
      > Removing the internal modulation oscillators could be a cleanup, but
      > it would also drastically alter the workflow and inevitably break
      > compatibility with old patches (unless they can be converted to use
      > external instead of internal modulators).

      In the grand scheme of things it's relatively easy to maintain
      backwards compatibility if these are removed.
      There might be some sound changes as I think the existing
      ext.osc/ext.mod.osc features use the same underlying wavetable or at
      least the same random seed for generating their wavetables.

      It would be nice to simplify some of the overall workflow if it makes
      sense in the end.

       
  • Mark McCurry

    Mark McCurry - 2016-12-07

    > Could all-directions cross-modulation in ADDsynth be implemented
    > simply by adding a set of backbuffers for voices?

    Yes, though it creates a series of notable drawbacks.
    One is that the output becomes extremely sensitive to the sound buffer
    size, which should not significantly affect the output (though it
    currently does to a limited degree).

    > Do real FM synths work this way?

    Not from what I've seen in the past.
    You typically have a microscopic feedback delay, i.e. a fixed 1 sample,
    or in the case of some better implementations you solve out what it
    should be with a zero-sample or subsample delay.

    The typical zyn buffer size of 32-512 samples is simply too much here.

    Longer delays won't produce the 'correct' results, though it might end
    up sounding good enough.
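
    For illustration, a fixed 1-sample feedback delay looks roughly like
    this (simplified, hypothetical code, not how any particular synth
    implements it):

        #include <cmath>

        // Hypothetical sketch of 1-sample operator feedback: the operator
        // is phase-modulated by its own previous *sample*, not a previous
        // block.
        struct FeedbackOp {
            float phase    = 0.0f;  // phase in [0, 1)
            float prev_out = 0.0f;  // last output sample

            // freq and sr in Hz, fb = feedback depth
            float tick(float freq, float fb, float sr) {
                const float two_pi = 6.2831853f;
                const float out = std::sin(two_pi * phase + fb * prev_out);
                prev_out = out;
                phase += freq / sr;
                if (phase >= 1.0f)
                    phase -= 1.0f;
                return out;
            }
        };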

    > Experiment no. 2:
    >
    > I want to modulate V1 with itself, creating feedback modulation.
    >
    > V1 (previous) --> (FM) V1 (current)
    >
    > Unless the modulation is so strong that it causes the signal to keep
    > gaining every buffer, it should be fine, right? Will this work?

    It will do something, but it's not going to sound anything like
    self-modulating FM.

    > Another thing is allowing multiple sources to be used for modulating
    > a single voice. (Using one voice to modulate a few others already
    > works right now.)
    >
    > V1, V2 --> V3

    To be clear, you're asking for
    modulation-source_{V3} = 1/2 * (mod-out_{V1} + mod-out_{V2})
    That sounds like a pretty reasonable extension given the mod matrix
    discussed below.

    > I've made a mockup of the Mod Matrix layout (attached) showing the
    > (simplified) difference between what we currently have and what the
    > backbuffers could add. The current matrix shows "x" where the
    > modulation cannot occur without the backbuffers.

    Presenting a modulation matrix would be a good way of expressing this
    overall functionality to the users.
    Using back buffers, though, would result in subpar audio output and a
    system excessively sensitive to audio driver settings.

    > The mod matrix allows the user to dial in how much of each voice is
    > used to modulate each other voice or fed into the ADDsynth global
    > section (the audible output).

    Ah, so something beyond a binary matrix - you want control over the
    gain at each node.
    I can see how that would be useful, though a multi-tier UI might make
    sense here, as I'd imagine people starting out would only expect
    routings and not gains.

    > Every voice can use a different type of modulation, but all
    > modulators would be mixed together and applied with that selected
    > modulation type - because otherwise we'd need 5 matrices (or 5 tabs
    > in our matrix), one per modulation type, and it'd require 5 times
    > more processing.

    It would be a lot more expensive than that.
    If you compare a non-modulating voice to an FM voice, the latter is
    already 2-5x more expensive.

    > So tell me - how crazy is this idea?

    Let's cover the ideas presented in this first post:
    1. Present the external modulator configuration as a routing matrix
    2. Allow multiple voices to be mixed via the routing matrix
    3. Allow self modulation
    4. Allow modulation of previous voices to create arbitrary feedback
    paths

    Idea 1 sounds like a solid idea and a good usability enhancement which
    should make this functionality easier to understand.

    Idea 2 sounds like a simple extension to the existing functionality and
    it's expressed well by the routing matrix.

    Idea 3 is interesting, but the technical solution proposed would likely
    draw complaints due to variability and to not sounding like other
    synths, as it would not technically do the modulation the way people
    expect.
    While the proposed implementation details for idea 3 don't seem like
    the right choices, the idea as a feature is quite interesting, though
    it requires changing a significant portion of code.
    Modulation is already plagued by issues such as high levels of aliasing,
    so a significant code rework would need to be done here to get
    satisfactory results.

    Idea 4 depends upon the technical solution of idea 3.
    Aliasing becomes extremely significant here as it can feed back upon
    itself.
    Expect very significant DSP changes to be needed before the current
    code gives satisfactory results here.

     
  • Mark McCurry

    Mark McCurry - 2021-06-20
    • labels: ADDsynth, ADsynth, modulation, mod-matrix --> migration-candidate
     
