The control logic is the part that is presently (Jan 2017) still almost completely missing. Here is a draft of what it is supposed to become in the end.
The control logic is passed the parse tree from the parser after the tree has been completely built.
The parse tree consists of a few types of branches:
The timing dependencies in particular can become quite sophisticated. SoundComp is meant to allow for maximum flexibility here, which means that each voice must be able to have its own timing (tempo, ritardando, accelerando, ...), and each voice may contain synchronization markers at which one voice can "wait" for some event in another voice to happen.
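As a rough illustration of the kind of per-voice data this requires, a voice timing state might look like the following sketch. All type and member names here are assumptions chosen for illustration, not the actual SoundComp classes:

    #include <cstddef>
    #include <cstdint>
    #include <string>
    #include <vector>

    // Hypothetical sketch: one timing-relevant entry in a voice.
    // A voice is a sequence of notes/events, tempo changes and
    // synchronization markers.
    struct VoiceEntry {
        enum class Kind { Note, TempoChange, WaitForMarker, EmitMarker };
        Kind kind;
        double lengthInBeats = 0.0;     // for Note entries
        double targetTempoBpm = 0.0;    // for TempoChange entries
        std::string markerName;         // for WaitForMarker / EmitMarker
    };

    // Hypothetical per-voice timing state: each voice carries its own
    // tempo and its own absolute position in samples.
    struct VoiceTimingState {
        double currentTempoBpm = 120.0;     // local tempo of this voice
        std::uint64_t currentSample = 0;    // position of the next event
        std::size_t nextEntry = 0;          // how far timing is calculated
        std::vector<VoiceEntry> entries;    // the voice's content in order
    };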
It will get even more complicated if SoundComp at some future point becomes real-time enabled and reacts to external events in real time. Since we do not want to exclude this from the start, the timing calculation needs to be carried out while the whole processing is underway, with each event placed more or less relative to some previous event in the same or some other voice. Precalculating the start times of all events would make things easier, but would completely shut out any future real-time reaction option.
A precalculation is, however, possible and should be carried out for all passages that do not contain a synchronization reference from other voices and/or external input. When starting, all voice timings are calculated up to their first "wait". After each successfully passed "wait", the timing is calculated again up to the next "wait". Pieces of sound without external synchronization between voices will therefore still be completely precalculated.
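Reusing the hypothetical VoiceEntry/VoiceTimingState structures sketched above, the incremental precalculation could take roughly the following shape. This is only a sketch of the idea, not the actual implementation:

    // Advance the timing of one voice until it hits a "wait" entry or the
    // end. Returns the name of the marker the voice is now waiting for, or
    // an empty string if the voice ran to completion.
    std::string precalculateUntilWait(VoiceTimingState& v, double sampleRate)
    {
        while (v.nextEntry < v.entries.size()) {
            const VoiceEntry& e = v.entries[v.nextEntry];
            switch (e.kind) {
            case VoiceEntry::Kind::Note: {
                // constant tempo within a single note assumed in this sketch
                double seconds = e.lengthInBeats * 60.0 / v.currentTempoBpm;
                // ... here the note would be entered into the timing map ...
                v.currentSample +=
                    static_cast<std::uint64_t>(seconds * sampleRate);
                break;
            }
            case VoiceEntry::Kind::TempoChange:
                v.currentTempoBpm = e.targetTempoBpm;
                break;
            case VoiceEntry::Kind::EmitMarker:
                // ... notify voices waiting for e.markerName ...
                break;
            case VoiceEntry::Kind::WaitForMarker:
                ++v.nextEntry;
                return e.markerName;   // stop precalculation here
            }
            ++v.nextEntry;
        }
        return {};
    }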
As described above, the timing options are meant to be quite flexible; for example, a voice may contain different types of ritardandos/accelerandos (time-linear, tempo-linear, exponential, hyperbolic, and so on). This makes calculating the length of each event a bit more difficult, but it is necessary for a certain expressiveness.
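To give one concrete case: for a ritardando/accelerando whose tempo changes linearly over the beat position, the length of a passage follows from integrating the beat duration over the changing tempo. A small sketch of that calculation (function name and units chosen here for illustration):

    #include <cmath>

    // Length in seconds of a passage of `beats` beats during which the
    // tempo changes linearly (over beat position) from startBpm to endBpm.
    // With T(b) = startBpm + (endBpm - startBpm) * b / beats, the time is
    // the integral of 60 / T(b) db from 0 to beats, which evaluates to
    // 60 * beats * ln(endBpm / startBpm) / (endBpm - startBpm).
    double rampedPassageSeconds(double beats, double startBpm, double endBpm)
    {
        if (std::abs(endBpm - startBpm) < 1e-9)
            return 60.0 * beats / startBpm;            // constant tempo
        return 60.0 * beats * std::log(endBpm / startBpm)
               / (endBpm - startBpm);
    }

Each of the other ramp types (time-linear, exponential, hyperbolic) would need its own closed-form or numeric integration in the same spirit.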
The control logic contains a timing map into which upcoming events are entered as soon as their timing is determined.
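A natural representation for such a timing map is an ordered multimap keyed by the absolute start sample; the following is only an assumed sketch, not the actual data structure:

    #include <cstdint>
    #include <map>

    struct Event;   // an event together with its process network (omitted)

    // Hypothetical timing map: upcoming events keyed by their absolute
    // start sample. A multimap keeps them sorted, so the control logic
    // only has to look at the first entry to see whether something is due.
    using TimingMap = std::multimap<std::uint64_t, Event*>;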
The control logic finally contains the central "heartbeat" of SoundComp, i.e. the sample-stepping logic that generates the 2-phase trigger signal which lets all currently active process elements carry out their calculations and output shifting. After each sample cycle, the timing map is checked to see whether an event start is due. If so, the event's process network is connected to the voice adders/multipliers and to the heartbeat signal.
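Put together, the heartbeat could be a loop of the following shape. This is a minimal sketch reusing the TimingMap sketched above; the 2-phase trigger is represented as two explicit calls per sample, and all names are assumptions:

    #include <cstdint>
    #include <vector>

    // Hypothetical interface, only to show the structure of the loop.
    struct ProcessNetwork {
        void calculate()    {}   // phase 1: compute new output values
        void shiftOutputs() {}   // phase 2: make them visible downstream
    };

    void runHeartbeat(TimingMap& timingMap,
                      std::vector<ProcessNetwork*>& active,
                      std::uint64_t totalSamples)
    {
        for (std::uint64_t sample = 0; sample < totalSamples; ++sample) {
            // Two-phase trigger: first everything calculates, then
            // everything shifts, so no element sees a half-updated input
            // within one sample.
            for (ProcessNetwork* p : active) p->calculate();
            for (ProcessNetwork* p : active) p->shiftOutputs();

            // After the sample cycle, check whether an event start is due
            // and, if so, connect its process network to the heartbeat and
            // the voice adders/multipliers (connection details omitted).
            while (!timingMap.empty() && timingMap.begin()->first <= sample) {
                // Event* e = timingMap.begin()->second;
                // active.push_back(networkOf(e)); ... connect to adder ...
                timingMap.erase(timingMap.begin());
            }
        }
    }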
Each event is responsible for deleting these connections again on its own, once the sound processing of the event is over. This step is mandatory; failing to do so would severely slow down SoundComp (by a factor roughly proportional to the number of sequential events in the whole piece). This means that each event needs a notion of when it is "finished"; a problem that might influence the language itself, which might need to reflect this situation.

The duration of the event will usually be tied somehow to the generating trigger, but continue for a certain time after the trigger is reset. This "after-gate" time is necessary because the sound of an event usually needs to continue for a while after the trigger event is over; a cymbal or string, for instance, continues to sound after the hit or pluck is already over. Even for instruments like a piano, where the string is damped once the key is released, it takes a certain time after releasing the key until the damping reduces the oscillation to a level below the background noise. Note that prolonging effects like reverberation usually need not be considered here, since they are not calculated individually for each event, but rather in a more global, event-independent part for a mix of events. These effects therefore have a more global lifetime: in a first instance, they might be considered valid throughout the whole piece of generated sound. Reducing their lifetime is a less important optimization, since there are far fewer of these elements compared to the event-related elements.
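One simple way to give an event such a notion of being "finished" is to let it watch its trigger and its own output level, and disconnect itself once the trigger is released and the after-gate output has stayed below a silence threshold for a while. The following is a possible sketch; the threshold, the sample counts and all names are assumptions, not the actual SoundComp design:

    #include <cmath>

    // Hypothetical per-event finish detection: an event counts as finished
    // when its trigger has been released and its output has stayed below a
    // silence threshold for a given number of samples. The "after-gate"
    // time is therefore determined by the sound itself rather than by a
    // fixed constant.
    struct EventLifetime {
        bool   triggerActive       = true;
        double silenceThreshold    = 1e-5;  // roughly below background noise
        int    silentSamplesNeeded = 4800;  // e.g. 0.1 s at 48 kHz
        int    silentSamplesSeen   = 0;

        // Call once per sample with the event's current output; returns
        // true when the event should disconnect itself from the heartbeat
        // and the voice adders/multipliers.
        bool finished(double outputSample)
        {
            if (triggerActive) return false;
            if (std::fabs(outputSample) < silenceThreshold)
                ++silentSamplesSeen;
            else
                silentSamplesSeen = 0;
            return silentSamplesSeen >= silentSamplesNeeded;
        }
    };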