I think my understanding was essentially the same as yours, with the
exception of UG != node.
michael.gogins@... wrote on 2/8/09 11:15 PM:
> A music synthesizer or processor is, always, a signal flow graph. The arcs
> are signal flows from node to node. It is important to understand that a
> unit generator is not a single node. Each input and output of a unit
> generator is its own node, an inlet or outlet node.
I can see how this is necessary in the most abstract conception of the graph
which shows all of the detail of every operation (i.e. the internal workings
of the UGs). But it seems to me that, in implementing this idea within
Csound, the primary node type will be a single UG with a number of
inlets and outlets. Any algorithms in Csound for traversing the graph or
analyzing dependencies for parallelization (etc.) will need to deal with UGs
as nodes, will they not?
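To sketch what I mean (a Python sketch with invented names, not actual Csound structures): the UG is the unit that dependency analysis sees, but it carries its inlet/outlet ports with it, so the port-level detail of your model is still recoverable:

```python
# Hypothetical sketch: a unit generator as a single graph node with named
# inlet and outlet ports, rather than one graph node per port.

class UGNode:
    def __init__(self, name, inlets, outlets):
        self.name = name
        # each inlet records its sources as (source node, source outlet) pairs
        self.inlets = {i: [] for i in inlets}
        self.outlets = set(outlets)

    def connect_from(self, src, src_outlet, inlet):
        assert src_outlet in src.outlets and inlet in self.inlets
        self.inlets[inlet].append((src, src_outlet))

    def upstream(self):
        # dependency analysis works at the granularity of whole UGs
        return {src for sources in self.inlets.values() for src, _ in sources}

osc = UGNode("oscili", inlets=["amp", "freq"], outlets=["out"])
filt = UGNode("moogladder", inlets=["in", "cutoff"], outlets=["out"])
filt.connect_from(osc, "out", "in")
print(filt.upstream() == {osc})  # True
```

The port-to-port wiring is kept, but graph traversal only ever visits UG-sized nodes.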
> A signal flow graph is a special case of a data flow language. In a data
> flow language, the nodes are operations and the arcs are data flows from
> operation to operation. For a data flow language to be Turing complete, as
> Csound is, the operations must include sequence, branch depending on logical
> value, and loop.
> Because the graph is directed, the order of execution of the nodes and
> signal flows can be determined simply by traversing the graph.
Sure, I understand what graphs (and DAGs) are. Although it occurs to me
that Csound graphs are only acyclic due to the convention that instruments
execute in numerical order. If we take that away, then there may be some
situations where the graph has cycles that need to be broken either by some
sort of heuristic or by hints from the user.
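For example, a standard Kahn-style topological sort would give us both the execution order and the set of nodes stuck in a cycle, which could then be reported to the user or handed to a cycle-breaking heuristic (Python sketch, names invented):

```python
# Sketch: order nodes by signal-flow dependency with Kahn's algorithm.
# Once the instrument-number ordering convention is dropped, any nodes
# left over after the sort are involved in a cycle.

from collections import deque

def schedule(nodes, edges):
    """nodes: iterable of node ids; edges: list of (src, dst) signal flows.
    Returns (execution_order, nodes_stuck_in_cycles)."""
    indegree = {n: 0 for n in nodes}
    succ = {n: [] for n in nodes}
    for src, dst in edges:
        succ[src].append(dst)
        indegree[dst] += 1
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in succ[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    cyclic = [n for n in nodes if n not in order]
    return order, cyclic

print(schedule(["osc", "filt", "outs"], [("osc", "filt"), ("filt", "outs")]))
# (['osc', 'filt', 'outs'], [])
print(schedule(["a", "b"], [("a", "b"), ("b", "a")]))
# ([], ['a', 'b'])
```

In the second call the feedback loop leaves both nodes unscheduled, which is exactly where a heuristic or a user hint would have to step in.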
> An instrument in a signal flow graph is also actually a subgraph, not a single
> node. Each signal flowing into the instrument is an inlet port, itself a node.
> Each signal flowing out of the instrument is an outlet port, itself a node.
> Max/MSP and Pure Data show this very clearly. An instrument can be considered
> a unit generator that is required to have at least one control inlet and one
> audio outlet. In Csound, an instrument must be able to dynamically allocate
> multiple instances of itself to handle overlapping control signals.
Sure, I don't think Csound instruments are part of the graph -- it's the
instrument instances that are. In addition, I have been assuming that
instrument instances in Csound are multiple nodes (subgraphs), as you say.
But I disagree with the idea that instruments have to have "at least one
control inlet and one audio outlet." I think that the limited conception of
an instrument in Csound -- an object that primarily takes an array of
constant pfields as input with optional connections to the orchestra-wide
audio inputs/outputs -- is one of the problems we are trying to solve in
Csound 6. This conception is limiting because the instrument is the only
object that can be instantiated from the score as part of the signal
graph. I would like to see instruments have the same flexibility as UDOs
for defining signal inputs and outputs so that we have the possibility of
hooking together instrument instances directly in the graph in ways that are
only indirectly possible right now.
For example, it would be nice to be able to dynamically assign inline
effects to particular instrument instances from score statements without
having to set up zak or bus channels in advance and then create a system for
managing how to patch them together. To make instruments more readily
reusable and sharable, Csound should provide a flexible patching
system that is independent of instruments, i.e. the signal connections
between them should happen outside the instrument code block, not within it.
This system should also eliminate the need to worry about the numerical
order of instruments.
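A rough sketch of the kind of thing I have in mind (Python, with purely hypothetical names -- none of this is existing Csound API): the engine owns a table of instance-level connections, and score events add or remove entries rather than instruments hard-coding their routing:

```python
# Hypothetical sketch of a patching layer kept outside instrument code.
# Instruments stay self-contained; routing between instances lives in a
# table the engine consults when building the graph.

class Patcher:
    def __init__(self):
        # entries are (src_instance, outlet_name, dst_instance, inlet_name)
        self.connections = []

    def connect(self, src, outlet, dst, inlet):
        self.connections.append((src, outlet, dst, inlet))

    def disconnect(self, src, outlet, dst, inlet):
        self.connections.remove((src, outlet, dst, inlet))

    def sources_for(self, dst, inlet):
        # an inlet may collect several sources; the engine can sum them
        return [(s, o) for s, o, d, i in self.connections
                if d == dst and i == inlet]

patch = Patcher()
patch.connect("osc.1", "out", "reverb.1", "in")  # e.g. driven by a score event
patch.connect("osc.2", "out", "reverb.1", "in")
print(patch.sources_for("reverb.1", "in"))
# [('osc.1', 'out'), ('osc.2', 'out')]
```

Because the table, not the instrument code, defines the wiring, the same instrument definition could be reused in any routing, and execution order would fall out of the graph rather than out of instrument numbers.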
> Very rarely does a synthesizer follow this signal flow graph design all the
> way down to elementary arithmetic and logic operations. More commonly, the
> unit generators are "black boxes" containing C code whose public interface
> is a set of inlet nodes and outlet nodes. If you will, N inlet nodes are
> collected by one "black box" processor node, which fans out to M outlet
> nodes.
> In some DSP graphs, there are special unit generators for mixing. But this
> is not necessary. In other DSP graphs, the inlet nodes can sum or collect
> any number of connecting arcs from outlet nodes, as analog gear can.
Sure. In Csound's case, it might be easiest to have nodes for mixing in the
graph, but these would be implied by routing/patching of the user's
instruments and not have to be explicitly specified.
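Something like this, in other words (Python sketch, with plain lists standing in for audio buffers): the "mixing node" is just a summing inlet inserted wherever the patching fans in more than one source.

```python
# Sketch: an inlet that sums any number of incoming signals, so mixing
# nodes are implied by the routing rather than specified by the user.

def mix_inlet(sources, nsamps):
    """Sum the output buffers of all sources connected to one inlet."""
    out = [0.0] * nsamps
    for buf in sources:
        for i, s in enumerate(buf):
            out[i] += s
    return out

a = [0.5, 0.5, 0.5, 0.5]
b = [0.25, 0.25, 0.25, 0.25]
print(mix_inlet([a, b], 4))  # [0.75, 0.75, 0.75, 0.75]
```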
I guess despite all of your explanations I am still confused by the DLI
idea.
>> michael.gogins@... wrote on 2/8/09 11:38 AM:
>>> Modifications for DLI are logically independent of modifications for DAGUG.
>>> A unit generator in a DAGUG could have an abstract interface suited for
>>> loading from a shared library, and so could a DLI. Thus, a DAGUG could and
>>> should have two types of node interfaces, of which one would be a DLI.
Why would a dynamically loaded instrument be loaded from a shared library?
I am assuming the DLI idea is meant to be a means to add or replace
instrument templates while Csound is performing. I think that some of the
changes required to implement DLIs, such as retaining the names of orchestra
variables, will make it easier to implement flexible DAGUGs. And I am
guessing that implementing DAGUGs with my suggestions for making instruments
more like UDOs will require such significant changes to instrument templates
that it might be wise to consider these changes when implementing DLIs.
I will have to look at the code much more closely to get a sense of what is
really required, though.