SoundComp Language

SoundComp contains a kind of preprocessor that is meant for manipulation purely on the text side,
i.e. as a means to abbreviate things and to avoid being overly redundant.
This preprocessor is not really part of the language itself and is not described in this chapter.

The SoundComp Language is the language that the compiler expects to find in the text that the user provides.
Text written in this language is then compiled to actual sound output.
The language needs to have means to express every aspect of the sound and music that SoundComp is meant to create.
Information needs to be provided in a strictly formal and unambiguous way.

Necessary information includes:
- processes: all parameters that the sound generation needs to produce the sound of the different voices/instruments
- mixing: all aspects of per-instrument and global sound processing ("mixing", "production")
- notes: information on which sound has to appear where, at what pitch and with what parameterization

In the file "tools/soundcomp.grammar" that is being created on building SoundComp you find a formal description of all language elements; however this only helps in understanding the syntax in concrete cases but will not easily make you understand all the concepts.
Nevertheless you may find yourself looking into this file every now and then when writing SoundComp text, until you have memorized most of the necessary structures or have created a library to include elements that you repeatedly need.
Elements in capitals denote language terminals, i.e. the most simple elements (keywords, numbers, primitive symbols...). Note that outside this formal description, terminals usually are not written in uppercase.
Elements in lowercase letters denote non-terminals, i.e. elements that are being composed by combining other elements (and in the end of the resulting grammar tree, composed of terminals).
Actually this file is a version of the parser control file "tools/grammar.y" simplified by reducing all SoundComp internal actions from the grammar rules.
Do not mess up this type of description with the probably more useful excerpts in this page and its subpages, where terminals are written as they appear in the language (lowercase) and non-terminals are enclosed by < and > and always explained.
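
As a purely invented illustration of the difference between the two notations (this exact rule is not taken from the real grammar file), a rule in "tools/soundcomp.grammar" might look like

    processdef: PROCESS IDENTIFIER '{' statementlist '}'

whereas the excerpts in this page and its subpages would present the same structure as

    process <identifier> { <statement list> }

with terminals written in lowercase as they appear in the language and each non-terminal enclosed in < and > and explained in the surrounding text.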

All terminals are recognized by the scanner, which is generated from the flex control file "tools/soundcomp.flex", so to get an exhaustive list of terminals (keywords etc.) you can also look in there.
Technically, most terminals represent either signal processing elements (mathematical operations) or event data like notes.
For each type of signal processing element there is a dedicated terminal symbol.

-- preliminary information --

The SoundComp language consists of several stanzas that relate to the aforementioned pieces of information.

The overall structure roughly follows the list of necessary information given above.

At the beginning there is some global parameterization: most prominently, digital sound needs a defined sample rate and possibly a start tempo, but more global parameters may be added in the future.
Other globally available data are user-defined scales.
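
To make the role of these two parameters concrete, here is a minimal sketch in Python (not SoundComp code; the names are chosen for illustration only) of how sample rate and tempo together determine how many samples a note occupies:

    # Minimal sketch (Python, not SoundComp): sample rate and tempo together
    # fix the length of a note in samples.
    SAMPLE_RATE_HZ = 44100   # global sample rate
    TEMPO_BPM = 120          # global start tempo (quarter notes per minute)

    def note_length_in_samples(note_value: float) -> int:
        """note_value in quarter notes, e.g. 1.0 = quarter note, 0.5 = eighth note."""
        seconds_per_quarter = 60.0 / TEMPO_BPM
        return round(note_value * seconds_per_quarter * SAMPLE_RATE_HZ)

    print(note_length_in_samples(1.0))   # 22050 samples for a quarter note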

Then follows a list of "processes", each of which describes how a single note/event of a voice is created from a signal processing network.
Additionally, for each "instrument" there is an instrument-global process definition that is applied to the sum of all simultaneous signals of a voice.

At the end of the signal generation description there is a special "totally global process" that describes how to connect all these processes to an overall "mix".
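
As a rough mental model of how these three levels nest, here is a Python sketch with invented names and trivial placeholder processing (it is not SoundComp syntax and does not mirror the real implementation):

    # Rough mental model of the three processing levels (Python sketch,
    # invented names, trivial placeholder DSP -- not SoundComp syntax).

    def mix_sum(signals):
        """Add several equal-length sample lists element by element."""
        return [sum(samples) for samples in zip(*signals)]

    def note_process(event):
        """Per-note process: turn one note/event into a signal.
        Here just a constant-amplitude placeholder of 4 samples."""
        return [event["velocity"]] * 4

    def instrument_process(note_signals, gain=0.5):
        """Instrument-global process: applied to the sum of all simultaneous
        signals of one voice/instrument (placeholder: a simple gain)."""
        return [s * gain for s in mix_sum(note_signals)]

    def global_process(instrument_signals):
        """'Totally global' process: combine all instrument outputs into the mix."""
        return mix_sum(instrument_signals)

    notes = [{"velocity": 1.0}, {"velocity": 0.5}]   # two simultaneous notes of one voice
    voice = instrument_process([note_process(n) for n in notes])
    print(global_process([voice]))                   # [0.75, 0.75, 0.75, 0.75]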

Before defining tunes, you may optionally define "scales", i.e. tables that map event names to other parameters (usually to a pitch/frequency or, in some cases, to a sample file). A standard "well tempered scale" is already predefined and used as the default, so you need not specify it.
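
For reference, the pitch values of such an equal-tempered ("well tempered") scale follow from a single formula; the following Python sketch only illustrates the mathematics, the predefined table in SoundComp may differ in naming and reference pitch:

    # Equal-tempered pitch calculation (Python sketch; only the math is shown,
    # not SoundComp's actual predefined table).
    A4_HZ = 440.0              # common reference pitch
    SEMITONES_PER_OCTAVE = 12

    def equal_tempered_frequency(semitones_from_a4: int) -> float:
        """Frequency of the note lying the given number of semitones above
        (positive) or below (negative) A4."""
        return A4_HZ * 2.0 ** (semitones_from_a4 / SEMITONES_PER_OCTAVE)

    print(round(equal_tempered_frequency(3), 2))    # C5 ~ 523.25 Hz
    print(round(equal_tempered_frequency(-9), 2))   # C4 ~ 261.63 Hz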

After thus having defined the "how to sound" comes the "what to sound", i.e. the "voices" as sequences of notes and events. There will be constructs to allow some parallel action within a voice, for easier notation of chords and similar things that you will, for the sake of simplicity, not want to write as a collection of separate voices.
Among the notes there can also appear some intonation information: a "velocity" parameter that you might use in your process network, and some event-internal timing information that you can use, for example for keyed instruments, to define when during the duration of a note a "key" is "pressed" and "released". Both can be specified globally for a whole voice or changed for each single note event.
Sometimes you will even want the "key press" to happen a small time span before the actual note event time (negative delay). This is, of course, not compatible with real-time playing that reacts to external control (MIDI or similar, if that is ever implemented in SoundComp), as it would be a breach of causality.
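
A hedged sketch of how such per-event data could be modelled (Python with invented field names, not SoundComp syntax); note that a negative key-press delay simply means the computed signal starts before the nominal note time, which is only possible because the whole timeline is known in advance:

    # Sketch of per-event intonation/timing data (Python, invented field names --
    # not SoundComp syntax).
    from dataclasses import dataclass

    @dataclass
    class NoteEvent:
        start_beat: float               # nominal position of the note in the voice
        duration_beats: float
        velocity: float = 1.0           # intonation parameter usable in the process network
        key_press_delay: float = 0.0    # offset of the "key press" relative to the start;
                                        # may be negative (press slightly before the note)
        key_release_delay: float = 0.0  # offset of the "key release" relative to the end

        def key_press_beat(self) -> float:
            return self.start_beat + self.key_press_delay

    # A note whose key is "pressed" a sixteenth before its nominal start:
    n = NoteEvent(start_beat=4.0, duration_beats=1.0, key_press_delay=-0.25)
    print(n.key_press_beat())   # 3.75 -- impossible to honour when reacting to live input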

Also, you might insert one-shot events that trigger certain reactions in your process network (think of it, for example, as the point in time when the player of an electric guitar pushes a button or turns a knob to change the sound).

Furthermore, among the event list of a voice there can appear more global timing information such as a tempo change. A tempo change can apply globally, i.e. affect all voices simultaneously in the same manner (which is actually the most common way to use it), or apply to this voice alone. The latter rarely makes sense, but in a certain piece of sheet music I have seen a comment over a guitar voice like "play the following 12 bars about 3% slower than the rest of the band, so that the last 5 notes occur one bar delayed compared to the other instruments", and such constructs are then also possible.
Side note: such a way of playing easily happens unintentionally to beginners, but it is extremely difficult to reproduce reliably when done intentionally.
A band that can do this reproducibly, even during live sessions, exhibits a high level of expertise.

Since after such a part one voice is out of sync, it must be possible to get it back in sync without painful timing calculations. Therefore we also need synchronization commands: undefining a voice-local tempo, defining synchronization events, and commands that let a voice wait for exactly such an event happening in another voice.
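
A hedged arithmetic sketch (Python, invented numbers, not SoundComp syntax) of why resynchronization is needed: a voice-local tempo changes how much real time one bar of that voice consumes, so its clock drifts away from the global clock until a synchronization event brings it back:

    # Why a voice-local tempo needs explicit resynchronization
    # (Python sketch, invented numbers -- not SoundComp syntax).
    GLOBAL_BPM = 120
    LOCAL_BPM = 120 * 0.97          # this voice plays about 3% slower
    BEATS_PER_BAR = 4

    def bar_seconds(bpm):
        return BEATS_PER_BAR * 60.0 / bpm

    drift = 0.0
    for bar in range(12):           # 12 bars with the voice-local tempo in effect
        drift += bar_seconds(LOCAL_BPM) - bar_seconds(GLOBAL_BPM)
    print(round(drift, 3))          # ~0.742 s behind the other voices

    # A synchronization event lets the voice wait (or be shifted) so that this
    # drift becomes 0 again instead of forcing the user to calculate it by hand.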


Related

Wiki: Language-globalprocess
Wiki: Language-process
Wiki: Language-voice
