Tsunami Programming Language / News: Recent posts

added: description of class hierarchy

i added a text file to the file repository and the svn repository, called "CLASS_HEIRARCHY.txt".

available at:


or via the subversion repository


Posted by Kevin Baas 2010-08-20

TODO: (top) assign dimensions to graph nodes

the basic algorithm to be implemented is as follows:

add int variable to nodes: sources_missing_dimensions.
add int variable to networks: inputs_missing_dimensions.

1. for each node, set sources_missing_dimensions to the number of sources that do not yet have complete internal dimensions assigned (i.e. total sources minus those already assigned)
2. for each network, set inputs_missing_dimensions to the number of inputs that do not yet have complete internal dimensions assigned...
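Since the post is truncated, the step after initializing the counters is my guess: a worklist pass in which any node whose counter reaches zero can resolve its own dimensions and then decrement the counters of its sinks. A minimal sketch with hypothetical names (Node, initCounters, propagate — not the project's actual classes):

```java
import java.util.*;

// Illustrative sketch of the counter-based dimension-assignment pass.
// Only step 1 (counter initialization) comes from the post; the
// worklist propagation in propagate() is an assumed follow-up.
public class DimAssign {
    static class Node {
        final String name;
        boolean dimsAssigned;                 // internal dimensions known?
        final List<Node> sources = new ArrayList<>();
        final List<Node> sinks   = new ArrayList<>();
        int sourcesMissingDimensions;         // the counter from the post
        Node(String name, boolean dimsAssigned) {
            this.name = name; this.dimsAssigned = dimsAssigned;
        }
    }

    static void connect(Node from, Node to) {
        from.sinks.add(to);
        to.sources.add(from);
    }

    // step 1: set each node's counter to the number of sources that do
    // not yet have complete internal dimensions assigned
    static void initCounters(List<Node> nodes) {
        for (Node n : nodes) {
            n.sourcesMissingDimensions = 0;
            for (Node s : n.sources)
                if (!s.dimsAssigned) n.sourcesMissingDimensions++;
        }
    }

    // assumed follow-up: once all of a node's sources have dimensions,
    // the node can derive its own and notify its sinks
    static int propagate(List<Node> nodes) {
        Deque<Node> ready = new ArrayDeque<>();
        for (Node n : nodes)
            if (!n.dimsAssigned && n.sourcesMissingDimensions == 0) ready.add(n);
        int resolved = 0;
        while (!ready.isEmpty()) {
            Node n = ready.poll();
            n.dimsAssigned = true;            // derive dims from sources here
            resolved++;
            for (Node sink : n.sinks)
                if (--sink.sourcesMissingDimensions == 0 && !sink.dimsAssigned)
                    ready.add(sink);
        }
        return resolved;
    }

    public static int demo() {
        Node a = new Node("a", true);         // dims known up front
        Node b = new Node("b", false);
        Node c = new Node("c", false);
        connect(a, b);
        connect(b, c);
        List<Node> g = List.of(a, b, c);
        initCounters(g);
        return propagate(g);                  // b, then c, get resolved
    }

    public static void main(String[] args) {
        System.out.println("resolved=" + demo());
    }
}
```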

Posted by Kevin Baas 2010-08-18

it works!

so far. now i need to actually see if the compiled programs do what they're supposed to.

Posted by Kevin Baas 2010-08-12

debugging what i've got so far

and oh, it's a lot.

but the end, a functional interpreter, is only a few steps away.

i'll just need to write the code for connecting input and output file streams (the idea is to use files because in linux they can be character buffers, which is PURRfect), then write an evaluator that just pushes the data through the flow graph (DAG).
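The "push the data through the flow graph" idea can be sketched roughly like this — a push-style evaluator pumping values from an input stream, through a tiny chain of stages, to an output stream. This is an illustration only, not the interpreter's actual classes, and the two stages here are arbitrary stand-ins:

```java
import java.io.*;
import java.util.*;
import java.util.function.IntUnaryOperator;

// Illustrative push-style evaluator over a tiny linear flow graph:
// read a value, push it through each stage, write the result.
public class PushEval {
    // each stage transforms a value and pushes it downstream
    static int evalChain(List<IntUnaryOperator> stages, int value) {
        for (IntUnaryOperator stage : stages) value = stage.applyAsInt(value);
        return value;
    }

    // pump every value from the input stream through the chain to the output
    static void pump(BufferedReader in, PrintWriter out,
                     List<IntUnaryOperator> stages) throws IOException {
        String line;
        while ((line = in.readLine()) != null)
            out.println(evalChain(stages, Integer.parseInt(line.trim())));
    }

    public static String demo() {
        try {
            // two hypothetical stages: double, then add one
            List<IntUnaryOperator> stages = List.of(x -> x * 2, x -> x + 1);
            BufferedReader in = new BufferedReader(new StringReader("1\n2\n3\n"));
            StringWriter sink = new StringWriter();
            try (PrintWriter out = new PrintWriter(sink)) {
                pump(in, out, stages);
            }
            return sink.toString().trim();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

In the real interpreter the streams would be the files mentioned above rather than in-memory strings, but the pumping loop is the same shape.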

then it's back to testing and debugging.

Posted by Kevin Baas 2010-08-11

working with dimensions

Every variable in Tsunami is a multi-dimensional array. The dimensions are specified by a "dimension list", which looks like this: <a, b, c>. Thus, for instance, a matrix multiply is written x<a,c> = y<a,b> * z<b,c>.
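A hand-worked reading of that example (not project code): the dimension name shared by both operands, b, is contracted (summed over), and the result carries the remaining names a and c. Whether Tsunami contracts every shared dimension or uses some other rule is inferred from this single example, since the rule list below is truncated:

```java
import java.util.*;

// Worked illustration of x<a,c> = y<a,b> * z<b,c>: sum over the
// shared dimension b, keep the unshared dimensions a and c.
public class DimListDemo {
    // y is a-by-b, z is b-by-c; returns the a-by-c result
    static int[][] multiply(int[][] y, int[][] z) {
        int a = y.length, b = z.length, c = z[0].length;
        int[][] x = new int[a][c];
        for (int i = 0; i < a; i++)
            for (int k = 0; k < c; k++)
                for (int j = 0; j < b; j++)
                    x[i][k] += y[i][j] * z[j][k];   // sum over shared dim b
        return x;
    }

    // result's dimension list: names appearing in only one operand's list
    static List<String> resultDims(List<String> yDims, List<String> zDims) {
        List<String> out = new ArrayList<>();
        for (String d : yDims) if (!zDims.contains(d)) out.add(d);
        for (String d : zDims) if (!yDims.contains(d)) out.add(d);
        return out;
    }

    public static void main(String[] args) {
        int[][] x = multiply(new int[][]{{1, 2}}, new int[][]{{3}, {4}});
        System.out.println(x[0][0]);   // 1*3 + 2*4 = 11
        System.out.println(resultDims(List.of("a", "b"), List.of("b", "c")));
    }
}
```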

i've gotten to the point now where i really have to concern myself with the logic of how these dimensions are handled. so after some consideration, i've broken it down to a small set of concrete rules. i added them to the readme file, and i'm posting them here as well:...

Posted by Kevin Baas 2010-08-09

v0.119 - added DAG connection logic.

much closer to the first alpha release of the interpreter now. need to connect up the dimension hash tables (and check for mismatches) and add command line options, so that it can connect to file streams.

The connection functions are in the new class "Network": connectPostfix and connectStreamfix.

Posted by Kevin Baas 2010-08-08

added stack-based syntax checker and anonymous channels

See the SVN (subversion) repository for the latest revision (under the "Develop" menu)

syntax checker:

public void checkSyntax(Token token, Stack<Integer> state_stack);

adds CompileErrors to CompileError's errors set if it encounters any misplaced tokens.
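The stack-based idea behind checkSyntax can be sketched like so. The real method's Token type, state codes, and error reporting aren't shown in the post, so everything below (char tokens, the two bracket kinds, the error strings) is made up for illustration; only the push-on-open / pop-and-match-on-close structure is the point:

```java
import java.util.*;

// Sketch of a stack-based syntax check: push a state code for each
// opening token, pop and compare for each closing token, and record
// an error on any mismatch or leftover.
public class SyntaxCheck {
    static final List<String> errors = new ArrayList<>();

    // hypothetical state codes for each bracket kind
    static final Map<Character, Integer> OPEN  = Map.of('(', 1, '<', 2);
    static final Map<Character, Integer> CLOSE = Map.of(')', 1, '>', 2);

    static void checkSyntax(char token, Deque<Integer> stateStack) {
        if (OPEN.containsKey(token)) {
            stateStack.push(OPEN.get(token));
        } else if (CLOSE.containsKey(token)) {
            // a closer must match the state on top of the stack
            if (stateStack.isEmpty() || !stateStack.pop().equals(CLOSE.get(token)))
                errors.add("misplaced token: " + token);
        }
    }

    public static int check(String src) {
        errors.clear();
        Deque<Integer> stack = new ArrayDeque<>();
        for (char c : src.toCharArray()) checkSyntax(c, stack);
        while (!stack.isEmpty()) { stack.pop(); errors.add("unclosed bracket"); }
        return errors.size();
    }

    public static void main(String[] args) {
        System.out.println(check("x<a,b> * (y<b,c>)"));  // well-formed: 0
        System.out.println(check("x<a,b) * y"));         // mismatched pair: 1
    }
}
```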

anonymous channels are dimension hashes without identifiers:

source<b,c> | <b,c> | <c>

the parser will simply add an arbitrary unique identifier to them:...
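A sketch of that renaming step. The "___n" naming scheme is borrowed from the project's own parser notes elsewhere in this feed; the string-munging below is purely illustrative, not the parser's real token handling:

```java
// Sketch: give each anonymous channel (a bare dim list like <b,c>)
// an arbitrary unique identifier of the form ___1, ___2, ...
public class AnonChannels {
    static String rename(String[] pipeline) {
        int anon = 0;
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < pipeline.length; i++) {
            String stage = pipeline[i].trim();
            if (stage.startsWith("<"))        // bare dim list -> anonymous
                stage = "___" + (++anon) + stage;
            if (i > 0) out.append(" | ");
            out.append(stage);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(rename("source<b,c> | <b,c> | <c>".split("\\|")));
    }
}
```

For the example above this yields source<b,c> | ___1<b,c> | ___2<c>.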

Posted by Kevin Baas 2010-08-05

third stage of parser complete

See the SVN (subversion) repository for the latest revision (under the "Develop" menu)

now the next stage is to convert expressions to streams.

after that step, everything will be a stream, and it will be a straightforward matter to connect the streams together, thus creating the final data flow graphs for the middle end.

Posted by Kevin Baas 2010-08-04

second stage of the parser complete

See the SVN (subversion) repository for the latest revision (under the "Code" menu)

Onto the third stage:

* public void collectImplicitDeclarations();
* 3. for each remaining identifier token, check if it's in the member_types list; if not:
* 3.1 look for a prototype that matches
* 3.1.a if one is found, it is an anonymous net: give it an anon name ("___1", "___2", etc.), replace the token with that name, and add it to the member_types list
* 3.1.b if not, it is a channel: add it to member_types accordingly (as a named channel)
* 3.1.b.catch if it has a blank name (just a dim map), give it an anon name and replace the token, then add it to member_types as such
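The branching in step 3 above can be sketched as follows. The member_types map, the prototype set, and the token shapes here are stand-ins (the real collectImplicitDeclarations presumably works on Token objects); only the decision structure follows the notes:

```java
import java.util.*;

// Hypothetical sketch of step 3: classify each remaining identifier
// token as an anonymous net, a named channel, or an anonymous channel.
public class ImplicitDecls {
    static int anonCounter = 0;

    // returns the (possibly replaced) token name after classification
    static String classify(String token,
                           Set<String> prototypes,
                           Map<String, String> memberTypes) {
        if (memberTypes.containsKey(token)) return token;  // already declared
        if (prototypes.contains(token)) {                  // 3.1.a: anonymous net
            String anon = "___" + (++anonCounter);
            memberTypes.put(anon, "net:" + token);
            return anon;
        }
        if (token.startsWith("<")) {                       // 3.1.b.catch: blank name
            String anon = "___" + (++anonCounter);
            memberTypes.put(anon, "channel" + token);
            return anon;
        }
        memberTypes.put(token, "channel");                 // 3.1.b: named channel
        return token;
    }

    public static void main(String[] args) {
        anonCounter = 0;
        Map<String, String> members = new HashMap<>();
        Set<String> protos = Set.of("adder");
        System.out.println(classify("adder", protos, members)); // anon net
        System.out.println(classify("<b,c>", protos, members)); // anon channel
        System.out.println(classify("temp", protos, members));  // named channel
    }
}
```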

Posted by Kevin Baas 2010-08-03

first stage of the parser is complete.

Posted by Kevin Baas 2010-08-01

they say a picture is worth a thousand words

"...but when i look at this, only one word comes to mind..."

in any case, yeah, i just posted a witty little pic in the screenshots section to sum up the main idea behind tsunami rather concisely.

Posted by Kevin Baas 2010-07-31

first attempt at using subversion

i'm going to try setting up subversion for this project. never done it before, know little about it. wish me luck.

update: subversion repository up and running.

Posted by Kevin Baas 2010-07-31

a sort of map of the code.


Dim {
    int cardinality;
}

DAGNode {
    Operator op;
    orderedSet<Dim> result_dims;
    orderedSet<int> result_dim_strides;

    DAGNode node;
    UnorderedSet<DAG> inputs;
    UnorderedSet<DAG> outputs;

    String code;
}

| parser

ProtoNet {
    OrderedSet<DAG> inputs;
    OrderedSet<DAG> outputs;
    Hashtable<String, DAG> named_channels;
    Hashtable<String, Dim> dim_hash;...

Posted by Kevin Baas 2010-07-30


Major releases:
*v0.0 - nothing
*v0.1 - ''current''
*v0.2 - complete interpreter (first alpha release)
*v1.0 - compiler to OpenCL

Minor releases (subject to change):
*v0.1 - ''current''
**make everything use "operators" instead of "stack operators"

**finish parser
**test interpreter on 0-dimensional streams

**create instantiator (multi-dim streams)
**test interpretation
**create source file reader
**create command line interpreter

Posted by Kevin Baas 2010-07-22

New website up!

We've just created our new website. It's a wiki that gives broader detail about what Tsunami is about and how it works. The URL for the new website is http://sourceforge.net/apps/mediawiki/tsunamiprogramm/index.php?title=Main_Page

On it, you will find:
*A simple tutorial of the Tsunami language, with code examples and comparisons
*A brief explanation of what Tsunami is used for
*A comparison of Tsunami to other solutions
*A brief explanation of how it works

Posted by Kevin Baas 2010-07-16

Tsunami goes open source!

I just started open sourcing the Tsunami Project.

---What it is---

Tsunami is a cross-compiler. It compiles code written in Tsunami into OpenCL.


The compilation can be broken up into three stages:

*front end - parse into abstract data structures
*middle end - optimize at the data structure level (e.g. common sub-expression elimination)
*back end - format into OpenCL

Currently, it's just going to be an interpreter, leaving the third stage for last....
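The three-stage shape described above can be sketched at the interface level. This is not the project's actual code: the "parser" here just tokenizes, the "optimizer" drops adjacent duplicate tokens as a toy stand-in for common sub-expression elimination, and the "back end" joins text rather than emitting OpenCL — only the front → middle → back wiring is the point:

```java
import java.util.*;

// Interface-level sketch of the three compilation stages,
// wired front end -> middle end -> back end.
public class Pipeline {
    // front end: parse source into an abstract structure (here, just tokens)
    static String[] frontEnd(String source) {
        return source.trim().split("\\s+");
    }

    // middle end: optimize at the data-structure level; dropping adjacent
    // duplicates is a toy stand-in for common sub-expression elimination
    static String[] middleEnd(String[] tokens) {
        List<String> out = new ArrayList<>();
        for (String t : tokens)
            if (out.isEmpty() || !out.get(out.size() - 1).equals(t)) out.add(t);
        return out.toArray(new String[0]);
    }

    // back end: format the structure as output text (OpenCL in the real plan)
    static String backEnd(String[] tokens) {
        return String.join(" ", tokens);
    }

    public static String compile(String source) {
        return backEnd(middleEnd(frontEnd(source)));
    }

    public static void main(String[] args) {
        System.out.println(compile("a b b c"));
    }
}
```

An interpreter, as planned for the alpha, would simply replace backEnd with an evaluator over the same middle-end structures.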

Posted by Kevin Baas 2010-07-12