i added a text file to the file repository and the svn repository, called "CLASS_HEIRARCHY.txt".
available at:
https://sourceforge.net/projects/tsunamiprogramm/files/
or via the subversion repository
the basic algorithm to be implemented is as follows:
add int variable to nodes: sources_missing_dimensions.
add int variable to networks: inputs_missing_dimensions.
1. for each node, set sources_missing_dimensions to the number of sources that do not yet have complete internal dimensions assigned (i.e. the total number of sources minus the ones already assigned)
2. for each network, set inputs_missing_dimensions to the number of inputs that do not yet have complete internal dimensions assigned....
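Step 1 above can be sketched roughly like this. The Node/Source classes and field names here are illustrative stand-ins, not the project's actual classes; only the counter field sources_missing_dimensions comes from the notes above.

```java
// Hypothetical sketch of step 1: count, per node, how many of its
// sources still lack complete internal dimension assignments.
import java.util.ArrayList;
import java.util.List;

public class DimCount {
    static class Source {
        boolean dimensionsAssigned;
        Source(boolean assigned) { dimensionsAssigned = assigned; }
    }

    static class Node {
        List<Source> sources = new ArrayList<>();
        int sources_missing_dimensions;
    }

    // Step 1: number of sources minus the ones already assigned.
    static int countMissing(Node node) {
        int missing = 0;
        for (Source s : node.sources)
            if (!s.dimensionsAssigned) missing++;
        node.sources_missing_dimensions = missing;
        return missing;
    }

    public static void main(String[] args) {
        Node n = new Node();
        n.sources.add(new Source(true));
        n.sources.add(new Source(false));
        n.sources.add(new Source(false));
        System.out.println(countMissing(n)); // 2
    }
}
```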
so far. now i need to actually see if the compiled programs do what they are supposed to.
and oh, it's a lot.
but the end result is only a few steps away from a functional interpreter.
i'll just need to write the code for connecting input and output file streams (the idea is to use files because in linux they can be character buffers, which is PURRfect), then write an evaluator that just pushes the data through the flow graph (DAG).
then it's back to testing and debugging.
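The "just push the data through the flow graph" idea can be sketched on a tiny DAG. Everything here (Node class, integer values, topological-order input) is an illustrative assumption, not the project's evaluator:

```java
// Sketch of a push-through evaluator over a DAG, assuming the nodes
// arrive in topological order and carry simple binary operators.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BinaryOperator;

public class PushEval {
    static class Node {
        String name;
        List<Node> inputs = new ArrayList<>();
        BinaryOperator<Integer> op;  // null for source nodes
        Node(String name, BinaryOperator<Integer> op) { this.name = name; this.op = op; }
    }

    // Evaluate nodes in topological order, pushing each result forward.
    static Map<String, Integer> evaluate(List<Node> topo, Map<String, Integer> sources) {
        Map<String, Integer> value = new HashMap<>(sources);
        for (Node n : topo) {
            if (n.op == null) continue; // source: value came from an input stream
            int a = value.get(n.inputs.get(0).name);
            int b = value.get(n.inputs.get(1).name);
            value.put(n.name, n.op.apply(a, b));
        }
        return value;
    }

    public static void main(String[] args) {
        Node x = new Node("x", null), y = new Node("y", null);
        Node sum = new Node("sum", Integer::sum);
        sum.inputs.add(x); sum.inputs.add(y);
        Map<String, Integer> out = evaluate(List.of(x, y, sum), Map.of("x", 2, "y", 3));
        System.out.println(out.get("sum")); // 5
    }
}
```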
Every variable in Tsunami is a multi-dimensional array. the dimensions are specified by a "dimension list", which looks like this: < a, b, c>. thus, for instance, a matrix multiply is written x<a,c> = y<a,b> * z<b,c>.
i've gotten to the point now where i really have to concern myself with the logic of how these dimensions are handled. so after some consideration, i've broken it down to a small set of concrete rules. i added them to the readme file, and i'm posting them here as well:...
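One rule that clearly follows from the matrix-multiply example (x<a,c> = y<a,b> * z<b,c>) is that a dimension appearing in both operands is contracted, while the rest pass through to the result. This is a sketch of just that one inference, not the project's full rule set:

```java
// Illustrative sketch: given the dimension lists of the two operands
// of '*', dimensions shared by both are contracted, and the rest
// survive into the result -- consistent with x<a,c> = y<a,b> * z<b,c>.
import java.util.ArrayList;
import java.util.List;

public class DimRules {
    static List<String> resultDims(List<String> left, List<String> right) {
        List<String> result = new ArrayList<>();
        for (String d : left)
            if (!right.contains(d)) result.add(d);   // survives from left
        for (String d : right)
            if (!left.contains(d)) result.add(d);    // survives from right
        return result;
    }

    public static void main(String[] args) {
        // y<a,b> * z<b,c> -> x<a,c>
        System.out.println(resultDims(List.of("a", "b"), List.of("b", "c"))); // [a, c]
    }
}
```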
much closer to the first alpha release of the interpreter now. need to connect up the dimension hash tables (and check for mismatches) and add command line options, so that it can connect to file streams.
The connection functions are in the new class "Network", and are connectPostfix and connectStreamfix.
See the SVN (subversion) repository for the latest revision (under the "Develop" menu)
syntax checker:
public void checkSyntax(Token token, Stack<Integer> state_stack);
adds CompileErrors to CompileError's errors set if it encounters any misplaced tokens.
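As a toy illustration of the same stack-based idea (the real checkSyntax works over Tokens and integer states; this version only balances dimension brackets, and the class and error strings are made up):

```java
// Minimal stack-based syntax check in the spirit of
// checkSyntax(Token, Stack<Integer>): balances '<'/'>' brackets,
// recording an error message for each misplaced token.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SyntaxSketch {
    static final List<String> errors = new ArrayList<>();

    static void checkSyntax(String source) {
        errors.clear();
        Deque<Character> state = new ArrayDeque<>();
        for (char c : source.toCharArray()) {
            if (c == '<') state.push(c);
            else if (c == '>') {
                if (state.isEmpty()) errors.add("misplaced '>'");
                else state.pop();
            }
        }
        if (!state.isEmpty()) errors.add("unclosed '<'");
    }

    public static void main(String[] args) {
        checkSyntax("x<a,c> = y<a,b> * z<b,c>");
        System.out.println(errors);          // []
        checkSyntax("x<a,c> = y<a,b * z<b,c>");
        System.out.println(errors);          // [unclosed '<']
    }
}
```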
anonymous channels are dimension hashes without identifiers:
source<b,c> | <b,c> | <c>
the parser will simply add an arbitrary unique identifier to them:...
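Textually, the rewrite might look something like this. The "___N" naming follows the anon-net convention mentioned elsewhere in these notes; the actual scheme the parser uses may differ, and this string-level version is only for illustration:

```java
// Sketch: give each bare dimension hash in a pipe a generated
// identifier, e.g.  source<b,c> | <b,c> | <c>
//            ->     source<b,c> | ___1<b,c> | ___2<c>
public class AnonChannels {
    static String nameAnonChannels(String pipeline) {
        StringBuilder out = new StringBuilder();
        int counter = 0;
        for (String stage : pipeline.split("\\|")) {
            String s = stage.trim();
            if (s.startsWith("<")) s = "___" + (++counter) + s; // bare dim hash
            if (out.length() > 0) out.append(" | ");
            out.append(s);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(nameAnonChannels("source<b,c> | <b,c> | <c>"));
        // source<b,c> | ___1<b,c> | ___2<c>
    }
}
```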
See the SVN (subversion) repository for the latest revision (under the "Develop" menu)
now the next stage is to convert expressions to streams.
after that step, everything will be a stream, and it will be a straightforward matter to connect the streams together, thus creating the final data flow graphs for the middle end.
See the SVN (subversion) repository for the latest revision (under the "Code" menu)
On to the third stage:
* public void collectImplicitDeclarations();
* 3. for each remaining identifier token, check if it's in member_types list, if not:
* 3.1 look for prototype that matches,
* 3.1.a if found one, it is an anonymous net. give it an anon name ("___1","___2",etc.), replace token w/that name, and add to member_types list
* 3.1.b if not, it is a channel. add to member_types accordingly (as named channel).
* 3.1.b.catch if it has blank name (just a dim map), give it an anon name and replace token, then add to member_types as such.
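The step-3 logic above could look roughly like this. The class, the type strings stored in member_types, and the prototype matching are all simplified placeholders; only the decision structure (prototype match → anon net, otherwise channel, blank name → anon channel) comes from the steps above:

```java
// Rough sketch of step 3 of collectImplicitDeclarations: classify
// each identifier not yet in member_types as an anonymous net,
// a named channel, or an anonymous channel.
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ImplicitDecls {
    final Map<String, String> memberTypes = new HashMap<>();
    final Set<String> prototypes;
    int anonCounter = 0;

    ImplicitDecls(Set<String> prototypes) { this.prototypes = prototypes; }

    // Returns the (possibly replaced) identifier for the token.
    String collect(String identifier) {
        if (memberTypes.containsKey(identifier)) return identifier;
        if (prototypes.contains(identifier)) {
            // 3.1.a: matches a prototype -> anonymous net, anon name.
            String anon = "___" + (++anonCounter);
            memberTypes.put(anon, "net:" + identifier);
            return anon;
        }
        if (identifier.isEmpty()) {
            // 3.1.b.catch: blank name (just a dim map) -> anon channel.
            String anon = "___" + (++anonCounter);
            memberTypes.put(anon, "channel");
            return anon;
        }
        // 3.1.b: no prototype match -> named channel.
        memberTypes.put(identifier, "channel");
        return identifier;
    }

    public static void main(String[] args) {
        ImplicitDecls d = new ImplicitDecls(Set.of("filter"));
        System.out.println(d.collect("filter")); // ___1 (anonymous net)
        System.out.println(d.collect("pipe"));   // pipe  (named channel)
        System.out.println(d.collect(""));       // ___2  (blank -> anon)
    }
}
```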
see
https://sourceforge.net/apps/mediawiki/tsunamiprogramm/index.php?title=Main_Page#The_parser
for a description of the parser stages.
"...but when i look at this, only one word comes to mind..."
in any case, yeah, i just posted a witty little pic in the screenshots section to sum up the main idea behind tsunami rather concisely.
i'm going to try setting up subversion for this project. never done it before, know little about it. wish me luck.
update: subversion repository up and running.
===================
PRIMITIVE DATA STRUCTURES
Dim {
int cardinality;
}
DAGNode {
Operator op;
OrderedSet<Dim> result_dims;
OrderedSet<int> result_dim_strides;
}
DAG {
DAGNode node;
UnorderedSet<DAG> inputs;
UnorderedSet<DAG> outputs;
}
==================
MAIN DATA STRUCTURES
String code;
++++++++++++++
      |
      | parser
     \|/
++++++++++++++
ProtoNet {
OrderedSet<DAG> inputs;
OrderedSet<DAG> outputs;
Hashtable<String, DAG> named_channels;
Hashtable<String, Dim> dim_hash;...
Major releases:
*v0.0 - nothing
*v0.1 - ''current''
*v0.2 - complete interpreter(first alpha release)
*...
*v1.0 - compiler to OpenCL
Minor releases (subject to change):
*v0.1 - ''current''
*v0.11
**make everything use "operators" instead of "stack operators"
*v0.12
**finish parser
**test interpreter on 0-dimensional streams
*v0.13
**create instantiator (multi-dim streams)
**test interpretation
**create source file reader
**create command line interpreter
We've just created our new website. It's a wiki that gives broader detail about what Tsunami is about and how it works. The URL for the new website is http://sourceforge.net/apps/mediawiki/tsunamiprogramm/index.php?title=Main_Page
On it, you will find:
*A simple tutorial of the Tsunami language, with code examples and comparisons
*A brief explanation of what Tsunami is used for
*A comparison of Tsunami to other solutions
*A brief explanation of how it works
I just started open sourcing the Tsunami Project.
--What it is---
Tsunami is a cross-compiler. It compiles code written in Tsunami into OpenCL.
---Design---
The compilation can be broken up into three stages:
*front end - parse into abstract data structures
*middle end - optimize at the data structure level (e.g. common sub-expression elimination)
*back end - format into openCL
Currently, it's just going to be an interpreter, leaving the third stage for last....
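The three-stage breakdown above can be sketched as a minimal pipeline. The interface names and signatures here are purely illustrative, not the project's actual classes:

```java
// Sketch of the front end / middle end / back end pipeline:
// parse -> optimize -> emit OpenCL text.
public class Pipeline {
    interface FrontEnd  { Object parse(String code); }    // -> abstract structures
    interface MiddleEnd { Object optimize(Object ast); }  // e.g. common sub-expr elim
    interface BackEnd   { String emit(Object ast); }      // -> OpenCL source

    static String compile(String code, FrontEnd f, MiddleEnd m, BackEnd b) {
        return b.emit(m.optimize(f.parse(code)));
    }

    public static void main(String[] args) {
        // Trivial stand-in stages, just to show the data flow.
        System.out.println(compile("x<a,c> = y<a,b> * z<b,c>",
                c -> c, a -> a, a -> "/* OpenCL for: */ " + a));
    }
}
```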