[pyswarm-devs] Getting the feet wet with MDA & Co.
From: Anastasios H. <ah...@ha...> - 2007-05-02 14:52:23
Hi,

I have started to think about the re-engineering of pyswarm for milestone "Rocking Herring" (hope you like that name, too). The documentation of 0.7.1 [http://pyswarm.sourceforge.net/doc/0.7.1/discuss.html] already sketched a rough frame of the final release and some major improvements in the area of the SDK and the runtime architecture. Some of the SDK improvements have in fact already been implemented, namely the GNU-style console options and arguments, an easier SDK installation, and (at least for the recent parts of the SDK code) the adoption of the PEP 8-based naming conventions.

As far as I can see, most of the features are not too difficult to realize. A rather simple way would be to stay with the exclusive support of the MagicDraw XMI 2.1 format. After some clean-up work on the SDK's code base (especially separating the layers), we could soon move to the target architecture and runtime platform for which applications are generated. Despite this easier way, I am convinced that there are obviously good reasons to add support for as many UML tools and XMI formats as possible. And this is where things become much more complicated. Probably I have read too much about meta-meta models in recent days, and therefore I started thinking about a generator-generator. Well, we will come back to this topic later.

Before you continue with this post, you may consider reading my new post here: http://pyswarm.blogspot.com/2007/05/feetwetmda.html It contains an outline of the MDA-related standards, so we get a rough impression of the big frame in which we want to paint our picture. I have also added links to specifications of the mentioned standards, and I use some abbreviated terms that are also explained in that post.

pyswarm's Current Design

pyswarm SDK 0.7.1 has no support for storing a PSM. The generator is only able to read the PIM, transform it into a PSM, and transform this PSM directly into an ISM. The PSM is not kept for further use.
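To make the limitation concrete, here is a minimal sketch of that one-shot pipeline. All names (parse_pim, transform, write_ism, generate) are hypothetical stand-ins, not the real SDK API; the point is only that the PSM is a transient value inside generate() and is never serialized or recorded.

```python
# Hypothetical stand-ins for the three layers of the 0.7.1-style pipeline.

def parse_pim(xmi_text):
    """XMI -> PIM: stand-in for the Parser/ParseNode layer."""
    return {"kind": "PIM", "elements": xmi_text.split()}

def transform(pim):
    """PIM -> PSM: stand-in for the Transformer/TransformationNode layer."""
    return {"kind": "PSM", "elements": [e.upper() for e in pim["elements"]]}

def write_ism(psm):
    """PSM -> ISM: stand-in for the output phase (directories, files, modules)."""
    return ["%s.py" % e.lower() for e in psm["elements"]]

def generate(xmi_text):
    pim = parse_pim(xmi_text)
    psm = transform(pim)
    ism = write_ism(psm)
    # psm goes out of scope here: it is neither XMI-serialized nor is a
    # record of the transformations (a TRM) kept -- the gap discussed above.
    return ism

print(generate("order customer"))  # ['order.py', 'customer.py']
```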
The SDK also does not record any of the processed transformations, so there is no data from which to create a TRM. The reason for this: the current design has no clear separation between the different layers, which would make it easier to store the PSM, not to mention XMI-serializing the PSM model.

Here is a description of how generation works now. Three code parts are involved in the generation process (besides the command handler in cmd_pysp_generate.py, which controls the entire process and its phases):

* sdklib/Parser.py and sdklib/ParseNodes/*.py
* sdklib/Transformer.py and sdklib/TransformationNodes/*.py
* sdklib/OutputNodes/*.py

The Parser module controls the parsing process during generation. It begins with the top element, which is the xmi:XMI tag found in the XML file, then the uml:Model tag, and so on all the way down the XML DOM tree. Each XML element found is verified against specific constraints, e.g. special stereotypes, before the responsible implementation decides which ParseNode sub-class the element is treated as. So, after the entire parse process we have a tree of parsed elements which together represent the PIM as it has been parsed from the XML file. Therefore you can consider the ParseNode layer a representation of PIM-specific XMI elements.

The tree resulting from the parse process is then taken by the Transformer module, which controls the transformation process. It again begins with a top element and walks down the entire tree of ParseNode objects. For each ParseNode object found in the tree, constraints are verified and a transformation happens: a new tree is created and filled with TransformationNode elements, while the old tree of ParseNode objects remains unmodified. During the output phase the top element of the TransformationNode tree is retrieved and a method output() is called on it. This method recursively calls the output() methods of the child nodes down the tree.
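The two-tree pattern described above can be sketched in a few lines. This is a simplified illustration with hypothetical classes, not the SDK's actual code: the transformer recursively walks the ParseNode tree and builds a fresh TransformationNode tree, leaving the parsed tree untouched.

```python
# Simplified two-tree transformation: ParseNode tree in, TransformationNode
# tree out; the input tree is never modified.

class ParseNode:
    def __init__(self, tag, children=()):
        self.tag = tag
        self.children = list(children)

class TransformationNode:
    def __init__(self, tag, children=()):
        self.tag = tag
        self.children = list(children)

def transform(parse_node):
    """Verify constraints on a ParseNode, then map it and its subtree."""
    # Constraint checks (stereotypes etc.) would happen here before mapping.
    return TransformationNode(
        parse_node.tag,
        [transform(child) for child in parse_node.children],
    )

pim = ParseNode("Model", [ParseNode("Package", [ParseNode("Class")])])
psm = transform(pim)
print(psm.tag, psm.children[0].tag)  # Model Package
```

Because the PSM is an independent tree rather than mutations of the parsed tree, it could later be XMI-serialized or inspected on its own, which is exactly what a cleaner layer separation would enable.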
The output() method of each TransformationNode (most of them have such a method) is responsible for creating the corresponding ISM element, e.g. directories, SQL files, Python modules, Python __init__ modules, packages, script files, etc. The content of all files is generated from template snippets that are scattered all over the TransformationNode modules and puzzled together, with some basic frame content added by modules from the OutputNode package (depending on the type of file that is to be generated). So maintaining the code templates can become rather difficult, since the hard-coded transformation rules (PSM->ISM) and the code templates are mixed, and many of the code templates are crowded with escape characters that escape other escape characters which themselves need to be generated. In short, this is a big mess. In any case, for milestone "Rocking Herring" I want an improved design with a clean separation of the layers. Fortunately it is obvious how this could be done, and it shouldn't be a challenge to do so.

The Future Use-Cases

The following use-cases do not necessarily represent commands; for example, the generate command would consist of subsequent calls of other commands which represent certain tasks (use-cases), depending on the concrete options provided. Example: read an XMI file with the PIM, transform the PIM to a PSM (with TRM), write the PSM to XMI, and return the TRM as a UML/Python tree. The sequence of the next generate execution could differ from this. In pyswarm SDK 0.6.2 I started to introduce scripts to be used via the command line, which can access specific command handlers. I plan a new Python API for other clients, such as UML tools, which can access the command handlers like the scripts already do.
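One possible shape for that separation, sketched here with the standard library's string.Template (an assumption on my part, not a decision): the templates live in one dedicated place, and the PSM->ISM rules merely fill in parameters. Since string.Template only treats $ specially, the nested-escaping problem largely disappears.

```python
# Sketch: templates collected in one place, transformation rules only
# substitute values -- instead of snippets scattered across modules.
from string import Template

TEMPLATES = {
    "module_header": Template('"""$doc"""\n\n'),
    "class_def": Template("class $name(object):\n    pass\n"),
}

def render_module(doc, class_names):
    """PSM->ISM rule for one Python module: pure template substitution."""
    parts = [TEMPLATES["module_header"].substitute(doc=doc)]
    parts += [TEMPLATES["class_def"].substitute(name=n) for n in class_names]
    return "".join(parts)

print(render_module("Generated by pyswarm.", ["Order", "Customer"]))
```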
* XMI -> PIM
* XMI -> PSM
* XMI -> TRM
* PIM -> XMI
* PIM -> UML/Python tree
* PIM -> PSM (optionally with TRM)
* PSM -> XMI
* PSM -> UML/Python tree
* PSM -> ISM
* UML/Python tree -> PIM
* UML/Python tree -> PSM
* UML/Python tree -> TRM
* TRM -> XMI
* TRM -> UML/Python tree

However, the aforementioned use-cases are suggestions; I don't think this will be the final list. Other use-cases or variations are possible, e.g. applying commands to entire models or to selected model elements, one complete model in one XMI or Python UML tree, or parts of the model in multiple sources, and so on. I have even thought about some kind of simulation engine that would work without a full generation of an application (perhaps with meta-classes), so models could be tested/verified at design time, e.g. in a UML tool that is a client to the new SDK API.

If you have any comment or question, please feel free to post. Meanwhile I will do some studying of the several standards (still challenging to me), and I also want to learn and play with some Python features I have never used. ;)

Best regards,

Anastasios
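To illustrate how the generate example from the use-case discussion could be composed out of such commands, here is a hypothetical sketch of the planned Python API. All function names and data shapes are invented for illustration; the real command handlers may look quite different.

```python
# Hypothetical command handlers; generate is just a composition of them.

def xmi_to_pim(xmi_text):
    """XMI -> PIM."""
    return {"model": xmi_text.strip()}

def pim_to_psm(pim, record_trm=False):
    """PIM -> PSM, optionally recording the transformations as a TRM."""
    psm = {"model": pim["model"], "platform": "python"}
    trm = [("PIM", "PSM", pim["model"])] if record_trm else None
    return psm, trm

def psm_to_xmi(psm):
    """PSM -> XMI, so the PSM is kept for further use."""
    return "<xmi:XMI>%s</xmi:XMI>" % psm["model"]

def cmd_generate(xmi_text):
    """Example sequence: XMI -> PIM -> PSM (with TRM) -> PSM as XMI + TRM."""
    pim = xmi_to_pim(xmi_text)
    psm, trm = pim_to_psm(pim, record_trm=True)
    return psm_to_xmi(psm), trm

psm_xmi, trm = cmd_generate("Shop")
print(psm_xmi)  # <xmi:XMI>Shop</xmi:XMI>
print(trm)      # [('PIM', 'PSM', 'Shop')]
```

A later generate run could chain the same handlers in a different order or skip steps, which is exactly the flexibility the use-case list above is meant to allow.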