Web Based nORMa

  • Jason Barnes

    Noah and I have discussed this idea a bit. With the release of SilverLight, why not make a very basic version of NORMA that would work on the web? There have been so many times when I wanted to put a quick ORM diagram of an idea together while at a computer that didn't have Visual Studio. All it would need is a fact editor box that would let me create an ORM diagram and then show me a relational view of it.

    I can imagine ORM getting a lot more exposure from an online tool that wouldn't require VS and a plug-in.

    Just some wishful thinking :)

    • Hi,

      I think a lot of what would make SilverLight a suitable platform for a web-based NORMA tool will only be realized with SL version 1.1; version 1.0 (just released) is pretty much a way to get into SL coding.  Version 1.1 requires .Net 3.5, and that's not even slated for the 2008 version of VS, due next Feb.  Still, with NORMA being in CTP, that shouldn't hamper efforts to start laying out some prototypes - by those with the time and skills.

      We're lucky here in New England to have live developer support in the form of Microsoft .Net Roadshow team events, on top of the regular MSDN and TechNet tours.  We just had one last week, and got some more insights into SilverLight status and expectations.  There's also a Remix in Boston early next month.  I hope to get to at least one of the two day-long sessions.  The next visit from the .Net Roadshow here will be in December, and I'll try to keep a web-based form of NORMA in mind during the presentations.

      BTW, I've just started looking at LINQ.  It could be that this omni-data query initiative will complement the cross browser/platform characteristics of SilverLight; and together provide a new way of exposing NORMA.  The LINQ book I picked up is not in a style that clicks easily for me, but I'm getting the gist.  There should also be extensive LINQ material at the Remix event - and I hope to get a better handle on the way LINQ may fit in with the NORMA approach.

      The umbilical tie of the NORMA tool to VS, while frustrating for some, has the benefit of keeping the NORMA team close to MS developments.  I don't expect there's anything I'm learning about MS initiatives that they haven't already been exposed to and aren't already considering.   BRN..

    • Clifford Heath

      I'm working on a text-only ORM tool as a single-page web app,
      using Ajax in Rails. I have a full back-end database design to
      hold a complete ORM Model (including some extensions I think
      are important, like aspects, multiple populations, etc), and
      I'm just now fleshing out all the display, actions, and pane
      refreshing. I'll be making use of drag and drop, so it should
      wind up quite usable. The application is currently called "APRIMO".

      However, it won't have NORMA as a back-end. I plan to write my
      own absorption to produce relational models. I have opinions of
      my own about some of the things that the relational mapper should
      do, that aren't necessarily shared by the NORMA team :-). My
      back-end database can represent a relational model already, but
      I haven't worked out how I want to store the mapping between the
      conceptual and relational models (i.e. how tightly to couple them).
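      To make the absorption idea above concrete, here is a minimal sketch of one common mapping rule: a functional binary fact type (one role carries a simple uniqueness constraint) is absorbed as a column into the table for the entity playing that role, while a many-to-many fact type gets its own table. The data shapes and names here are purely illustrative assumptions, not NORMA's or Clifford's actual design.

```python
# Hypothetical sketch of relational absorption for binary fact types.
# A fact type is (subject, predicate, object_type, functional).
# Functional facts become columns of the subject's table;
# non-functional ones become separate many-to-many tables.
from collections import defaultdict

def absorb(fact_types):
    """Return {table_name: [column, ...]} from a list of fact types."""
    tables = defaultdict(list)
    for subject, predicate, obj, functional in fact_types:
        if functional:
            # absorbed into the subject entity's table as a column
            tables[subject].append(f"{predicate}_{obj}".lower())
        else:
            # many-to-many: its own table keyed by both participants
            tables[f"{subject}_{predicate}_{obj}".lower()] = [
                subject.lower() + "_id", obj.lower() + "_id"]
    return dict(tables)

facts = [
    ("Person", "has", "Name", True),
    ("Person", "has", "Birthdate", True),
    ("Person", "speaks", "Language", False),
]
print(absorb(facts))
```

      A real mapper would also handle nested and n-ary fact types, subtyping, and independent object types; this only shows the basic functional/non-functional split.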

      I'll probably build Java and/or C# reflection tools that you
      can download to extract the design of an existing database out
      into a file you can upload to the website to work with. I already
      have a version of this in Ruby that works with MS SQL, Oracle,
      and DB2. At the moment, after extracting the schema, it only
      reverses it part of the way back to a conceptual model. In any
      case you'll always need to add additional conceptualization for
      where the relational schema couldn't represent the model properly.
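      The schema-extraction step these reflection tools perform can be sketched in a few lines. This example uses Python's stdlib sqlite3 so it is self-contained; for the MS SQL, Oracle, and DB2 targets mentioned above, the catalog queries would differ (e.g. INFORMATION_SCHEMA.COLUMNS), and the output format here is an invented placeholder for whatever file the website would accept.

```python
# Minimal sketch: pull table and column metadata out of a live
# database into a plain data structure that could be serialized
# and uploaded. sqlite3 stands in for a production RDBMS.
import sqlite3

def extract_schema(conn):
    """Return {table_name: [(column, type, not_null), ...]}."""
    schema = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")
    for (table,) in tables.fetchall():
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        # table_info rows: (cid, name, type, notnull, default, pk)
        schema[table] = [(c[1], c[2], bool(c[3])) for c in cols]
    return schema

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
print(extract_schema(conn))
```

      As the post notes, this recovers only the relational surface; keys, datatypes, and NOT NULL flags survive, but the conceptual intent behind them has to be re-added by the modeler.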

      Eventually, I want to introduce a conceptual query builder and
      a process modeler to the same tool, so that each process step
      can specify its input parameters, its data requirements, and
      its output values all as conceptual queries.

      It's possible to consider adding a web-based graphical editor
      to the same back-end, written using Flash, SVG, Silverlight or
      Apollo, but I have no current plan to do that. Others have
      expressed interest in helping when I get the project to a
      workable stage.

      How useful would a text-only modeling tool be to you?

      Clifford Heath.

    • Hi Clifford,

      Sounds like you've been busy!  I envy your activity, skills, and the time you have been able to put toward this topic.

      As for a text-only modeling tool, it will be interesting to see how completely you can model various types of domains (and to what modeling objectives) using only text-based methods.  My guess is that you may be able to cover all the ground serviced by a text-and-graphics-based tool, but there will be instances where a graphical representation would be much more efficient and intuitive (just as there are times where text is better).  I think I mentioned before that a system that would let the modeler glide between text and graphic representations, on the fly, would be optimal.  In order to get there, a complete representation in each form would be required.  Your effort should go a long way toward showing how feasible that objective really is.

      It's likely going to come down to personal preference, but I, for one, would like to have fuller text-based support for fact entry.  Good luck with your efforts.  I hope we'll have a chance to see for ourselves how usable a text-only fact-based modeling tool can be.

      My opinion on reverse engineering relational models to conceptual models is that it's not feasible for anything but the simplest and most obvious schemas.  As I think you pointed out, something is always lost in the process.  A conceptual model is formed by intention.  A data structure is only the result of some intention (deliberate, well thought out, or otherwise).  Starting with the result, reverse engineering can only propose one set of intentions that would (at best) have that data structure as its logical conclusion - how can the process determine whether that set is the correct one?  The marketing incentive is obvious: leverage the data structures you have by putting them on a rational and verifiable basis - retroactively!  I just don't think the information flow map can be made to go in that direction.

      When the Russians reverse engineered interned U.S. B-29 bombers during WWII, they knew the thing was intended to fly, carry bombs, and drop them on targets.  Only a full understanding of the machine's design intentions made engineering working copies feasible.  BTW, I'm guessing you've tried to reverse engineer ORM diagrams that you modeled and later generated using various ORM tools.  Ever seen the R.E. result look anything like the conceptual model you started with?  Also, you can only know whether the R.E. model is right if you fully understand the U. of D. - and if you do understand it, you can surely forward engineer (model) it anyway.  Well, that's my take (rant) on the value of reverse engineering to conceptual models.


    • JO3Y

      Everyone has made some very good points here. I'll add my two cents from a new initiate's perspective:

      The graphical diagram is most suitable for getting the big picture, conceptually, but subtle little things like constraints are easy to misunderstand. That's where the readings become so valuable to me.

      I prefer using the textual fact editor to create new object types and roles - especially a series of related ones, although something like Intellisense (TM) would be even nicer. But afterward, I spend way too much time cleaning up the semi-random places and alignments they take on the diagram. This could be alleviated by allowing a ctrl-click on the screen where you want the new fact type to appear (instead of using ctrl-enter); subsequent objects and/or roles could then be aligned horizontally, neatly under the previous one. I'm sure there are other styles people might prefer (vertical/horizontal, stair-step, or whatever), and these could be set as defaults or easily toggled.

      As for R.E.'ing a database, of course it does not create a complete conceptual design, especially without interpreting it in the eye of the UoD expert. But it would be very useful when you need to enter representations of externally referenced systems that will not be changed, as is the case in the project I am currently working on. It would save a lot of time setting up names and datatypes. It is still necessary to fit the external schema into the conceptual model, but it would be nice not to have to recreate or fix every detail of it at the logical-physical level.

      One last thought: what are the prospects for getting one of these tools to work in, say, Eclipse, or in SharpDevelop for the Mono project? My understanding is that, at least in universities, Eclipse is much more widely used than Visual Studio in Europe. Entire governments there are turning away from Microsoft and mandating the use of Linux and OpenOffice, for example. Exposing ORM to those developers would be good for ORM, I think.


    • Hi Joe,

      Your point about using R.E. to include external (fixed-structure) elements in a model is well stated, and I can see where you would find it valuable.  However, I think it's a matter of putting the process in the wrong place.  What I think would be of better use is an improved process for UofD discovery and description of terms and rules (the part of the ORM methodology that T.H. talked about as "familiarizing and verbalizing").  I don't know that anyone has done much toward supporting this part of the process programmatically.  Fixed-structure elements would then be just more elements in the UofD - terms and relations (rules) to be modeled.  This could be used to incorporate elements from an existing data structure in a way that should work as a natural part of the process of creating a conceptual model.  Rather than shove the terms, and a guess at the rules that relate them, back up to the conceptual ORM model (where they might collide with proper UofD elements), bring them out and around to the proper start of the modeling process - where the modeler can oversee their incorporation in a valid form that doesn't conflict with the other terms and rules of the rest of the UofD.

      I really like the (start here -->) kind of layout you suggest for adding rules to the graphical representation.  That way, the modeler can pick a spot on the diagram where he/she believes the fact types will fit neatly.  Having a block of related FTs dropped in as a list may be harder (if they share object types).  I wonder if some type of new page creation (with external OT designations) could be used for block additions of related FTs?  I opt to use pages to cluster FTs related to some central OT anyway; having a way to create them in text, and drop them in an orderly manner on a page (as a sub-model), would be handy.  These are the types of things that would add value to a tool, but they come down to convenience, and there are alternatives (even if more taxing), so I guess they're really wish-list items rather than must-haves.  I have no gripes with M.C. and the others for focusing on better DDL generation at this point.

      I also apologize for drifting from the Web based nORMa theme of your thread.  A separate thread on R.E., and one on graphic element placement, are warranted.


    • JO3Y


      I totally agree that, ideally, it would be better to focus on a good conceptual model for fixed-structure elements. As a matter of fact, if you've read my posts on modeling a bad database design, you know how I lamented that making a good conceptual model does not necessarily produce an accurate representation of the physical structure of that bad design. What's important is that the overall conceptual model IS correct, and that there are ways to correctly interface the fixed structure with it. After all, we do not need to re-create the fixed-structure parts because, presumably, if we have to work with them, it's because at least at the application level they're working as advertised and can be considered conceptually correct for their part.

      As for the rest of the methodology you mentioned, you're right. I wouldn't know where to begin. The problem kind of reminds me of certain expert systems I have seen - I mean the part that is designed to pick the experts' brains. I have to admit, I cut corners here more than just about anywhere else, mainly because I don't have time-saving tools for it and can't even imagine what a helpful tool for this would be like. Although collaborative systems come to mind.

      Regarding block fact type additions, when there is a shared object, it seems to me that the sharing would actually make automation easier, because the first fact type serves as an anchor for subsequent fact types:

      A has X
      A has Y
      A is  Z

      The predicates and objects line up naturally in text because they are all anchored to A. Invariably, when I create such a block graphically, I end up lining them up just the same way (or reflected to the left side of A).
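      The stacking rule described here is simple enough to automate. The sketch below is illustrative only: it places each fact type in a shared-anchor block a fixed step below the previous one, starting from the anchor's position. The coordinates and spacing are made-up values, not anything from NORMA's layout engine.

```python
# Illustrative layout rule: stack a block of fact types that share
# one anchor object vertically beneath the anchor's position.
ROW_HEIGHT = 40  # assumed vertical spacing between fact types

def stack_fact_types(anchor_pos, fact_types):
    """Return {fact_text: (x, y)}, one row per fact type."""
    x, y = anchor_pos
    return {ft: (x, y + (i + 1) * ROW_HEIGHT)
            for i, ft in enumerate(fact_types)}

positions = stack_fact_types((100, 100), ["A has X", "A has Y", "A is Z"])
print(positions)
```

      A left-reflected variant would just negate the x offset of the predicate shapes; either way the modeler's manual alignment step disappears.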

      That's not to imply that I don't appreciate all the hard work that's gone into the latest release. There's a lot to do and only so many people to do it and only so much time in which to do it.

      I've known for a long time that my greatest talent is knowing a good thing when I see it, and ORM is great, and NORMA is going to make it greater, as will projects like Clifford's (shouldn't you guys share code??) - keep up the great work!