In trying to better understand how the NORMA tool works, I've started to look at the supporting files generated by the custom tool. Are both OIL and OIAL variations of the Ontology language? Are they seen as core components of the NORMA framework, or are they temporary expedients that will be replaced as development continues? Thanks BRN..
OIAL stands for 'ORM Intermediate Abstraction Language', sometimes called the 'ORM Intermediate Absorption Language', and represents the absorbed form of the ORM model. You can see this name in the Extension Manager dialog. I believe oil is used only as a prefix in the XML, and as such is arbitrarily chosen because it is short and easy to type (the only significant XML prefixes are xml and xmlns; the only thing that matters on all other prefixes is the namespace they are bound to in the local XML file). The O does not stand for Ontology.
OIAL is present because direct transformation from ORM to any attribute-based form (relational, entity, OO, etc.) requires the ORM model to be absorbed into major object types. While ORM models are very stable, incremental changes in the ORM model can produce large changes to the major object types.
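The absorption idea can be pictured with a toy sketch: value types that are functionally attached to an entity type collapse into attributes of that major object type. All names below are illustrative stand-ins, not NORMA's actual model or API:

```python
# Toy sketch of ORM "absorption": value types functionally attached to an
# entity type become attributes of that major object type; other entity
# types remain major types of their own. Illustrative names only.

fact_types = [
    ("Person", "has", "PersonName"),        # functional: absorbed
    ("Person", "was_born_on", "BirthDate"), # functional: absorbed
    ("Person", "drives", "Car"),            # Car stays a major type
]
value_types = {"PersonName", "BirthDate"}

def absorb(fact_types, value_types):
    """Group functional value roles under their entity type."""
    major = {}
    for subject, verb, obj in fact_types:
        major.setdefault(subject, [])
        if obj in value_types:
            major[subject].append(f"{verb}:{obj}")  # absorbed attribute
        else:
            major.setdefault(obj, [])               # obj stays major
    return major

print(absorb(fact_types, value_types))
# → {'Person': ['has:PersonName', 'was_born_on:BirthDate'], 'Car': []}
```

This also shows why incremental ORM edits can reshape the result: adding a single functional fact type moves a value from one major type to another, changing the absorbed shape even though the ORM change was small.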
We currently have three implementations of ORM->OIAL:
1) The XSL transform ORMtoOIAL.xslt produces the OIAL XML that is used as the basis for our code generation and DDL transformations. Although it has served us well, the downside of this approach is that there is no way to customize any of the generated mappings. For example, one role can map to multiple DB columns. Until we know the columns, we can't individually customize the column names.
2) The currently installed OIAL extension that loads when you add the Relational View extension. This generates an in-memory form of the OIAL model that is regenerated whenever the model changes (hence the speed hit mentioned in another post). This allowed us to show a preliminary relational view, but suffers the same problem as the xsl approach, namely that any customizations on the generated objects are lost when we regenerate.
3) An under-development piece that we call 'Live OIAL' on the team. The goal of this effort is to make incremental changes to the in-memory OIAL in response to changes in the ORM model. This will fix our performance issues and allow any customizations to be persistent. Persistent customizations open the door to things like live ER and UML views on the ORM model, controlled denormalization of the relational model, etc. When this is done, the other two will go. #2 is being kept intact so that we can compare our incremental change state to our state after a full OIAL regeneration.
So, OIAL will be a permanent part of the NORMA product. However, the xsl-based generation you see now will be an historical artifact once the live incremental mapping is completed.
Thanks for the detailed explanation of ORM>OIAL,OIL. In some ways, I guess I was barking up the wrong, yet very similar, tree with the ontology reference - especially as I've seen so many definitions for ontology that I'm sure your OIL would be covered by at least some of them. The insights as to how you use ORM>OIAL are helpful.
There's another question that would probably be better in a separate thread, but should be OK here: How much does the team use ORM and NORMA in the development of NORMA? Do you use VEA (being stable) to model the project? Do you do a conversion between ORM and ORM2 notation? How complete is the conceptual model at this point?
Thanks again. BRN..
At the moment we're not bootstrapping with the tool, but this doesn't mean it won't happen in the future. We've modeled our object model with DSLTools, and are using them for some code spit, and PLiX for other parts. We have done preliminary work generating a .dsl file from an ORM model, but aren't to the point where we will be spitting rules from it.
Also, there are some special considerations in the object model because of its use in a dynamic development environment. For example, EntityType and ValueType don't exist (a ValueType is an ObjectType that participates in the ValueTypeHasDataType relationship). This allows us to easily switch from an EntityType to an ObjectifiedType to a ValueType without blowing away the underlying ObjectType element in the object model. Some of these relationships are not 100% in line with the conceptual equivalents to make them more fluid at design time than the underlying concepts they represent.
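The "no EntityType/ValueType classes" design can be sketched like this: the kind of an object type is derived from which relationships it participates in, so switching kinds never destroys the underlying element. This is a hypothetical Python sketch, not NORMA's DSLTools object model:

```python
# Sketch of kind-by-participation: an ObjectType's kind is derived from
# its relationships, so changing an EntityType into a ValueType mutates
# links rather than replacing the element. Illustrative names only.

class ObjectType:
    def __init__(self, name):
        self.name = name
        self.data_type = None    # stand-in for ValueTypeHasDataType
        self.nested_fact = None  # stand-in for objectification link

    @property
    def kind(self):
        if self.data_type is not None:
            return "ValueType"
        if self.nested_fact is not None:
            return "ObjectifiedType"
        return "EntityType"

person = ObjectType("Person")
assert person.kind == "EntityType"
person.data_type = "VariableLengthText"  # attach the relationship...
assert person.kind == "ValueType"        # ...and the kind changes in place
person.data_type = None
assert person.kind == "EntityType"       # the element itself survived
```

The design-time payoff is exactly what the post describes: a user can flip an object type between kinds without the tool having to delete and recreate the element (and lose everything attached to it).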
So, short answer, is no. We are not using NORMA for bootstrap development at this time. The tool is written in C# on the DSLTools framework with lots of XSL-generated C# code thrown in the mix. The generation layer is predominantly XSL (see HKLM/Software/Neumont/ORM.../Generators to see how we pick up our generators).
Could you please consider the value of building the
schema representation and transformation API as a
stand-alone artifact? I had email with Terry last
year where he indicated that one of the things you
were working towards was to build a proper meta-model.
Such a meta-model can then be used to derive the API.
Some of us (me :-) want to build tools to manipulate
ORM schemas, but the lack of an accepted API and meta-
model is a serious impediment.
The meta-model must include the ability to model the
absorbed (OIAL) form, and preferably to model both the
pure ORM form and an associated transformed version in
the same dataset, as you're doing with Live OIAL.
You probably need to map the metamodel to an API
manually, since the existing mappers expect to create
a relational form and an ORM-ish API for that form.
Such a mapping could incorporate enough support for
custom annotations (through a factory pattern, for
example) that you can attach whatever custom code you
need in NORMA.
Needless to say, the stand-alone API must function
outside the Dev Studio environment.
There are several ways to look at an ORM Meta Model:
1) An .ORM model of an ORM Meta Model
2) An in-memory implementation of the model
3) An XML schema holding a persisted form of the model
2 and 3 are already available. The in-memory model, although still DSLTools based, will load without VS. All we require is a store that implements the IORMToolServices interface, which abstracts out actions such as where to output validation errors. Nothing in the core model (or the extension models) is supposed to care whether it is running in a VS DocData or not. We already do a command-line model load in our testing framework (look for 'LoadFileStream' in ORM2CommandLineTest/ORMDocServices.cs). The way to put in custom code is by generating additional extensions. The code base has an ExtensionSample, and several pieces (LiveOIAL, RelationalView, Custom Properties, etc.) are all done as extension Dlls. The core framework doesn't give the core and shape models any special treatment. Extension models can add properties, validation errors, etc. We have not defined UI tweaks to the core shapes at this point, although theoretically you could add your own shapes to the designer [this might have XSD issues on load].
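The host-abstraction pattern behind IORMToolServices can be sketched in a few lines: the model core talks only to an abstract services interface, so a command-line host can substitute its own error sink for the VS one. Interface and names here are hypothetical stand-ins, not the real NORMA API:

```python
# Hypothetical sketch of the IORMToolServices idea: the model core depends
# on an abstract services interface, so it runs the same under VS or from
# the command line. Names are illustrative, not NORMA's actual interface.

from abc import ABC, abstractmethod

class ToolServices(ABC):
    """Stand-in for IORMToolServices: where do validation errors go, etc."""
    @abstractmethod
    def report_error(self, message): ...

class ConsoleServices(ToolServices):
    """Command-line host: collect errors instead of using a VS task list."""
    def __init__(self):
        self.errors = []
    def report_error(self, message):
        self.errors.append(message)

def validate_model(model, services):
    """Model-core code never knows which host it is running under."""
    if not model.get("object_types"):
        services.report_error("Model has no object types")

services = ConsoleServices()
validate_model({"object_types": []}, services)
print(services.errors)  # → ['Model has no object types']
```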
As for the XML, it is likely to change in the future as it has in the past. We already import 2 old NORMA file formats plus the OrthogonalToolbox XML. This is all done with XSLT and integrates seamlessly with new formats (see the ORMDesignerSettings.xml file in the root install directory and the transforms in the XML\Transforms\Converters directory). We have not plugged the reverse process into the back-end generation (an ORM file is always considered to be the current format), but will probably do so the next time we have a format change. The goal is that any work you do now in the way of generation transforms will still work if we upgrade the file format. Bottom line is, if a full fidelity transform exists from one format to another, I think that the time spent to find the perfect format gets in the way of moving forward in other areas. Yes, it would be nice to have a consistent format across tools, but I don't think that this should block any forward progress on any tool.
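The converter chaining described above is simple to picture: detect the file's format, then apply one upgrade step at a time until the current format is reached. NORMA does each step with an XSLT transform; plain functions stand in for the transforms in this sketch, and all version names are invented:

```python
# Sketch of chaining format converters: each converter upgrades one format
# step, and loading applies them in sequence until the document is current.
# NORMA implements each step as an XSLT transform; functions stand in here.

CURRENT_FORMAT = "v3"

# format -> (next format, transform); names are illustrative
converters = {
    "v1": ("v2", lambda doc: doc + "+v1to2"),
    "v2": ("v3", lambda doc: doc + "+v2to3"),
}

def upgrade(doc, fmt):
    """Apply converters until the document reaches the current format."""
    while fmt != CURRENT_FORMAT:
        fmt, transform = converters[fmt]
        doc = transform(doc)
    return doc

print(upgrade("oldfile", "v1"))  # → "oldfile+v1to2+v2to3"
```

One nice property of the chain is that adding a new format only requires one new converter (current → new), not one per historical format.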
> For example, EntityType and ValueType don't exist (a ValueType
> is an ObjectType that participates in the ValueTypeHasDataType
On a 2nd look, this is just an instance of the subtype mapping
option "absorb subtypes into supertype", one of the three ways
to map subtypes. I wouldn't apologise for doing it - it's a
perfectly valid physical mapping.
Is anyone there? I thought I might get some comment on at least one of my three messages here...?
Things on this forum go in fits and starts. One thing you could try is to repost your questions in a separate thread - they may have gotten buried in the OIL/OIAL thread.
BTW, have you looked through the files in the 'Neumont' directory that is installed with the CTP release in your 'Programs' directory on your system during install? I only noticed the files last week; having been anxious to use the new version, I hadn't looked to check for other support files. Maybe there's something in the XML files there that will give you part of what you're looking for. Good luck. BRN..
Yes, thanks Brian, I've svn'ed the whole source and had a few attempts to crawl through
bits of it. It wasn't clear to me that you got an answer to your question "does attribute
absorption cause loss of information". If you've read Terry's book (the ORM "bible"),
you'll know it describes the Rmap algorithm and you'll understand what data must be lost
in going to an ER schema.
Time flies when you're fighting bugs (both the code and winter varieties). The problem is that good questions and comments cannot be answered quickly. Anyway, thanks for the encouragement on my choice of physical model. Sometimes the implementation models, especially design-time stuff which needs to support partial states, make modelers nervous. Then again, there are lots of things routinely done on the data-modeling side that make a long-time OO guy like myself cringe (a supertype knowing its full set of subtypes and storing data for them just doesn't feel right in abstract OOP, etc.). Anyway, I guess there's a reason that we are all still experiencing friction trying to integrate hierarchical/relational/object/visual representations of the same information. It will continue to be a non-trivial exercise for a long time to come.
We'll likely put out a new drop next week before another external presentation (this time at Microsoft). This will probably be the last CTP before we tear up the file format/data types/ref modes. I'm pretty swamped, so don't expect much from me on the forums for at least a week. -Matt
> Time flies when you're fighting bugs (both the code and winter varieties).
Of course, no problem. It'd be good if we could get a bigger community using NORMA,
as it would get more questions answered sooner with perhaps no net load on the
project members. I think the forum style is possibly the worst available however.
I maintain mailing lists on Yahoo, and they're excellent and simple to run. I
know this has been discussed before, but I really think this forum stands in the
way of wider adoption.
When you say "rip up the file formats" etc, do you have a statement of what the
goals are in making these changes? Just curious, and would like to comment before
you launch into a big chunk of work.
We have a student working on documentation this quarter and should have some help (integrated into VS) by the end of the quarter. There are also some walkthroughs in file downloads that Kevin is updating as I type.
As for format changes, we'll be reworking data types and reference modes. The data types will be definable in the ORM model, including what I'm currently calling a 'datatype facet fact type'. Things like Scale and Length will be refactored as facets on data types defined in the model, and you'll have the ability to define your own custom data types (and eventually map these to custom types at the SQL and code levels). This is not 100% worked through yet, but it will be a big improvement over the current system, which has been a good placeholder but is blocking us from getting the generated DDL to the level we want it.
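Since the facet design was explicitly not 100% worked through at the time, here is only a loose sketch of the general idea: Scale and Length become facet values attached to data types defined in the model, and a custom data type refines a base type's facets. Everything here is an assumption for illustration, not the eventual NORMA design:

```python
# Hedged sketch of "facets on data types defined in the model": Scale and
# Length are facet values on a model-defined DataType, and custom types
# derive from base types by overriding facets. Illustrative only.

class DataType:
    def __init__(self, name, **facets):
        self.name = name
        self.facets = facets  # e.g. Length, Scale

    def derive(self, name, **overrides):
        """A custom data type refines its base type's facets."""
        merged = {**self.facets, **overrides}
        return DataType(name, **merged)

decimal = DataType("Decimal", Length=18, Scale=0)
money = decimal.derive("Money", Scale=2)  # hypothetical custom type
print(money.facets)  # → {'Length': 18, 'Scale': 2}
```

Under such a scheme, mapping 'Money' to a SQL type would read the merged facets rather than hard-wired column properties, which is what would let generated DDL reflect user-defined types.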
Other changes will simply be name changes. We try to only do this when we are also making semantic changes. Introducing a new file format is a bit of a hassle, so we try to do all of the changes at one time. There are several names in the file that vary semantically from what they actually mean in ORM. A lot of these names were pulled from other ORM XML representations (specifically the orthogonal toolbox-generated) that were also a little lax in their naming. A preliminary list of top-level changes:
Fact --> FactType
Object --> ObjectType
FactTypeInstance --> Fact
ObjectTypeInstance --> Object
SubtypeFact --> SubtypeMetaFact
ImpliedFact --> ImplicitFactType
ObjectifiedType --> ObjectificationObjectType
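The rename list above is worth treating as an element-wise mapping rather than a textual search-and-replace, since the names overlap ('Fact' becomes 'FactType' while 'FactTypeInstance' becomes 'Fact'). A small sketch of the lookup (the real upgrade in NORMA would be an XSLT transform matching elements):

```python
# The proposed top-level renames as an element-name mapping. Note that
# naive string substitution would collide (Fact -> FactType vs
# FactTypeInstance -> Fact), so renaming is done per whole element name.

renames = {
    "Fact": "FactType",
    "Object": "ObjectType",
    "FactTypeInstance": "Fact",
    "ObjectTypeInstance": "Object",
    "SubtypeFact": "SubtypeMetaFact",
    "ImpliedFact": "ImplicitFactType",
    "ObjectifiedType": "ObjectificationObjectType",
}

def rename(element):
    """Map an old element name to its new name, if it has one."""
    return renames.get(element, element)

print(rename("ImpliedFact"))       # → "ImplicitFactType"
print(rename("FactTypeInstance"))  # → "Fact"
```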
Ok, thanks for that Matthew. I should say that I am commencing
an ORM project myself using the Ruby language, and one of the
goals is to read files from NORMA - mainly because I have no
desire to build a diagramming tool. I want to build much of the
rest of the stack, including things you haven't done yet. But
suffice it to say that I've read your XSD pretty carefully...
I agree with you changing the datatype facets, I'd already done
I'm still working on the model API, and as part of validating
that, I've also produced an ER meta-model for ORM, in which I've
adopted many of your terms while adapting the structure. The ER
model has many absorptions, such as ValueType, ObjectifiedType
and EntityType are all rolled into one ObjectType; unique and
mandatory constraints are just instances of frequency constraint;
ValueType objects have a unary Role for which a RoleInstance may
be recorded; I've modelled value restrictions as sets of allowed
value ranges; etc... I'd be happy if you wished to review my schema
actually. It's not that I need an ER schema, but it's a way of
validating a related API.
It's this project that makes me wish you had completed meta-models
and a proper language grammar, rather than just XSD and XML. I did
find a mostly-complete ORM metamodel in the source repository. I
think that the Ruby community will really take to ORM once there
are free tools (that don't require VS2005 Pro!).
Your claim that "unique and mandatory constraints are just instances of frequency constraint" is only partly true. You may treat a uniqueness constraint as a frequency constraint of 1, but a mandatory constraint cannot be reduced to a frequency constraint. A frequency constraint of n on a role says that any instance playing that role appears n times in the role's population. A simple mandatory constraint on a single role says that each instance in the population of the object type playing that role must indeed play that role. The two constraints are orthogonal to each other.
For example, consider the fact type "Person drives Car" and suppose that in our business domain each person either drives two cars or doesn't drive a car at all (an unlikely rule I know, but let's suppose it anyway). In that case, the role played by Person in this fact type has a frequency constraint of 2, but is also optional (no mandatory constraint). If we add a mandatory constraint to this role, it means that every person must now drive a car (in fact 2 cars, because of the additional frequency constraint).
ORM is different from UML in separating the two concepts of frequency and mandatory rather than trying to combine them into a single multiplicity constraint. Apart from orthogonality advantages, this enables ORM to correctly deal with mandatory and frequency constraints for all possible cases regardless of arity -- in contrast, UML runs into problems with various cases for n-ary associations.
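The Person-drives-Car example can be checked mechanically, which makes the orthogonality concrete: a frequency constraint restricts how often a role player appears, while a mandatory constraint requires every instance of the object type to play the role at all. A toy sketch of the two checks:

```python
# Checking the "Person drives Car" example: a frequency constraint of 2
# and a mandatory constraint are independent tests over the population.

population = {("Alice", "car1"), ("Alice", "car2")}  # Bob drives nothing
people = {"Alice", "Bob"}

def frequency_ok(pop, n):
    """Everyone who plays the role plays it exactly n times."""
    counts = {}
    for person, _car in pop:
        counts[person] = counts.get(person, 0) + 1
    return all(c == n for c in counts.values())

def mandatory_ok(pop, players):
    """Every instance of the object type must play the role."""
    return players <= {person for person, _car in pop}

print(frequency_ok(population, 2))       # → True: Alice drives exactly 2 cars
print(mandatory_ok(population, people))  # → False: Bob drives no car
```

The population satisfies the frequency constraint while violating the (hypothetical) mandatory constraint, which is exactly the "drives two cars or none" rule from the post: the role is optional, but if played it is played twice.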
Nice to see you here and thanks for the comment on mandatory
constraints being orthogonal. I wasn't sure I hadn't missed
something there, but I do recall this issue (mentioned in the
book) now. I'll add mandatory constraints back in, in the form
of a boolean on a FrequencyConstraint (min and max are both
I'll email you a mapped ER diagram of my meta-schema, if you
or someone can find time to look it over. This schema is for
validation, I won't be implementing it as ER, but something in
between this and the very detailed ORM meta-model in the source
I like the proposed name changes; they add clarity. The last one, ObjectifiedType --> ObjectificationObjectType, though I see the correctness of it, seems too cumbersome. I'm afraid it will be abbreviated in use and documentation, and the effectiveness will be lost. Semantically, Objectification implies object creation, so there's some redundancy there. Doesn't ObjectificationType cover the bases? BRN..
Taking another look at the response to my question on OIL, I take it that the 'absorption' required for attribute-based form entails information loss from the ORM model in the transformation process? The ORM model is then richer than attribute-based forms, and the customization problems are a result of the generalizations needed to get ORM into an attribute-based form. This makes the transformation basically one way (if your correct shoe size is 10 1/2, and you only have the choice of 10 or 11, you'll choose the 11; but knowing you bought a size 11 shoe doesn't reveal your correct shoe size).
Sounds like the Live-OIAL objective is to keep the ORM model in focus as an active or 'live' reference, and the transformations an ongoing process as a result. In the case of a RDBMS, the schema could be fluid (conforming to changes in the ORM model), yet the data already in the DB would not be compromised by the changing schema - as long as the new iterations of the ORM model remained valid in the domain. If I'm even sort of right about this, that would be terrific! I've felt for a while that the sort of 'fire and forget' process of creating a data model, generating a schema, then leaving the DB to fend for itself, lacks the dynamic interaction between model and schema that would make relational databases (and datamodels), much more useful.
Haven't found a definition for 'Major Object Types' you mentioned, but I think I know what you mean. If so, then the problem with conforming to them is the need to select them in an a priori manner - whereas ORM is not concerned with predefining these, as they follow naturally as a result of the ORM methodology.
In any case, interesting stuff. Even if I've missed the mark with my take on it, I'm pretty sure changes you hope to implement are going to have a major impact on the usefulness of the NORMA framework. BRN..
You should read about the relational mapping procedure in Terry's
book "Information Modeling and Relational Databases". It makes a
lot of these things clear.
The relational mapping procedure, in very short form, converts the
entire schema to co-referenced form (objectified relationships,
subtypes, etc, all get mapped down), then it picks which objects
will become tables (the "Major Object Types") and absorbs all the
values and foreign keys into those tables.
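That short description of Rmap can be sketched end to end: major object types become tables, functional value roles become columns, and many-to-many fact types become their own tables carrying foreign keys. This is a drastically compressed, hypothetical sketch, nothing like the full algorithm in the book:

```python
# Very compressed sketch of the Rmap idea described above: major object
# types become tables, functional value roles become their columns, and
# non-functional (m:n) fact types become tables with foreign keys.
# Illustrative names; the real procedure handles far more cases.

def rmap(entities, value_facts, many_to_many_facts):
    tables = {e: {"columns": [], "fks": []} for e in entities}
    for entity, value in value_facts:        # functional roles absorb
        tables[entity]["columns"].append(value)
    for name, a, b in many_to_many_facts:    # m:n facts become tables
        tables[name] = {"columns": [], "fks": [a, b]}
    return tables

schema = rmap(
    entities={"Person", "Car"},
    value_facts=[("Person", "name"), ("Car", "regNr")],
    many_to_many_facts=[("Drives", "Person", "Car")],
)
print(sorted(schema))  # → ['Car', 'Drives', 'Person']
```

Even this toy version shows where the comment-only information goes: nothing in the output records that, say, a column came from an objectified relationship or a subtype, which is the kind of provenance the post notes may survive only as annotations.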
I don't think there is necessarily any information lost, though
some may be maintained only in the form of comments, like which
things were subtypes, etc. The actual transformations in NORMA
do lose some information that isn't used at present - I recall
seeing it as I was reading the XSL's looking for the cause of a
defect I encountered.
Wrt Live OIAL, what the industry really needs is a live ORM database
engine, that hides the relational mapping entirely, and restructures
the hidden relations as you change the ORM schema *and* as it
observes the performance requirements of the actual query load.
This latter optimization would allow the ORM mapping to flip between
possible mappings based on query costs and the cost of the necessary
But Live OIAL is a good first step.
Yes, being able to have the full implementation stack automatic (including DB migrations as the model changes) so that the user and programmer could focus directly on the conceptual model would be a great place to be. Just getting back to the ActiveQuery/CONQUER stage would be a great first step. One step at a time. Incremental RMAP changes against a live model is keeping us pretty busy. We did some work on textual constraints (which require a textual query language, which is the basis for live conceptual query) last quarter and will pick it up by the end of this one (a couple of students are moving the FactEditor to use newer interop libraries, and should have time to start grammar/parsing/editor work on the other language elements by the end of the quarter).
Any thoughts on how you would like a Linq conceptual query to look? (probably on another thread). -Matt
Hope you feel better soon. Do what you need to get back on even keel.
There actually is a good deal for people to experiment with in the current release. The frustration is partly due to the kind of people that you are dealing with - curious, intelligent, imaginative people who can think of lots of possible ways to do stuff with the functional parts you've provided; and partly due to the limited documentation and timely support.
Also, history (that of ORM and ORM tools) has left a substantial gap in the choice of ORM tools. From what I gather, VEA (through VS 2003 EA) is the last fully stable ORM tool (following the same line of notation, etc...) out there for production use; and that is tied to VS 2003 (raising issues). That version was undocumented by MS. From the time VEA was developed, to the present is only a few years; but those are technology years - dog years times seven? Not hard to see why people that embrace the ORM philosophy would be anxious for a fully functional and stable (let alone updated, enhanced, and fully documented) replacement for the older tools.
I'd expect those pressures from the consumer side to continue as long as the development project does. You and the team will deal with that as you see fit. My recommendation would be to pass along more information (in release notes and forum postings), even at the expense of some time taken from writing code - but I don't know the constraints you all are under.
In any case, get well and good luck with the work load. BRN..