
Lost in Mumps

  • Emiliano Heyns

    Emiliano Heyns - 2000-11-26

    I've been reading 'Mumps by Example', and there are still a couple of things that are unclear to me. <disclaimer>These are general Mumps questions, BTW, so feel free to redirect me to a more appropriate place if you know any. I'm asking here since I'm evaluating GT.M for use.</disclaimer>

    There are a couple of features that are crucial to my project that I haven't been able to pinpoint in Mumps:

    - How to get a reference to an exiting entity
    - Discovery/introspections (handed a reference to a random entity, how to find out what data hierarchy it contains)
    - Searching (using indexes for speed) for entities without traversing the entire data hierarchy
    - How to access chunked blobs, or how to find out the chunk size at runtime so I can do it myself.

    Basically, I want to use GT.M as a database, not a persistence engine. Possible?

    • Kate Schell

      Kate Schell - 2000-11-26

      Here's a start on responses to your questions:

      - How to get a reference to an exiting entity
      (Is that "existing entity"? I don't know anything about "exiting entities".)
      This very much depends on your database design.  One common approach is to have one global that holds the data, usually subscripted by an arbitrary numeric ID, and then another global, or a section of the data global, that holds the indices.
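      For example, a rough sketch of that layout; the global names ^PERSON and ^NAMEIDX and the fields are invented for illustration, and locking is left out:
      SET id=$GET(^PERSON("counter"))+1,^PERSON("counter")=id ; hand out the next numeric ID
      SET ^PERSON(id,"name")="Schell"                         ; data global, keyed by ID
      SET ^PERSON(id,"city")="Malvern"
      SET ^NAMEIDX("Schell",id)=""                            ; index global: name, then ID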

      - Discovery/introspections (handed a reference to a random entity, how to find out what data hierarchy it contains)
      Check out the $QUERY function.  This gives you information on the global tree structure.  If you're looking for information on what data exists at each node, you need a data dictionary.
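      For example, this sketch walks every node of a (hypothetical) global ^PERSON and writes each full reference and its value:
      SET ref="^PERSON"
      F  SET ref=$QUERY(@ref) QUIT:ref=""  WRITE !,ref,"=",@ref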
      - Searching (using indexes for speed) for entities without traversing the entire data hierarchy
      Check out the $ORDER function.  Say I wanted to find all people with the name "Schell" in a global with structure:
        ^NAME(name,ID_number)
      I would use the following code:
      SET name="Schelk"
      F  SET name=$O(^NAME(name)) QUIT:name=""!(name]"Schell")  WRITE !,name
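      To pull out the individual ID numbers for an exact name, you can $ORDER over the second subscript instead (again just a sketch against the ^NAME layout above):
      SET id=""
      F  SET id=$O(^NAME("Schell",id)) QUIT:id=""  WRITE !,"Schell",?20,id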
      - How to access chunked blobs, or how to find out the chunk size at runtime so I can do it myself.
      I'm going to have to leave this one to someone else; I've never used "chunked blobs".

      • K.S. Bhaskar

        K.S. Bhaskar - 2000-11-27

        > - How to access chunked blobs, or how to find out the chunk size at runtime
        > so I can do it myself.
        >
        > Basically, I want to use GT.M as a database, not a persistence engine. Possible?

        Ah, MUMPS!  The pleasure (and pain) of too many choices...  A hacker's
        dream and a purist's nightmare (since to a purist, every problem must
        have one, and exactly one correct solution, just waiting to be
        discovered).

        If you have a BLOB that you want to conceptually store at
        ^myblobs(<subscripts>), but can't because it is too large for the block
        size that you have chosen for the database file, you could store the
        BLOBs in chunks as ^myblobs(<subscripts>,1), ^myblobs(<subscripts>,2),
        ^myblobs(<subscripts>,3), etc.  Small BLOBs could either go in as
        ^myblobs(<subscripts>) or as ^myblobs(<subscripts>,1).  You could use
        the $DATA function to determine how a particular BLOB is stored, or you
        could put this information into ^myblobs(<subscripts>,0).  You can use
        $LENGTH to check the size of each chunk.  With an OO encapsulation, you
        could compute the length the first time it is requested, and store it
        in something like ^myblobs(<subscripts>,-1).
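
        For instance, here is a rough sketch of storing and reassembling a
        BLOB along those lines (the chunk size is arbitrary, blob is assumed
        to hold the data, and subs stands in for whatever <subscripts> you
        use):
        ; store: split the string in blob into fixed-size pieces
        SET size=400,n=0
        F i=1:size:$LENGTH(blob) SET n=n+1,^myblobs(subs,n)=$EXTRACT(blob,i,i+size-1)
        SET ^myblobs(subs,0)=n  ; remember how many chunks were written
        ; fetch: glue the chunks back together
        SET out=""
        F i=1:1:$GET(^myblobs(subs,0)) SET out=out_^myblobs(subs,i)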

        M gives you a datastore, and it is entirely up to you how you use
        it.  What M provides is a lightweight associative memory, with
        persistence and sharing, but none of the infrastructure associated
        with managing schemas, searching, etc.  This is what makes it so
        attractive for high-speed transaction processing, and why
        traditional databases are more desirable for decision support and
        data warehousing.

        -- Bhaskar

