Fast piping of Shell Command output

  • robtweed

    robtweed - 2006-12-14

    The standard way of accessing OS information from within GT.M is to use zsystem calls, pipe the output to a file, and then open and read the file contents.  I found that doing this frequently added significant overhead and was pretty slow, so I think I've come up with a much slicker and faster mechanism.  It works just fine for me in all the situations where I need it.

    First you need a simple MUMPS routine that just reads stdin and copies it into a global, with some upper limit set, e.g. (here arbitrarily capped at 200 lines):

    shellPipe    ; read lines from stdin and copy them into ^%mgwPipe
        n i,x
        k ^%mgwPipe    ; clear any output left by a previous run
        f i=1:1:200 r x q:((i>20)&(x=""))  s ^%mgwPipe(i)=x

    Then invoke it as follows from within GTM:

    zsystem command_" |mumps -run shellPipe^xxx"

    e.g.:

    zsystem "ls -l |mumps -run shellPipe^xxx"

    ...and the output ends up in the ^%mgwPipe global.

    Put some locking around the global so that multiple users can run this without clashing, merge the global into a local array, delete the global, strip the trailing empty lines from the array, and there you go!
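    For readers outside GT.M, the same pattern can be sketched in Python (purely illustrative: the dictionary stands in for the ^%mgwPipe global, and the 200-line cap mirrors the routine above; the function name is made up for this sketch):

```python
import subprocess

def shell_pipe(command, max_lines=200):
    """Run a shell command and copy its output lines into a keyed
    store, one entry per line, mirroring shellPipe's 200-line cap."""
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    store = {}
    for i, line in enumerate(result.stdout.splitlines(), start=1):
        if i > max_lines:
            break          # same idea as the f i=1:1:200 loop limit
        store[i] = line
    return store

listing = shell_pipe("ls -l")
```

    The point of the cap is the same as in the M routine: bound the damage if a command produces unbounded output.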

    • James A Self

      James A Self - 2006-12-15

      Hi Rob,
      Very interesting. What led you to think that piping command output to a global would be faster than to a file?

      How much overhead and how much of an improvement do you see? Do you have some simple tests I could run to compare with your results?

    • robtweed

      robtweed - 2006-12-15

      Hi Jim

      Been a long time since we conversed!

      Actually it looks like my idea isn't an improvement at all! :-)   I just tried some comparisons of piping via a file versus a global to get file information (size and modification date) from the ls command.  In a 1,000-iteration loop the global mechanism took 38 seconds, whilst the file mechanism took 30, each ±1 second across attempts.
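      For anyone wanting to reproduce a comparison like this outside GT.M, here is a rough Python harness (the command, iteration count, and helper names are all illustrative; it measures temp-file capture versus direct pipe capture, not GT.M globals):

```python
import os
import subprocess
import tempfile
import time

def via_file(command, n):
    """Pipe command output to a temp file, then read it back
    (the zsystem-to-file pattern)."""
    t0 = time.perf_counter()
    for _ in range(n):
        fd, path = tempfile.mkstemp()
        os.close(fd)
        subprocess.run(command + " > " + path, shell=True)
        with open(path) as f:
            f.read()
        os.unlink(path)
    return time.perf_counter() - t0

def via_pipe(command, n):
    """Capture stdout directly over a pipe - no intermediate file."""
    t0 = time.perf_counter()
    for _ in range(n):
        subprocess.run(command, shell=True, capture_output=True)
    return time.perf_counter() - t0

file_secs = via_file("ls -l", 50)
pipe_secs = via_pipe("ls -l", 50)
```

      Which variant wins will depend on the OS, the filesystem, and what sits at the reading end, so treat any single run with caution.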

      So... maybe piping via a file is actually still the fastest way to go.  However, I do like the global-pipe mechanism, since you get the information directly into GT.M with a lot less messing around - the processing logic is a lot cleaner IMHO.   The question is, is it worth that extra 0.008 seconds per call!?

      I do think it would be nice to have a faster, more elegant way to pipe the stdout channel into GT.M to capture OS information.

      By the way Jim, I'd be interested in getting your thoughts and feedback on the GT.M-based Virtual Appliance we've made available - see

      • Dennis Ballance

        Dennis Ballance - 2006-12-15

        The downside to the GT.M pipe is that you have to start a new GT.M process to consume the input (so you end up with a process stack that looks like shell->GT.M->shell->GT.M).  Of course, the relative cost of starting a new GT.M process versus piping through the filesystem may vary between OSs too, but I would expect file piping to be more efficient.  Have you compared using named pipes instead of plain files?
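        For reference, the named-pipe idea looks roughly like this, sketched in Python with os.mkfifo (all file names here are illustrative; on the GT.M side you would simply open the FIFO as if it were an ordinary file):

```python
import os
import subprocess
import tempfile

# Create a named pipe (FIFO) in a scratch directory.
tmpdir = tempfile.mkdtemp()
fifo = os.path.join(tmpdir, "cmdpipe")
os.mkfifo(fifo)

# Start the command with stdout redirected into the FIFO.  The shell
# blocks on opening the FIFO for writing until a reader appears, so
# the two sides rendezvous without a temp file touching the disk.
writer = subprocess.Popen("echo hello > " + fifo, shell=True)

with open(fifo) as f:          # opening for read unblocks the writer
    lines = f.read().splitlines()

writer.wait()
os.unlink(fifo)
os.rmdir(tmpdir)
```

        Unlike a temp file, the FIFO's contents never need to hit disk, and the reader can start consuming lines before the command finishes.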


      • James A Self

        James A Self - 2006-12-17

        I am interested in learning more about it. Is it easily available without VMWare so I could run it on servers already running GT.M and Apache on Debian?

    • robtweed

      robtweed - 2006-12-16


      I haven't looked at named pipes yet, so I wouldn't know how to use them for this kind of thing.  Could you give me some examples of how they could be used to achieve the effect we're after here?  I could then implement some tests to compare them with the file and global pipe techniques.



    • robtweed

      robtweed - 2006-12-18


      Yes, absolutely - you could run all our software on your own Linux/Apache/GT.M server(s).  I haven't yet packaged up the GT.M versions of our products separately, however, and to be honest the easiest way to install them is to grab the VM and extract our files from it (e.g. by FTP).  The VM will give you a reference installation so you can see what goes where.  For precisely this reason, the main documentation web page in the VM's portal tells you which components have been installed where and how they've been configured.

      Also, the VM is a useful pre-packaged environment in which to try out all our software and find out what it can do, how it works, etc. - there's a whole set of tutorials, examples and documentation loaded into the VM to guide you through it.  So I'd recommend using the VM for evaluation and learning, and then building your own server if you want to.

      Just click on the announcement panel at and follow the download link.


