I have used GT.M intensively since version 4.x, and I have to admit that, after some getting used to it, GT.M is a really great M implementation.
It is fast and flexible; only the database stability should be improved.
Overall, I can really recommend GT.M.
So much for the introduction.
Ok, three years ago I started writing my first web server in GT.M.
In the meantime I have managed to write a quite usable (though not optimal) program.
But I have a problem: I am not able to spread the web traffic across several jobs/processes.
Parallel requests are therefore handled sequentially, and under heavy load I see very long wait times.
This is what I integrated into the routine with the kind help of Frans Witte:
s %ZNPort=80 ;Default
n $ZT s $ZT="ZGOTO "_$ZLEVEL_":singleQ" ;ET
e s %http="-1,NotOpen" q
s $ZTRAP="ZGOTO "_$ZLEVEL_":singleC",%http=0
u %ZNDev s %http(0)=$KEY
W /LISTEN(1) s %http(1)=$KEY
FetchSocket u %ZNDev k (%ZNPort,%ZNTimeS,rootdir,startpg,postrtn,vdotrtn,stoprtn,%ZNDev)
s %http=0 f d q:%http i $$stop c %ZNDev H
. W /WAIT(%ZNTimeS)
. I $KEY]"" s %http(2)=$KEY,%http=2,%http("IP")=$p($KEY,"|",3) q
; Store the connection socket in local variable,
; Close listen socket, so another process can start listening on this port. ???
; Force connection socket to be the active ???
s %http("usedev")="U """_%ZNDev_""":(SOCKET="""_%ZNSock_""":NOWRAP)"
This works very well so far; the device (%ZNDev) and the socket cooperate nicely.
Through the device I can read requests from a web browser and return the result as HTML text.
Now my question:
How can I have the current (working) socket served by another job/process?
Or, in other words, how can I arrange for other sockets to be served in parallel with the current one?
I have studied the GT.M documentation intensively and tried everything that came to mind.
But whenever I try to close the device or detach the current socket, the connection to the web browser is dropped.
Could anyone help me with this issue, please?
With GT.M, you cannot pass a socket from one process to another; that functionality does not exist. My suggestion would be to deploy GT.M-based servers under the control of inetd/xinetd.
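For reference, a minimal xinetd service definition for this might look roughly as follows (the service name, user, and paths here are assumptions, not from the thread). Under inetd/xinetd, each incoming connection starts a fresh GT.M process whose principal device is the connected socket, so the M routine simply reads the request from and writes the response to $PRINCIPAL:

```
# Hypothetical xinetd service: one GT.M process per connection.
service gtmweb
{
        type            = UNLISTED
        port            = 80
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = gtmuser
        server          = /usr/local/gtm/mumps
        server_args     = -run web^SERVER
        disable         = no
}
```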
Can you tell me more about your words "the database stability should be improved" please? As far as we are aware, the database itself is rock solid.
1. If you kill a process with a kill -9, there are small windows in which damage can theoretically occur. Are you killing processes?
2. If you power down the system (e.g., pull the plug), the database file on disk can be structurally damaged. But if you turn on journaling, the database can be recovered on power up, so there should be no stability issue there.
3. If you are calling code in C, then bugs in that code (such as wandering pointers) can of course result in database damage.
So, an explanation of what you are doing to experience instability would be greatly appreciated. Thank you very much.
inetd/xinetd is not optimal for me, as I use GT.M exclusively, and its performance and security are excellent.
If sockets cannot be moved from one process to another, HOW can I then, for example, spread the traffic on port 80 across several processes, so that several requests can be handled at once?
Sorry, I share your opinion: under normal circumstances the GT.M databases are "rock solid" and very fast.
No question about it.
But if I cut power to a running computer, at least one database ends up damaged.
That is what I meant by stability.
If the GT.M databases survived this worst case without errors, they would be perfect.
There are M implementations that can handle this using "global hardening".
Journaling on a 24/7 system with heavy data movement is not an optimal solution for me; the journal files grow very fast.
At the moment I work around these problems by copying the whole database at runtime (mupip BACKUP).
There are always three copies available that I can use in a disaster situation.
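Such a rotation can be driven by a small script run from cron; a sketch under assumed paths and a made-up three-slot scheme (not the poster's actual setup):

```
#!/bin/sh
# Sketch: keep three rotating online backups of all regions.
# /backups/1 .. /backups/3 are hypothetical target directories.
slot=$(( $(date +%j) % 3 + 1 ))           # rotate through slots 1..3
mupip backup -online "*" /backups/$slot/
```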
Thanks for the fast reply and best regards,
Frans S.C. Witte
The socket device supports multiple concurrent opens.
So you can use the JOB command to start additional servers.
When I still worked for DIC Information Consultants we had an application that read over 100 messages/sec from a stock-exchange feed. We had 4-6 concurrent processes reading from the same socket device.
See the SCKSERV1 example.
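A minimal sketch of that approach (the routine, label, and handler names here are made up, not taken from SCKSERV1): a parent JOBs off a few workers; each worker takes its turn listening on the port, accepts one connection, releases the listen socket for the next worker, and then serves the request, much like the listen/close pattern in the original post.

```
WEBSRV  ; sketch: parent starts four worker processes
        n i f i=1:1:4 j worker^WEBSRV
        q
worker  ; each worker loops: grab the port, accept one connection, serve it
        n dev,conn s dev="web"
        f  d
        . f  o dev:(listen="80:TCP":attach="lsn"):1:"SOCKET" q:$t  h 1  ; retry until the port is free
        . u dev w /LISTEN(1)                ; start listening
        . f  w /WAIT(30) q:$KEY]""          ; wait for an incoming connection
        . s conn=$p($KEY,"|",2)             ; handle of the connected socket
        . c dev:(socket="lsn")              ; free the port for the next worker
        . d serve(dev,conn)                 ; hypothetical request handler
        . c dev
        q
```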
By closing the listen socket, the next GT.M JOB will be able to "open the listen socket":
; Close listen socket, so another process can start listening on this port.
; Force connection socket to be the active
Frans S.C. Witte
I don't understand the journaling concern. The default size of a journal file is 4GB, but you can make it smaller, e.g., 100MB. You can also set up a cron job that every hour or so executes a mupip set -journal command, and you can periodically delete any previous-generation journal files (the *.mjl_* files). So the amount of disk used for journal files can be bounded; and in this day and age of cheap disks, I am puzzled that you are concerned about a few GB. If you run BEFORE-image journaling, then when powering up the machine you just execute mupip journal -recover -backward and it will use the journal files to recover the database files. That's how GT.M implements stability against yanked-out power cords.
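As crontab entries, such a scheme might look roughly like this (the gtmprofile location, journal directory, and retention period are assumptions):

```
# Switch to a new journal generation at the top of every hour,
# keeping each current *.mjl file small.
0 * * * *  . /usr/local/gtm/gtmprofile && mupip set -journal="on,before" -region "*"
# Prune previous-generation journal files (*.mjl_*) older than two days.
30 3 * * * find /var/gtm/journals -name "*.mjl_*" -mtime +2 -delete
```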
An alternative, if you are concerned about disk usage, is NOBEFORE journaling. With this, to recover the system you would apply the journal files to a backup with a mupip journal -recover -forward command.
Also, a better way to implement business continuity is to set up a logical multi-site configuration. But let's walk before we run, and set up journaling first.
GT.M version 6.1 added the ability for a parent process to pass a SOCKET device to a child process in a JOB command. See change GTM-7322.
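From memory, the shape of that feature is roughly as follows; the label names and the exact argument lists below are assumptions, and the authoritative syntax for WRITE /PASS and WRITE /ACCEPT is in the V6.1 I/O Processing documentation. The parent accepts the connection, JOBs off a child, and hands it the connected socket:

```
accept  ; parent: a connection with handle conn is current on device dev
        j child^WEBSRV                  ; start the child worker
        u dev w /PASS($zjob,,conn)      ; hand the connected socket to it
        q
child   ; child: pick up the socket passed by the parent
        n dev,conn s dev="inherited"
        o dev:::"SOCKET"                ; socket device to receive into
        u dev w /ACCEPT(.conn,,30)      ; wait up to 30s for the passed socket
        ; ...read the request / write the response on socket conn...
        q
```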