
Use HTTP as the basis of the protocol

Greg Stein
2000-05-29
2000-07-02
  • Greg Stein

    Greg Stein - 2000-05-29

    I would highly recommend making your protocol an extension of HTTP/1.1.

    HTTP has been specifically designed to be extended. WebDAV (www.webdav.org) uses this feature of HTTP to implement its extra functionality. The extension scheme works *very* well.

    Benefits? Authentication. Proxy/firewall navigation. Existing tools. Existing knowledge. Existing infrastructure.

    For example, take Neon (www.webdav.org/neon/) for the client library. No more need for a custom client library (no need to write networking code!). (of course, Python users would use httplib.py and Java and Perl users would use their stuff (dunno)). On the server, it would just be an Apache server (skip writing a server!), and the actual OQ code would be a CGI or an Apache module. You could skip all the authentication/authorization coding and concentrate on OpenQueue functionality. You wouldn't have to worry about portability/optimization of network code. Threading and process handling would be taken care of. etc.

    I'd be happy to address any concerns people may have.

     
    • Paul Matthews

      Paul Matthews - 2000-06-12

      > Proxy/firewall navigation.

Is exactly why we shouldn't use HTTP! It's up to the administrators of the firewall to decide what goes through and what does not. I don't see how tunneling over HTTP would make firewall administration any easier.

No, it's definitely a new application, and a new protocol, so it should, eventually, run on a well-known port number.

       
      • Greg Stein

        Greg Stein - 2000-06-12

        The administrator still has full control over what goes through the firewall. How? Because you define a new HTTP method. Everybody knows GET, POST. Well, HTTP/1.1 details how new ones are created (e.g. proxy behavior, client and server behaviors when they are seen).

        For example, WebDAV created MOVE, COPY, PROPFIND, PROPPATCH, LOCK, UNLOCK, MKCOL.

        You would define a new one, and the administrator would allow or disallow that method to go through the firewall.

        AND you still retain all the benefits, tools, and experience of HTTP. WITHOUT reinventing the wheel.

         
        • Matt Jensen

          Matt Jensen - 2000-06-12

          This is a very interesting proposal, and in fact in 1998 someone proposed GENA, an event notification architecture over HTTP.

          Do you have in mind a way for the server to send clients new messages as they become available, without the client polling for them?  Some people have suggested that the server keep the HTTP connection open, and just keep streaming messages out to the client, but I don't understand how the client could control the flow of them, nor how it could signal success or failure in receiving the messages.
          I think it's very much worth investigating, whether as a main transport system or as a complementary one to a separate protocol.  Do you have suggestions for the above issues?

           
          • Greg Stein

            Greg Stein - 2000-06-12

            There are three ways for a notification to occur:
            1) leave the pipe open, the server sends "spontaneous" messages
            2) the client polls
            3) the client listens on a socket

            Options (2) and (3) are pretty self-explanatory. Both forms could use an HTTP extension for the wire protocol.

            Option (1) is the real question here. HTTP/1.1 defines the 1xx responses as "informational." The client makes a request, the server sends zero, one, or more 1xx responses, then the server completes the request with a 2xx,3xx,4xx, or 5xx response. Using this pattern, the client can send a request to the server, say, OQ-LISTEN stating that it is now listening for events. The server spits out, say, "160 Understood" and holds the connection open. It can then follow up with "161 Event Occurred" messages, with some key data in the HTTP headers (no body!).

When the client is done listening, it can simply close the connection. When the client wants to make another request, it gets a bit trickier (read: I'm not sure what "should" happen here). The client is certainly free to send a new request down the connection. The issue is whether the server will respond with a "200 OK" for the OQ-LISTEN or just blow that off and respond to the new request. I'd have to check with Roy on his thoughts there. The basic problem is that the server needs to "complete" the OQ-LISTEN before processing the new request. IMO, I'd say that it sends the 200 OK (or another 2xx response such as 204), then proceeds to deal with the new request.

            An alternative is that the client submits the OQ-LISTEN with an "Upgrade: OQ/1.0" header, signalling a change to a different wire protocol. The server responds with "101 Switching Protocols". After that, you can use a custom protocol to send the events. Before making the next request, the client can send a message over the wire to revert the connection back to HTTP/1.1.

            Sounds complicated? I hope not. All this is standard HTTP and you could build it using existing HTTP client libs and Apache. (the wire protocol switch would be tricky, and Apache may time out connections on you; but that is an implementation issue rather than a design problem)
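The 1xx pattern above can be sketched in a few lines. This is a minimal parser run over a recorded exchange rather than a live socket; the 160/161 codes and the OQ-Event header are invented here for illustration (real values would come from an actual spec):

```python
# Parse a stream of HTTP responses: zero or more 1xx informational
# responses, terminated by a final (>= 200) response.
import io

def read_responses(stream):
    """Yield (status, headers) for each response head in the stream."""
    while True:
        line = stream.readline()
        if not line:
            return
        status = int(line.split()[1])      # b"HTTP/1.1 161 ..." -> 161
        headers = {}
        while True:
            h = stream.readline().strip()
            if not h:
                break
            name, _, value = h.partition(b": ")
            headers[name.decode()] = value.decode()
        yield status, headers
        if status >= 200:                  # final response ends the exchange
            return

# What the server might send down the wire after an OQ-LISTEN request:
transcript = io.BytesIO(
    b"HTTP/1.1 160 Understood\r\n\r\n"
    b"HTTP/1.1 161 Event Occurred\r\nOQ-Event: new-message\r\n\r\n"
    b"HTTP/1.1 161 Event Occurred\r\nOQ-Event: queue-drained\r\n\r\n"
    b"HTTP/1.1 200 OK\r\n\r\n"
)
events = [h.get("OQ-Event") for s, h in read_responses(transcript) if s == 161]
print(events)   # ['new-message', 'queue-drained']
```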

            Back to options (2) and (3). Option (2) could define OQ-POLL which would "pick up" current events for the client. It would be a standard request/response HTTP transaction. No biggy.

            Option (3) has problems with firewalls/proxies. The server might not be able to contact the client.

            Cheers,
            -g

             
          • Greg Stein

            Greg Stein - 2000-06-13

            [ Forgot to respond to the bit about client responses to the events. ]

            Presuming that the client needs to proactively respond to events (for guarantees and non-repudiation), then you definitely want some kind of request/response system. Or you could send N events in a block and have a single response.

There are certainly a number of asynchronous mechanisms for dealing with the client's ACK/NAK response. Personally, I would recommend a simple request/response HTTP-based system. The client can send "200 OK" for an ACK, a 3xx, 4xx, or 5xx code for a NAK, or a "202 Accepted" which says "I've accepted your request, but don't have an answer at this time." The client is then expected to signal the server at a later date with an ACK/NAK. (of course, the server will retry N times with delays of M until an ACK/NAK is heard)
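The status-code convention above maps directly onto three acknowledgement states. A tiny sketch of that mapping (the state names here are invented for illustration, not part of any spec):

```python
# Map HTTP status codes from the responding side onto queue
# acknowledgement states, per the convention described above.
def ack_state(status):
    if status == 202:
        return "pending"    # accepted; ACK/NAK will arrive later
    if 200 <= status < 300:
        return "ack"        # message handled successfully
    return "nak"            # 3xx/4xx/5xx: delivery failed, sender retries

print([ack_state(s) for s in (200, 202, 503)])   # ['ack', 'pending', 'nak']
```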

            In my mind, async events from the server to the client (notifications) are simply a client/server system in reverse. I'd recommend using the "Upgrade" header mentioned in my other note to alter the protocol; this change would be to reverse the client/server notions. This implies that the client is going to need to open a new connection to make a request since it cannot proactively flip the original connection back.

            (hmm... stream of consciousness going on above... blue sky)

            This seems to imply that a client should open two connections to the server. One for requests and one for events. The event connection gets reversed with an "Upgrade" header.
            [ note that the server cannot simply open a connection to the client, per my other note: firewalls would get in the way here ]

            Yes, this means that a client is going to have a limited HTTP server on their side. But any kind of notification system is going to have some kind of listener on the client.

             
        • Paul Matthews

          Paul Matthews - 2000-06-13

          1. Maybe you could demonstrate with IPChains how I filter different types of HTTP requests?

          2. How do you get the w3c to add the new types?

3. WebDAV is related to "hyper text". How is sending 1.3Gb of packed binary data between mainframe installations in any way considered "hyper text"? Message queuing is NOT hypertext. It may carry "hyper text", or it may carry bank transactions, or it may carry client information, or RPC requests.

4. What benefit does HTTP offer to real-time messaging systems? The w3c papers examine in detail how poor a protocol HTTP is. How does robotics control benefit from HTTP?

5. How do you datacast/multicast in HTTP?

           
          • Greg Stein

            Greg Stein - 2000-06-13

            Not sure if I should respond to this. It is getting near a "troll" level. But, here goes...

1. You cannot set up an ipchains rule to filter some HTTP methods and not others. You need an app-level proxy/firewall, i.e. the stream needs to be parsed. HOWEVER, you can configure your web server to accept the OpenQueue methods on port N, and regular web methods on port 80. Set your ipchains rules to filter port N accordingly.

            2. The w3c does not need to add any new types. HTTP/1.1 *allows* for these kinds of extensions to occur without anybody's oversight. It is part of the HTTP design. [btw, protocols fall under the IETF's purview, not the W3C; the W3C deals with content] Now, it would be /nice/ to issue an Internet Draft specifying the extension, but there is no requirement to do so.

            3. Incorrect. WebDAV is content agnostic. It has nothing to do with text. One of the companies that I've worked with is using my mod_dav module to create a WebDAV front end to their RDBMS. On this infrastructure, they plan to use DAV for managing video, audio, etc in the database. They are thinking in terms of *many* gigabytes. Outlook Express uses WebDAV to talk with the Hotmail servers; while a mail message could be considered "hyper text", I'd say that is a stretch.

            4a. OpenQueue is not real-time. From the project page: "OpenQueue is an open protocol for publish-and-subscribe message queuing. This enables language-independent, loosely-coupled, asynchronous communications between applications on platforms supporting TCP." As such, an HTTP-based protocol is ideal (IMO).

            4b. Please provide a reference -- I would be very interested in reading the W3C's comments on HTTP. I can bounce their points off Roy (Fielding) to get his thoughts on the W3C's issues.

4c. Dunno. I haven't thought about that problem space w.r.t. OpenQueue, and your question is a bit open, so I can't wing an answer either :-). Presuming you're saying the robot requires real-time control, then just about any protocol is going to have issues. UDP is subject to dropped packets. TCP is subject to the same drops with (potentially) lengthy timeouts.

            5. HTTP requires a reliable transport; it is not bound to TCP. It also uses a request/response paradigm which does not map well to multicast scenarios. Regardless, I don't see this point as highly relevant to OpenQueue.

             
            • Paul Matthews

              Paul Matthews - 2000-06-13

Yes. Sorry. I've been on 16hr shifts, 7 days a week for 6 weeks now. I've lost all of my humor and most of my sanity. See how you feel after writing 9Mb of COBOL source in 6 weeks. (Yes. Really.)

               
            • Paul Matthews

              Paul Matthews - 2000-06-13

I've had dinner, and some caffeine, and regained some sanity. Yes, definitely out of order.

The problems with the HTTP protocol are outlined in the SMUX papers. SMUX is an experimental protocol the w3c is working on. The problem with it is that it tries to do too much. SMUX is designed to replace all TCP, UDP, and RPC-type traffic. It expects that the firewall (i.e. packet filter) will unbundle the protocol, look inside, and filter based on what it finds inside for the individual channels.

The problem with an application-level firewall is not only the speed, but that at this point it is often 'too late': the damage was done before it reaches the high-level firewall. The other problem is 'opportunity in complexity'. For example, the SMUX protocol defines the use of UTF-8 encoded strings; the code they have is correct up to a point, but it does not know how to handle anything outside of the Basic Multilingual Plane. A good opportunity for someone to break in using 6-byte UTF-8? Possibly.

Take my machine at home for example. It has an ipchains rule that drops all packets that come to it from outside that are trying to talk X11. I don't rely on xhost, because at that stage it's possibly already too late.

If the people who run the firewalls here at work discover that other protocols (say OpenQueue or 'HTTP telnet') were tunneled over HTTP, then HTTP will be pulled.

The main problems with adding a new GET/POST-type protocol are namespace pollution and proxies. Say we add GETMSG and SENDMSG; what's to stop the w3c from defining these to mean something different in the future? The other problem is that between me and the outside world right now are three firewalls and four proxies. If the proxies see anything they don't recognise, they /dev/null it.

To me it's a 'fish' vs 'fowl' thing. HTTP and WebDAV are 'fish'. OpenQueue is 'fowl'. Telnet is 'beef'. Different applications are what different port numbers are for. If you run every application over the same port number, then you've just moved /etc/services somewhere else, and moved firewalling to a much weaker position.

Hmmm. What I need now is 'beer'.

              --
              Paul Matthews

               
              • Greg Stein

                Greg Stein - 2000-06-13

                I believe the port number (see other post) solves this issue. Please re-specify if that isn't the case.

                re: namespace pollution. Sure. But who else will define OQ-xxx? I'd guess the chance is near zero.

                You want to be really safe? Write that Internet Draft that I was referring to. That will stake out your ground quite effectively. Want to be doubly sure? Get the I-D thru the IETF as an actual Proposed Standard.
                (it's work, but the only reason you're doing it is to defend against the possibility that somebody else might use "your" HTTP method namespace)

                 
            • Paul Matthews

              Paul Matthews - 2000-06-13

              (Found "coke", could not find "beer")

There are currently only 5 machines on the internet that can 'see' my machine at home. And only 5 machines on the internet my machine can 'see'. This is not an ISP problem. This is by design. Everything is locked down tight as a drum. I can talk 8080 to one machine and receive its responses, but I cannot talk 8080 to any other machine. To the second machine I can talk NNTP and see its responses, but I cannot talk any other protocols to it. The same applies to the other machines, but for SMTP, SSH and FTP.

              When they finally run fiber down my street I will be putting a web server on my machine and allowing anyone to talk to it on port 80. Not a problem. I know how to secure apache.

Let's say OpenQueue is a success. And there is a Slashdot News Ticker Message Queue to which I am subscribed. I expect the news ticker service to call me up (the datacasting bit) and send me the new headlines. (This might not work in reality; it's just an example.)

And being a paranoid bastard, I would want to put in a rule that said that Slashdot, and only Slashdot, can talk OpenQueue to my machine.

And although that rule could be done inside Apache, it's possibly too late to filter there, and MOST IMPORTANTLY, the local message queue would be running as the apache user.

The duties of the web server and the duties of the queue server are different. The files required by the web server and the files required by the queue server are different.

And that is the crux of the matter. The group of people you want to talk to your web server is different from the group you want to talk to your local message queue server.

Now, back to COBOL. This one's small: only 211k of source.

              --
              PM

               
              • Greg Stein

                Greg Stein - 2000-06-13

                I mentioned this before, but you may have missed it:

                Put the OQ server on a different port. Sure, that server happens to be Apache (woo! don't have to write the bugger yourself!), but nothing says it must live on 80.

                Now, let's take option (3) from the earlier list: the client listens at a port and server(s) connect to it to deliver events. This appears to be your thinking for your Slashdot receiver. No problem so far.

                What do you tell Slashdot to send its notifications to? I would say a URL:

                http://paranoid.bastard:999/oq/slashdot/incoming-q/

                Your Apache server runs two virtual hosts. One on port 80 which allows GET, POST, OPTIONS. The other vhost is on port 999 only allowing OQ-xxx.

                Your firewall is configured according to your strict policy. No problems!

                 
                • Paul Matthews

                  Paul Matthews - 2000-06-13

I must come from the older generation, when listening on a port and fork()ing was a trivial thing to do, even in C. In Java it's even nicer: one thread that listens and runs off multiple threads as connections appear.

                  The below are not trolls. They are serious questions.

If OpenQueue is implemented as a CGI or a module, what benefits, other than the above, does Apache give us? More importantly, what disadvantages does it have?

Instead of Apache, why not use servlets without a web server (if I'm reading javasoft correctly, you can run a dedicated servlet server WITHOUT the web stuff at all)?

Why not RMI or EJB? Potentially a much neater solution. RMI has callbacks....

                  --
                  PM

                   
                  • Greg Stein

                    Greg Stein - 2000-06-13

                    I don't see it as a troll, but a valid set of questions :-)

                    Listen/fork is easy. Sure. But do it portably. Take advantage of threads when you have them. Enable/disable the Nagle algorithm as appropriate. Use TCP_CORK on Linux. Use Win32 native sockets on that platform. Use the appropriate/fastest mutex around your accept() call to prevent the "thundering herd"; oh, unless you're on Linux or Solaris where you can set "wake one" semantics. Fork a CGI or use a builtin module operating in the server process? When you fork, will the OS duplicate then kill all the threads, so we need a separate daemon for CGIs, or use fork1() or is it automatically fixed on that platform? etc...

                    It gets a bit more difficult when you want something broadly portable and efficient :-)

                    And yes: threading makes things easier, and Java does have certain advantages over C (but some disadvantages, too).

                    1) what does Apache provide? see the big paragraph above :-) ... Apache has solved many of the problems for portability and efficiency. It has wrestled with the varying semantics across operating systems. It has figured out how to properly buffer output, but also to *not* buffer when important. It knows networking. etc.

                    2) There is no need for Apache. *That* is another reason why I'm suggesting an HTTP-based protocol. Don't want Apache? Fine. Use the ACME Web Server. Use Tomcat and a servlet. Any server that speaks HTTP is going to give you a huge leg up over other solutions.

                    Let's say that you went with something other than HTTP. The OpenQueue project develops a reference server in C. You don't like that and want it in Java. Well, you have to go and implement a lot of the message parsing, handshaking, authentication, proxy support, etc all over again.

                    I simply can't stress enough how much the existing code base can help. There is a LOT of stuff out there that knows HTTP. And do you really want to redevelop authentication, encryption, proxy support, message digests, flexible/extensible request/response definitions, etc?

                    3) RMI and EJB are specific to Java. I am unaware of a Python or Perl implementation of those. I'm guessing there is a C version, but I don't know about it. By using RMI and EJB, you lock the entire system into a Java solution. By using HTTP, I can use Python's httplib.py and whack up a simple client in 10 minutes. A CGI for the server side would take maybe 30 (because I'm assuming it is a bit more complicated :-). Need a listener on the client? Python's BaseHTTPServer should do the trick.
                    [ Python just used as an example here; substitute your favorite language... I'm sure it has HTTP capabilities ]
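The "whack up a client in 10 minutes" claim is easy to check. Here is a minimal sketch in modern Python (`http.client`/`http.server`, the successors of the `httplib.py`/`BaseHTTPServer` mentioned above): a toy server answering a hypothetical OQ-POLL extension method, and a client calling it. "OQ-POLL" and the /q/news path are invented for illustration; they are not part of any published OpenQueue spec.

```python
# A stock HTTP toolkit speaking a custom extension method.
import http.client
import http.server
import threading

class OQHandler(http.server.BaseHTTPRequestHandler):
    def log_message(self, *args):       # silence request logging for the demo
        pass

def do_oq_poll(self):
    # Pretend one queued message is waiting for this subscriber.
    body = b"event: new headline\n"
    self.send_response(200)
    self.send_header("Content-Length", str(len(body)))
    self.end_headers()
    self.wfile.write(body)

# The handler dispatches on "do_" + method name, so a hyphenated
# extension method gets attached via setattr().
setattr(OQHandler, "do_OQ-POLL", do_oq_poll)

server = http.server.HTTPServer(("127.0.0.1", 0), OQHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("OQ-POLL", "/q/news")      # custom method, ordinary HTTP
resp = conn.getresponse()
status, data = resp.status, resp.read()
print(status, data.decode())
conn.close()
server.shutdown()
```

Everything else (status codes, headers, bodies, connection handling) comes free with the toolkit, which is exactly the point being argued.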

                    Oh, you say, "most people can't do that in 10 minutes... you say that because you know Python's httplib very well, and you know how HTTP works."

                    ... wait for it ...

                    Well, YAH. My point exactly. I know HTTP. You know HTTP. We don't have to learn new protocols to work with OpenQueue. Sure, a few new methods, some headers, and possibly a special request body or two. But all of that fits into a well-known pattern of operation.

                    By using HTTP, you leverage code, but you *also* leverage what people know about the subject.

                    ---
                    (*) if RMI/EJB can use CORBA on the wire, then maybe you can implement an OQ client/server in other languages, but my understanding was that you couldn't do this.

                     
                    • Paul Matthews

                      Paul Matthews - 2000-06-14

                      I no more want to see OQ implemented over HTTP, than you want to see OQ implemented over RMI.

                      (BTW: Reverse engineering RMI is on my todo list, just not near the top at the moment. RMI from C.)

Most of the reasons you suggest for using Apache apply equally well to inetd. I strongly recommend hooking in behind inetd. One simple program: read from stdin, write to stdout. No problems. inetd takes care of forking. An order of magnitude faster than CGI.

                       
                      • Greg Stein

                        Greg Stein - 2000-06-16

                        1a) why is OQ over HTTP bad? I am unclear on your (remaining) issues here. I had thought that I addressed most of them.

                        1b) OQ over RMI isn't good. The latter is proprietary. "OpenQueue" seems to want to avoid that :-)

                        2) I have two points here:
                        a) Apache vs inetd. Implementation is orthogonal to my main point: HTTP is a great basis for the protocol. There would be a ton of reinvention to create a new, custom OQ protocol. And that reinvention would not make use of existing tools, knowledge, or infrastructure.

                        b) if you *do* want to talk implementation: inetd doesn't come close to Apache. inetd always forks. Apache doesn't have to (nobody said CGI is required; write the thing in PHP, mod_perl, or mod_python)

                         
        • Paul Matthews

          Paul Matthews - 2000-06-13

          ...still ranting...

Why don't we add SENDMAIL and GETMAIL commands to HTTP so we can get our email over it? Or how about IHAVE commands so we can read our news? Why don't we have an HTTP TELNET command so we can use https to telnet into remote machines via the web server? Who needs secure shell? I know, how about HTTP PGSQL so we can talk to Postgres via HTTP.

          ..rant finished...

None of the above are in the hypertext problem domain. Neither is message queuing. Just because something can be done doesn't mean it should be done.

          --
Paul Matthews

           
          • Greg Stein

            Greg Stein - 2000-06-13

            I'm posting here to provide an alternative point of view. Another way of looking at the problem. Possibly a way to simplify the overall design and development cost.

            I'm not here to argue.

            thx,
            -g

             
            • Anonymous

              Anonymous - 2000-07-01

              Jumping in to the fray ...

I like gstein's idea of using HTTP, but acknowledge that maybe it is not where you guys want to go.  Anyway, here are a couple of comments.

1.) The advantages of using HTTP as the messaging protocol are great, i.e. it is a very well known existing standard that is already well supported by several open source web servers.  So you wouldn't have to write your own server.  Understandably, you don't think writing a robust, portable and efficient web server is a problem; if so, then fine, I guess that is not an issue for you.

2.) I don't understand all this argument about adding proprietary extensions.  Why would you need to add proprietary extensions?  Why couldn't everything be wrapped inside the message body of the request using the POST method?  To the best of my knowledge, there is no restriction (other than that it should be 7-bit ASCII, obviously) on what is contained inside the request message body.  Then writing your message queue server could be as simple as writing a CGI script, Apache module, or what have you.

              3.) The main disadvantage of using HTTP, that I see, is that HTTP is not set up to do notifications.  You could use polling, you could attempt to maintain the connection ... neither of these sounds very satisfactory.  HTTP is meant to be connectionless and stateless.  That does not suit notifications very well.

4.) So what if we say "Ya, HTTP would buy us a lot, but getting it to work would require kludging".  So I have a suggestion that might be agreeable to both nudge and gstein.  What about using BXXP (Blocks eXtensible eXchange Protocol)?  It's an awesome protocol and it is supposed to be given royal assent by the IETF to eventually take over from HTTP.  What about looking at using BXXP as your protocol?  If it does take over (which looks quite possible), then you will have all the advantages of using HTTP, without any of the disadvantages (except of course the 7-bit ASCII requirement, which it seems we are doomed to keep for all eternity).  Take a look at it.  It is well thought out and very cool.  Even if it does not take over, it is still a well laid out protocol that (I think) would be ideal for use as a message queue server protocol.

Check it out if you haven't already:

              News article:
              http://www.nwfusion.com/news/2000/0626bxxp.html

              Spec:
              http://search.ietf.org/internet-drafts/draft-mrose-blocks-protocol-04.txt

              Cheers.

               
              • Greg Stein

                Greg Stein - 2000-07-01

                re: 2) Please see http://xent.ics.uci.edu/FoRK-archive/feb98/0238.html

                re: 4) BXXP has been recently hyped by a few articles, but it stands no chance of replacing HTTP within the next ten years. There are 100 million clients (more?!), and 17+ million servers. Inertia alone will hold it back. And the IETF has said no such thing about BXXP. They assist in forming standards, but do not make qualified judgements like that. Hell, a Working Group for BXXP has not even been chartered yet.

                BXXP is a nice *framework* for creating network protocols. In that sense, it could also be a good base for a new OQ protocol. However, it is very immature at this point. There is no standard, there are no tools, there are no people with experience with it, etc. The protocol doesn't have considerations for proxies, caches, variant client/server capabilities, etc. There is still much to work out :-)

                Don't get me wrong: I *do* think that BXXP *could* probably be a better basis for a protocol. But not for about 18 months. It is really a wait-and-see thing.

                 
                • Anonymous

                  Anonymous - 2000-07-01

Just out of curiosity, how long did it take for HTTP/1.1 to replace HTTP/1.0?  I don't think it was 10 years.  Maybe I'm comparing apples and oranges here, but if Microsoft and Netscape decided to support BXXP, it wouldn't take very long to replace HTTP (of course, I admit this is a pretty big if ;-)

                  Let's put it this way.
                  - A message queue server needs a protocol.
                  - An open and standard protocol is a good thing, (I think we are all agreed on that right).
                  - It is better to use a well defined, well known protocol than to make up a new one from scratch, (not to mention, making one up from scratch requires work ;-)
- There is a lot of hype behind BXXP, and hype for it is a good thing because it means that you might end up with a self-fulfilling prophecy.  Look at the hype that was around Java in the beginning, or the hype that is around open source now, or the hype around WAP, or the hype around Linux, etc.  Hype gets people moving, using it, etc. With this in mind, people are already starting to use BXXP and are investing money in it (look at www.sourcexchange.com for at least one company that is paying for people to do stuff with BXXP already). BXXP has a good chance of becoming a standard protocol (even if it does not replace HTTP) and is a well designed (although incomplete, yes, but this will come quickly) protocol.
                  - From a technical point of view, BXXP is better protocol for a message queue than HTTP is, (a LOT better).

Still, a message queue server using HTTP would be easy to do, would be immediately useful, and would make it through a lot of firewalls (most, I would think).  Talking to the message queue server would also be a piece of cake (you could use JavaScript within a web page to send and receive messages, even).  So don't get me wrong, I think that using HTTP for a message queue protocol is a good idea, but I guess that is another project.

                   
                  • Greg Stein

                    Greg Stein - 2000-07-02

hehe... trick question. Answer: HTTP/1.1 has *not* replaced HTTP/1.0. And even in cases where clients say they do HTTP/1.1, they really don't. There are also a lot of proxies, firewalls, client toolkits, libraries, etc. to update before we see broad HTTP/1.1 support. For example, Python didn't support HTTP/1.1 until earlier this week when I checked my module into the standard distro. (yes, you could always get the module from my pages, but only recently was it added to the standard release)

                    Re: MSFT/NSCP picking up BXXP. Don't forget that Apache would need to do the same. But no fear on that front: there are a number of projects at SourceExchange to do just that.

                    In any case... your points are quite valid. HTTP is good at this point. BXXP could be even better, later.

                     
                    • Anonymous

                      Anonymous - 2000-07-02

And I know a couple of people still using Apple IIs, so I guess you could say those haven't been replaced either ;-)

If your definition of 'replace' means there is absolutely no trace of it anywhere, then we could safely say (using this definition) that nothing is ever replaced.  Fine. I think it's not a very useful definition, but whatever.  I would say replacement occurs when you go over 50% (which for HTTP/1.1 happened a long time ago).  Different definitions, I guess.

So back to using HTTP as the message queue protocol.  How would a person solve notifications?  I wonder if the message queue server could just keep the connection open.  I know you mentioned this earlier, but I'm just trying to work out the details.  One scenario I can think of is that more than one connection is used between the client and server.  One connection is used to receive asynchronous messages, and the other connection is used to send messages and to send receipts for messages sent to it asynchronously on the first connection.

                      - The client opens a connection on one thread and sends a request asking for a "notification" connection to be established, identifying who it is and all that. The server never finishes the response, (say it uses chunking to send back the data in chunks) and keeps the connection open indefinitely.  This channel is used to send messages to the client.

- Whenever the client needs to send a message, it would open another connection on a separate thread and send the message.  This connection would not have to remain open (although it could).

- Whenever someone sends a message to the client, it is sent through the continuously open connection, where the notification thread has been blocked waiting for data and now gets the data and acts upon it.  It sends back a "message received" message on the other connection (and of course gets back an "OK" reply on that connection).

I wonder if this would fool firewalls, or would firewalls have a built-in time-out in them?  Probably it would work.  The server might have to send a "keep open" message back periodically, just so that there was some activity so that the connection IS kept open.

This might work, but it is a bit clumsy.  I wonder if someone could base everything on a much more elegant protocol (like, say, BXXP ;-) and then 'wrap' this protocol inside HTTP like described above?  I like that idea better.  I think that is what a lot of other message queue servers actually do (except using their own proprietary protocol), or at least something along that line.  Many queue servers have some sort of HTTP wrapping technique to allow them to tunnel through firewalls.  Don't know how it works exactly, but probably it is 'something' along these lines.  Either that or they use polling, which will obviously work, but is just kind of grossly inefficient.
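The "never-finished chunked response" idea can be sketched offline. Assuming each event arrives as one HTTP/1.1 chunk (purely an illustrative convention, not part of any OpenQueue spec), the notification thread is just a chunk reader:

```python
# Read events off a held-open chunked response, using a recorded
# byte transcript in place of a live socket.
import io

def iter_chunks(stream):
    """Yield the data of each HTTP/1.1 chunk until the 0-length chunk."""
    while True:
        size = int(stream.readline().strip(), 16)   # hex chunk-size line
        if size == 0:
            return
        data = stream.read(size)
        stream.read(2)                              # consume trailing CRLF
        yield data

# Body of the long-lived "notification" response, as it might look on
# the wire after the response headers (a real connection would simply
# never send the terminating 0-length chunk):
body = io.BytesIO(
    b"11\r\nheadline: first!\n\r\n"
    b"12\r\nheadline: second!\n\r\n"
    b"0\r\n\r\n"
)
events = [c.decode().strip() for c in iter_chunks(body)]
print(events)   # ['headline: first!', 'headline: second!']
```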

                       
