From: Peter C. <Pet...@me...> - 2007-01-31 12:51:05
> From: Selwyn Lloyd
> Is there a protocol or spec for feeds which once read are eaten, so to
> speak... i.e. you eat them once only; go back and they're gone...

If you consider the Web constraints of:

- Consumers are anonymous, may be connected erratically, and may change their address or may only be able to connect through a proxy (consider dial-up AOL subscribers or road warriors);
- Servers may be clustered, and anything that decreases cluster performance is a Bad Thing;
- Many users have webmail with no or limited access except via HTTP;
- Bandwidth is cheap

then the reason for the current architecture becomes clear.

You can't push to the consumer because you don't know what address they're on, or even whether they're online. You don't want to maintain a queue on the server (unless it's at the consumer's expense) because it requires synchronisation within a cluster plus state storage - expensive. You can't use the only (fairly) reliable store-and-forward protocol whose queue cost is borne by the consumer because there's a good chance any custom client can't read the queue (and the standard clients like Hotmail will suck the supposedly-custom messages out anyway). Oh, and you'd need custom software at the server to push the content.

By contrast, publishing a file to all the front-end Web servers in a cluster gives no synchronisation problems at the server, requires no storage at the server, and requires no custom software at the server (you can author an RSS feed in ed if you're feeling masochistic), and any consumer can access it from any location at any time via HTTP, which is proving to be almost the only universally-available access mechanism. Yes, polling is required, but bandwidth is cheap and good interop outweighs bandwidth (almost) every time - otherwise verbose protocols like HTML and XML wouldn't be standard.

Just my £0.02...

- Peter
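[A sketch of the polling side described above: because the feed is a static file, a consumer can keep re-polling cheaply with HTTP conditional GET, so an unchanged feed costs only a 304 with no body. The URL, ETag, and date below are hypothetical, not from the original post.]

```python
# Consumer-side polling sketch, assuming standard HTTP/1.1 conditional GET.
# No state is kept on the server; the consumer remembers the validators it
# last saw and echoes them back on each poll.
from urllib.request import Request


def conditional_get_headers(etag=None, last_modified=None):
    """Build headers for a polite re-poll of a feed.

    If the server previously sent an ETag or Last-Modified, echo them back
    as If-None-Match / If-Modified-Since so the server can answer
    304 Not Modified instead of resending the whole feed.
    """
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers


def build_poll_request(url, etag=None, last_modified=None):
    # A static file replicated to every front-end server answers this with
    # no per-consumer state: the "queue" lives entirely on the client side.
    return Request(url, headers=conditional_get_headers(etag, last_modified))


req = build_poll_request(
    "http://example.com/feed.xml",          # hypothetical feed URL
    etag='"abc123"',                        # hypothetical validators
    last_modified="Wed, 31 Jan 2007 12:00:00 GMT",
)
```

[The request is only built, never sent; in a real poller you would open it with `urllib.request.urlopen` and treat an HTTPError with code 304 as "nothing new".]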