RE: [Planetlab-arch] Proposed Changes for Dynamic Slice API
From: Steve M. <sm...@CS...> - 2004-01-27 22:02:50
On Tue, 27 Jan 2004, Timothy Roscoe wrote:

> Quick response: your answer doesn't address scalability at all (which
> is about how the system grows), though it does point out that the
> current solution is adequate at the moment.

my answer addressed two aspects of scalability:

1) relative scalability - i claim that providing a (compressed) static
file is in general more scalable than supporting a wide range of
database queries, particularly if we can use a CDN to distribute that
file. even though the database is optimised for supporting a range of
queries, there is a fairly significant overhead in redirecting every
XML-RPC request from the HTTP(S) server to the PHP (or whatever)
entity that performs the db query.

2) increase in the number of users - for a fixed number of slices and
nodes, a static file can be served repeatedly to a large number of
users without increasing the load on our server by using some CDN.

> Thought experiment: suppose we increase the number of slices by a
> factor of 10, the number of nodes by a factor of 10, and the number
> of users by a factor of 100. Suppose that the file is downloaded
> several times an hour by a significant fraction of those users
> (because their automated status tools do it). What is the dollar
> cost in bandwidth to Princeton or the Consortium for this traffic?

i don't know how Princeton or the Consortium gets billed for traffic
to/from, say, CoDeeN proxies, so i can't answer that question.

> Another question: does XMLRPC have a gzippable transport, or are we
> defining a new RPC protocol by compressing things?

i wasn't thinking of this as RPC, just a file download. if you're
asking a more general question then i defer to David Anderson's
response.

steve
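
[For concreteness, a minimal sketch of the static-file path described
above: a plain HTTP GET of a gzip-compressed file, with no XML-RPC
dispatch layer on the server side. The URL and filename are
hypothetical placeholders, not part of the proposal; the sketch
assumes only the Python standard library.]

    import gzip
    import urllib.request

    # hypothetical location of the published slice/node file; the
    # proposal does not specify a URL
    SLICES_URL = "https://www.planet-lab.org/xml/slices.xml.gz"

    # a single static GET - cacheable by any CDN or HTTP proxy, so
    # repeated downloads by many users need not touch the origin server
    with urllib.request.urlopen(SLICES_URL) as resp:
        xml_data = gzip.decompress(resp.read())

    # the client pays the (cheap) decompression cost; the server only
    # ever serves pre-compressed bytes
    print(xml_data[:200])

[On the gzip-transport question: standard HTTP already offers this in
principle via Content-Encoding: gzip, so a client sending
Accept-Encoding: gzip could receive a compressed XML-RPC response
without defining a new protocol; the sketch above sidesteps the issue
entirely by treating the data as a static download.]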