On Wed, 2001-11-28 at 18:10, Jack Moffitt wrote:
> Was there a reason this was a direct reply :)
Nope, just hit the wrong key :) And then I still forgot to put the list
back on the CC.
> > > It's simple, but not complete. Bookmarks would be for specific servers,
> > > not the main machine (unless you allowed them to just bookmark the main
> > > page).
> > Well, the bookmarks are pretty harmless -- but it would be a problem
> > when there was a link to a specific machine from some
> > high-traffic-generating site.
> > One way to deal with this would be to catch requests whose
> > HTTP_REFERER isn't local and redirect them randomly. You'd
> > still have a lot of incoming traffic to a single server, but later
> > connections would be balanced. (Build this into the adapter and it
> > shouldn't be a problem at all)
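To make that concrete, here's a rough sketch of the referer check (the
hostnames and suffix are made up; in practice the pool would live in the
adapter's config):

```python
import random

# Hypothetical pool of per-machine hostnames -- invented for the example.
SERVER_POOL = ["www1.example.com", "www2.example.com", "www3.example.com"]
LOCAL_SUFFIX = "example.com"

def pick_redirect(referer):
    """Return a pool member to redirect to when the referer is external,
    or None when the visitor is already browsing one of our machines."""
    if referer and LOCAL_SUFFIX not in referer:
        # Outside link (the high-traffic-site case): scatter the load
        # by sending the visitor to a random member of the pool.
        return random.choice(SERVER_POOL)
    return None  # local traffic stays where it is
```

So the first hit from the outside link still lands on one box, but the
redirect it gets back spreads everything after that.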
> One of the major reasons for splitting over several boxes is to allow
> one or several boxes to fail without losing services. Typically in a
> round-robin dns fashion (or a slightly smarter one like you described),
> you won't get this benefit. A bookmark to a dead server is useless. A
> bookmark to the main page would still work.
> In my opinion there should be an easy way to do both.
Well, if you are already doing fancy DNS stuff, you could have the DNS
handle failures by redirecting everything from the failed server to a
live one. All of these systems generally have a critical server doing
the redirection, though, so there's always that point of failure.
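The probing part of that is simple enough -- something like this, just
checking whether each box still accepts connections (a real DNS-based
setup would update zone records with the result rather than return a
list):

```python
import socket

def live_servers(pool, port=80, timeout=1.0):
    """Filter a server pool down to the hosts that still accept TCP
    connections. A minimal health-check sketch; the pool and port are
    whatever your setup uses."""
    alive = []
    for host in pool:
        try:
            sock = socket.create_connection((host, port), timeout=timeout)
            sock.close()
            alive.append(host)
        except OSError:
            # Refused or timed out -- treat the box as down.
            pass
    return alive
```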
> > > Also, if you left and came back, you might get to a different machine,
> > > which means for some applications the number of times you have to login
> > > would increase.
> > This doesn't seem like it should happen. The only way would be if you
> > left the site and came back through the front page. Most people
> > wouldn't be surprised if they had to relogin in this case -- it might
> > even be best that they do, since their session really has ended, even if
> > they haven't closed their browser.
> It depends. I suppose you could also solve this at the application
> level by using cookies and restarting a session when you come back.
> Many sites do this, like Amazon, etc. I hate logging into sourceforge
> every time, so being able to persist state like that is important, but
> you may have a point that the 'session' level might not be appropriate.
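The nice thing about doing it at the application level is that a signed
long-lived cookie lets any box in the pool restart the session without
shared state. A sketch (the secret and cookie layout here are just
illustrative):

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side key, shared by every box in the pool.
SECRET = b"server-side-secret"

def new_session_cookie():
    """Mint a session id and sign it, Amazon-style 'remember me'."""
    sid = secrets.token_hex(16)
    sig = hmac.new(SECRET, sid.encode(), hashlib.sha256).hexdigest()
    return "%s.%s" % (sid, sig)

def valid_session(cookie):
    """Any box can verify the cookie locally -- no shared session store."""
    try:
        sid, sig = cookie.split(".", 1)
    except ValueError:
        return False
    want = hmac.new(SECRET, sid.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, want)
```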
> > > There is an easy way (i think) to complete this easy solution, and that
> > > is to make the webware adaptor smart. Enable it to be the front end to
> > > a pool of servers, and use a cookie or something to store which server
> > > that user should be using, and direct requests to it.
> > That would work... but it depends on the memory and CPU usage of
> > the adapter. Anyone know what kind of performance mod_webkit has in
> > that way? It's fast, but I don't know how much memory it needs for a
> > connection.
> I don't think it takes very much. Memory isn't really even an issue in
> these days of 1GB of ram costing about $120. Of course, the more memory
> at the adapter level you want to sacrifice, the more complex the job it
> can do I guess, like caching :)
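For the cookie-pinning idea, the routing logic in the adapter could be as
small as this (pool names invented for the example; the adapter would
translate the set-cookie value into an actual Set-Cookie header):

```python
import random

# Hypothetical backend pool -- invented names.
POOL = ["app1.internal:8080", "app2.internal:8080"]
COOKIE_NAME = "wk_server"

def route(cookies):
    """Pick a backend for a request, pinning the user via a cookie.

    `cookies` is a dict of the request's cookies. Returns (backend,
    set_cookie): set_cookie is the value to send back to the client,
    or None if the existing pin is still valid."""
    pinned = cookies.get(COOKIE_NAME)
    if pinned in POOL:
        return pinned, None            # stick with the pinned server
    backend = random.choice(POOL)      # new visitor, or the pin went stale
    return backend, backend            # tell the client to remember it
```

A stale pin (server removed from the pool) just falls through to a fresh
random pick, which also covers the failed-box case from above.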
> The JSP servers have solved, or at least are attempting to solve,
> these problems in a number of ways, and I think it would be a good idea
> to find out what they are doing :)
I remember reading a paper from someone involved in etoys.com, and they
used a lot of open source stuff to do both load balancing and caching.
Most of it was with various Apache modules, and seemed fairly easy to
translate to any other dynamic source (I think they were using PHP).
They were talking about storing sessions on a shared server accessed
over NFS, but that seemed kind of silly to me -- now you have this
beefy, critical file server ready to become a bottleneck. Avoiding that
is why multiple domain names seem a lot better to me. Then all you need
is the beefy, critical database server.