From: Torbjorn T. <to...@to...> - 2008-05-26 13:31:15
Lev Walkin wrote:
> Torbjorn Tornkvist wrote:
>> Lev Walkin wrote:
>>> Recently I had to create some sort of CGI load balancing. The request
>>> would come to some YAWS instance, be validated against certain
>>> criteria, and then go to an appropriate node where _that_ YAWS
>>> instance would execute a CGI.
>>>
>>> I tried doing it by creating a .yaws file on one node invoking
>>> rpc:call() to forward the incoming Arg#arg to another node's out()
>>> routine, which does the CGI execution.
>>
>> Hm...that's a little bit boring.
>>
>> It seems however that in: A#arg.state
>> you'll find: {cgistate, Worker}
>> where Worker is just a Pid.
>>
>> So when you do your dispatch on your master node you could look
>> at this state entry. If it contains this cgistate-tuple, dispatch
>> to: node(Worker)
>
> Torbjorn,
>
> am I understanding it right that "when you do your dispatch" is
> something pertaining to YAWS code, not the .yaws script's?
>
> The problem is that I need the code to be executed on the remote
> node, so dispatching it back to node(Worker) while being on my
> master node does not make any sense to me.

Hm...I just assumed that the Worker concept meant that successive
requests should go to the same node (the Worker node). But perhaps
there are other dependencies that put constraints on the executing
context. I don't know that much about this code. Perhaps Klacke can
propose a solution or a fix for it.

--Tobbe

>> Totally untested though...
>>
>> Cheers, Tobbe
>>
>>> I thought it would not work, but I wondered why. No surprise, it
>>> didn't work this way. Some "yaws workers" with particular indexes
>>> weren't found on the remote node, so the whole request would collapse.
>>>
>>> I ended up creating a revproxy=[] configuration to "attach" a remote
>>> node to a specific HTTP prefix, and then generating an internal
>>> redirect using {page, "/prefix/" ++ Arg#arg.querydata}.
>>>
>>> However, this is ugly. I would rather use some Erlang'ish trick to
>>> ask a remote node to do the work for me than use a revproxy, which
>>> doesn't even tell me when such forwarding terminates abnormally.
>>> I could get this with rpc:call() if not for that workers problem.
>>>
>>> The question is: is there a particular reason why Args couldn't
>>> be forwarded to another system? Is there a compelling reason to
>>> keep them this way? Wouldn't it be better if yaws instances
>>> weren't dependent on indexes into the worker pool (as I understand
>>> it) and instead used intra-node pids?
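
For the archives, the naive rpc:call() forwarding discussed above
amounts to something like the sketch below. It is untested, the names
cgi_dispatch and cgi_runner are made up, and, as Lev found, it fails in
practice because the forwarded #arg{} still carries worker indexes that
only exist on the originating node:

    %% Untested sketch; cgi_dispatch and cgi_runner are hypothetical
    %% names. The #arg{} record comes from yaws_api.hrl.
    -module(cgi_dispatch).
    -export([out/1]).

    -include_lib("yaws/include/yaws_api.hrl").

    out(Arg) ->
        %% Dispatch to the node of the worker pid stashed in #arg.state,
        %% falling back to the local node.
        Node = case Arg#arg.state of
                   {cgistate, Worker} when is_pid(Worker) -> node(Worker);
                   _                                      -> node()
               end,
        case rpc:call(Node, cgi_runner, out, [Arg]) of
            {badrpc, _Reason} ->
                %% Unlike revproxy, rpc:call/4 does report abnormal
                %% termination of the remote call.
                [{status, 502},
                 {content, "text/plain", "remote CGI execution failed"}];
            Result ->
                Result
        end.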
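
And the revproxy workaround Lev ended up with looks roughly like this
(hostnames, ports and the /prefix/ path are placeholders):

    # yaws.conf on the master node: "attach" the remote node to a prefix
    <server master.example.com>
            port = 8000
            listen = 0.0.0.0
            docroot = /var/yaws/www
            revproxy = /prefix/ http://worker.example.com:8000/
    </server>

with a .yaws script on the master generating the internal redirect:

    <erl>
    %% Internal redirect into the proxied prefix, as in Lev's message.
    out(Arg) ->
        {page, "/prefix/" ++ Arg#arg.querydata}.
    </erl>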