[Disclaimer: I wrote this in two sittings, so apologies for typos/discrepancies.]

You're right that one process for purging expired sessions is enough - no need for extra collisions :)
Using a supervisor, though, would introduce a single point of failure... and a bottleneck.
Here's an alteration to your suggestion, then. Note that I haven't used mnesia in a distributed environment yet (so I'm relying on your experience in this matter), and I'm assuming that gen_leader (from jungerl) works well. If it doesn't, then implementing something close or "good enough" (where we might occasionally get several concurrent active leaders) should be fine.

First of all, to simplify matters I am also going to make the following assumptions:
1) We keep all session tables replicated among the yaws servers. I.e., yaws server = session server.
2) All session state/vars etc. are kept in ram_copies. Persistent state might have gotchas I'm not aware of. In short, our cluster runs "forever", since it's so very very robust. :)
3) Our session IDs (SID) are passed via the URL. No cookies.
4) Generated SIDs are never repeated, and the SID space is unique per generating node (use node name + date + some randomized component, etc.). If you know a bit of magic to do this, please share :) One strawman follows.
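
Here's that strawman (make_sid/0 is a name I just made up, not an existing API). It leans on the fact that erlang:now/0 is guaranteed to return strictly increasing values on a given node, so node() plus now() never repeats; the random tail only makes SIDs harder to guess (seed random per process via random:seed/3 if that matters):

make_sid() ->
    {Mega, Sec, Micro} = erlang:now(),
    %% node() + a strictly increasing timestamp guarantees uniqueness;
    %% the random suffix is just obfuscation, not a uniqueness mechanism.
    lists:flatten(io_lib:format("~s-~w.~w.~w-~w",
                                [node(), Mega, Sec, Micro,
                                 random:uniform(1 bsl 32)])).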

Now, to complicate matters once more:
1) I'm assuming semantics where each connection process uses an API for session access in which the entire session data/object is read into the process (acquisition) and then atomically written back to the store when it is done. Looking at some servlet documentation, this seems like a valid approach (completely locking the session data also introduces other complications, essentially distributed lock management, which might be complex in this specific case since the session reaping process(es) would also access the same structures). Of course, in the EJB universe practically EVERYTHING is open to interpretation, but I made my research more rigorous by reading through some independent magazine ARTICLES :)
2) A process group (via gen_leader) is created for purging/reaping expired sessions.
Each member of this group will be a registered per-node process whose lifetime is longer than that of yaws. This means that whenever a reaper dies, its siblings can assume that the entire node has gone down.
3) Loosely, we define the following data structures (some refinement might be in order):

%%=============================================================================
%% Replicated tables.
%%=============================================================================

%%% session_lifetime (set): Also used for quick session lookup.
%%  K: sid().
%%  V: time() | list(reaper_pid()) (one per attached connection).

%%% reaper_session (bag):
%%  K: reaper_pid().
%%  V: sid().

%%% expiration_times (ordered_set):
%%  K: {time(),sid()}.

%%% session_data (set):
%%  K: sid().
%%  V: proplist() of session variables.

%%=============================================================================
%% Reaper internal.
%%=============================================================================

%%% connections (set):
%%  K: conn_pid().
%%  V: list(sid()).

%%=============================================================================
%% Connection internal.
%%=============================================================================

%%% PD entries:
%%  sessions: list(sid()).
%%  {session, sid()}: dict() or proplist() containing the session variables.
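
If it helps to see these in mnesia terms, the declarations might look like this (a sketch written from memory; note the filler field in expiration_times, which is there only because mnesia insists on records with at least two fields):

-record(session_lifetime, {sid, val}).          %% val :: time() | [reaper_pid()]
-record(reaper_session,   {reaper, sid}).
-record(expiration_times, {key, unused = []}).  %% key :: {time(), sid()}
-record(session_data,     {sid, vars = []}).    %% vars :: proplist()

create_tables(Nodes) ->
    mnesia:create_table(session_lifetime,
        [{ram_copies, Nodes}, {type, set},
         {attributes, record_info(fields, session_lifetime)}]),
    mnesia:create_table(reaper_session,
        [{ram_copies, Nodes}, {type, bag},
         {attributes, record_info(fields, reaper_session)}]),
    mnesia:create_table(expiration_times,
        [{ram_copies, Nodes}, {type, ordered_set},
         {attributes, record_info(fields, expiration_times)}]),
    mnesia:create_table(session_data,
        [{ram_copies, Nodes}, {type, set},
         {attributes, record_info(fields, session_data)}]).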


Several things to note here:
- Each connection process keeps track of attached sessions via its process dictionary.
- Each reaper knows which processes are attached to which sessions in its node.
- A session is in use when its session_lifetime entry contains a pid list rather than a time() value.
- Each reaper is implicitly aware (via gen_leader and its configuration) of the set of active and defunct reapers in the cluster.
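
To make the reaper side concrete, here's roughly what its heart might look like, with the gen_leader plumbing elided (all names are mine, and force_detach/1 is a stub for the detachment transaction discussed below):

reaper_init() ->
    register(session_reaper, self()),
    reaper_loop(dict:new()).            %% conn_pid() -> [sid()]

reaper_loop(Conns) ->
    receive
        {attach, Conn, Sid} ->
            %% Repeated attaches create duplicate monitors; the extra
            %% 'DOWN' messages are harmless (they find no sessions).
            erlang:monitor(process, Conn),
            reaper_loop(dict:append(Conn, Sid, Conns));
        {detach, Conn, Sid} ->
            Rest = lists:delete(Sid, dict:fetch(Conn, Conns)),
            reaper_loop(dict:store(Conn, Rest, Conns));
        {'DOWN', _Ref, process, Conn, _Reason} ->
            %% The connection died without detaching: do it on its
            %% behalf, in a helper process so the reaper stays snappy.
            Sids = case dict:find(Conn, Conns) of
                       {ok, L} -> L;
                       error -> []
                   end,
            spawn(fun() -> lists:foreach(fun force_detach/1, Sids) end),
            reaper_loop(dict:erase(Conn, Conns))
    end.

force_detach(_Sid) ->
    ok.  %% placeholder: runs the release transaction described below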

Outline of operational logic:
- When a connection process wishes to acquire/attach a session, it first sends a notification to its local reaper, then performs a transaction which adds itself to the appropriate tables and retrieves the session data (writing it to its PD); a sketch follows this list. Upon receipt of the notification, the reaper starts monitoring the connection process. Note that the reaper may have the process written down for several sessions.
- Release/writeback of a session is done in reverse. When the last connection process detaches from the session data and the session should live on, it sets the session's expiry time; otherwise it erases the session.
- If a connection process exits without properly detaching from its sessions, the reaper will be notified via a monitor message and can do the detachment bit itself (or spawn a process to do it, so it maintains good responsiveness).
- If a reaper process goes down, the lead reaper (the Grim one, with the scythe) will go over all sessions marked by that reaper and perform the appropriate detachment operations.
- To take care of a double failure or worse, where both a non-leader and the leader go down, the lead reaper can perform periodic cleanup, as it knows which reapers are active and which are not.
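
Here's the promised sketch of the acquisition step (attach_session/1 is my invention; I'm writing mnesia records as raw tuples matching the tables above):

attach_session(Sid) ->
    Reaper = whereis(session_reaper),
    Reaper ! {attach, self(), Sid},     %% reaper starts monitoring us
    F = fun() ->
            case mnesia:read({session_lifetime, Sid}) of
                [] ->
                    mnesia:abort(no_such_session);
                [{session_lifetime, Sid, Pids}] when is_list(Pids) ->
                    %% Already in use: one reaper pid per connection.
                    mnesia:write({session_lifetime, Sid, [Reaper | Pids]});
                [{session_lifetime, Sid, Expiry}] ->
                    %% Idle session: cancel its pending expiry.
                    mnesia:delete({expiration_times, {Expiry, Sid}}),
                    mnesia:write({session_lifetime, Sid, [Reaper]})
            end,
            mnesia:write({reaper_session, Reaper, Sid}),
            [{session_data, Sid, Vars}] = mnesia:read({session_data, Sid}),
            Vars
        end,
    case mnesia:transaction(F) of
        {atomic, Vars} ->
            put({session, Sid}, Vars),
            Old = case get(sessions) of undefined -> []; L -> L end,
            put(sessions, [Sid | Old]),
            ok;
        {aborted, Reason} ->
            Reaper ! {detach, self(), Sid},  %% undo the notification
            {error, Reason}
    end.

Release/writeback would be the mirror image: write session_data back, drop one reaper pid from session_lifetime, and when the list empties either store the new expiry time (plus an expiration_times entry) or delete all of the session's rows.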

The idea was to let the connection processes do their own work as much as possible and avoid bottlenecks such as going through a global or even per-node process for accessing/manipulating the sessions. Reaper processes must access mnesia as little as possible to achieve that, I think.
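
And since the lead reaper's cleanup keeps coming up, its sweep over expiration_times could look roughly like this (again a sketch, assuming expiry times are now/0-style tuples, which compare correctly under plain term order):

%% expiration_times is an ordered_set keyed by {time(), sid()}, so
%% expired entries sit at the front and we can stop at the first live one.
reap_expired() ->
    Now = erlang:now(),
    mnesia:transaction(
      fun() -> sweep(mnesia:first(expiration_times), Now) end).

sweep({Time, Sid} = Key, Now) when Time < Now ->
    Next = mnesia:next(expiration_times, Key),
    mnesia:delete({expiration_times, Key}),
    mnesia:delete({session_lifetime, Sid}),
    mnesia:delete({session_data, Sid}),
    sweep(Next, Now);
sweep(_EndOfTableOrLiveKey, _Now) ->
    ok.

Whether to run this as one big transaction or one per session is an open question - batching is atomic but holds locks longer.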

Sorry this got a little long... but it seems like it's a relatively nontrivial problem in any case.

Thoughts... if I haven't bored you too much? :)


On 7/19/06, Yariv Sadan <yarivvv@gmail.com> wrote:
Alex, I think you're thinking on the right track. Yaws currently
automatically expires sessions that are stored in its internal ets
table. The code is rather simple, in yaws_session_server.erl. However,
this mechanism is too simplistic for replicated, distributed session
data, because you don't want all Yaws instances to be responsible for
expiring sessions. One instance is enough.

A better approach, similar to what you suggested, would be to create a
"super" Yaws supervisor that supervises the whole Yaws farm. This
supervisor will also be responsible for creating the Mnesia session
store and for purging expired sessions.

This supervisor could also provide a simple API for Yaws to update
and query session state, or it could be done directly with Mnesia
functions.

Session state could be stored on the same nodes that run Yaws or on
different nodes altogether. This can be configured depending on the
application's needs (similar to a Mnesia schema).

What do you think?

Yariv




>
>
> This is the option that I'm currently exploring, as it seems to be more
> generic in nature.
> However, there is the problem of expired sessions, specifically the case
> where a user started a session but for some reason or other never got around
> to "closing" it.
> I think a per-node Session Manager process would be necessary for this,
> where the connection process (at the start of out()) will notify it that it
> is using the session, and would manage locking of the session object. To
> complete the picture, session scrubbing processes should activate
> periodically and remove sessions that are expired. Distributing this might
> be done by another table (ordered set) whose key is expiration time.
>
> Thoughts?