[Geneticd-devel] Queuing commands to engine
From: Jonny M. <jon...@ni...> - 2002-01-14 22:27:43
Pre scriptum: I am writing this while reading incoming mail from Steeve.
Steeve's last letter, together with the new black-box approach to engine
programming, gave me an idea worth considering.
At the moment, the engine model is synchronous with the connection: an order is
given, and the connection is put in a wait state until the order has been
carried out. When I designed this model, I had in mind a user sitting at the
keyboard, waiting for the daemon to complete its operations. Also, in the
beginning I did not have a "block" command, which halts the engine and restores
the previous turn's status, so I did not take it into account.
Now it is possible to queue all the commands that require exclusive access to a
coherent population:
- All querying commands will operate on the saved state of the population while
the turn is running.
- All updating commands will be queued; as soon as the turn finishes, the
environment will be locked by the engine, each queued command will be executed
on the newly created population, and finally the resulting population will be
stored into the saved state and a new turn will begin.
This brings us to the conclusion that there will be no need to lock the
environment, since it will be managed by only one thread. Let's summarize it
with this scheme:
  <command 1>                              Engine Queue
       |                                -------------------
       V                                    Command 1
  <command 2> ---> [Engine]                 Command 2
                      |                         |
                      V                         V
          [Env] * (running) ---> [Env] * (turn finished)
                      |
                      V
  <query command> ---> [ENV saved state] ---> Query result
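
To make the idea more concrete, here is a minimal sketch of how the queuing
model could look in code. It is only an illustration: the names (Engine,
queueUpdate, endTurn and so on) are invented for this mail and are not part of
the current daemon.

#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <vector>

// Stand-in for the real environment/population.
struct Population {
    std::vector<std::string> agents;
};

class Engine {
public:
    // Any connection thread: enqueue a command that modifies the population.
    void queueUpdate(std::function<void(Population&)> cmd) {
        std::lock_guard<std::mutex> g(queueMutex_);
        pending_.push(std::move(cmd));
    }

    // Any connection thread: query the saved (previous turn) state.
    Population querySnapshot() const {
        std::lock_guard<std::mutex> g(snapshotMutex_);
        return saved_;                      // copy of the coherent state
    }

    // Engine thread only, when the turn finishes: apply the queued commands
    // to the live population, then publish it as the new saved state.
    void endTurn(Population& live) {
        std::queue<std::function<void(Population&)>> toRun;
        {
            std::lock_guard<std::mutex> g(queueMutex_);
            std::swap(toRun, pending_);
        }
        while (!toRun.empty()) {            // only this thread touches 'live',
            toRun.front()(live);            // so the environment needs no lock
            toRun.pop();
        }
        std::lock_guard<std::mutex> g(snapshotMutex_);
        saved_ = live;
    }

private:
    std::queue<std::function<void(Population&)>> pending_;
    Population saved_;
    mutable std::mutex queueMutex_;
    mutable std::mutex snapshotMutex_;
};

int main() {
    Engine engine;
    Population live{{"alpha", "beta"}};

    engine.queueUpdate([](Population& p) { p.agents.push_back("gamma"); });
    std::cout << "agents visible before turn end: "
              << engine.querySnapshot().agents.size() << "\n";   // still 0
    engine.endTurn(live);
    std::cout << "agents visible after turn end:  "
              << engine.querySnapshot().agents.size() << "\n";   // now 3
}

The only lock left is the one around the command queue itself; the environment
is touched by the engine thread alone, exactly as described above.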
Now, this could lead to a potential problem: the effect of changing the engine
won't be visible until the end of the turn, and a client has no way to know
when that happens unless there is a "ticker" on the engine that says when the
turn is over. This can be done with a sort of "short" feed, which leads us to
two other considerations:
- we need more than one format of feed;
- we need more than one feed per engine (one per connection should be fine).
But in my opinion these are all feasible issues, and with a small effort.
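
Just to illustrate the ticker idea (none of this syntax exists yet, it is only
a possible shape for the "short" feed), the engine could push a single line on
that feed whenever a turn completes, something like:

    TURN 42 COMPLETED

so a client that has queued some updating commands only needs to watch the
short feed on its own connection to know that its commands have been applied,
while the richer feeds keep streaming the full data.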
There is another problem. At the moment, agents can be referenced both by their
symbolic name and by their position in the population. After a turn has
completed, the position of an agent can change, so commands that reference a
certain agent and modify the engine (e.g. dage, delete agent) can "miss their
target" and hit another agent. We could handle that situation in three ways:
1) This kind of command can be issued only when the engine is not running; this
makes some sort of sense to me...
2) Agents can be identified only by their symbolic name. This would also
simplify remote parallel processing (imagine that we have to coordinate an
array that is changing and that is spread across several asynchronous
machines... that is a programming nightmare). With the name, everything is
simpler: I tell the engine to kill THAT agent, and the request can be "routed"
through the cluster until it reaches the server holding that agent. Then, if
the agent is still alive, it will be killed; if not, nothing bad can happen.
This has only one significant drawback: clients wishing to iterate through the
population must always have a complete list of agent names. Imagine having
something like 10,000 agents in our cluster... that is also a programming
nightmare.
3) We can have both. Commands referring to agent names can be issued at any
moment, while commands referring to an agent's position can be issued only when
the whole distributed population is stopped and in sync (a small sketch follows
this list).
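
Here is a small sketch of what option 3 could mean at the data-structure level.
The names (PopulationView, findByName, byOrdinal) are invented for this mail;
take it as an illustration of the two addressing modes, not as existing code.

#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct Agent {
    std::string name;   // the symbolic name, stable across turns
    // genome, fitness, ...
};

struct PopulationView {
    std::vector<Agent> agents;                  // order may change every turn
    std::map<std::string, std::size_t> byName;  // name -> current position

    // Safe at any moment: either the agent is still there, or it is not
    // and the caller is simply told so.
    const Agent* findByName(const std::string& n) const {
        auto it = byName.find(n);
        return it == byName.end() ? nullptr : &agents[it->second];
    }

    // Only meaningful while the population is stop-locked: the position
    // becomes stale as soon as the next turn completes.
    const Agent* byOrdinal(std::size_t i) const {
        return i < agents.size() ? &agents[i] : nullptr;
    }
};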
The third solution seems the most reasonable to me. It makes sense that when
you want to iterate through the population (e.g. when a client program wants to
store the whole population locally), the population must stay "fixed" as it is.
The problem is not so pressing when you want to deal with exactly THAT agent;
the agent could be removed in the meanwhile, but that is not important: the
client will simply be told that the agent does not exist anymore. The only
problem with this solution is that, while you iterate through a stopped
population, you don't want someone else to start it again.
This can be obtained in this way: when referring to agents by their ordinal
number, if the population is not local (i.e. it is running in a parallel
environment), you need to obtain a stop-lock. That is: the population must be
stopped at all its points, and all its slave servers must be told not to accept
orders to start it (or modify it) again; moreover, the slaves must have replied
that they stopped and locked the population (a kind of "roger" reply). The
client removes the stop-lock when it has finished its operations. The only
connection allowed to remove the lock will be the one that originated it, or
another one opened by the same user (useful in case the first connection fails,
or the client messes something up).
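
A rough sketch of the stop-lock idea, again with invented names (SlaveServer,
StopLock, stopAndLock, ...): the coordinator stops and locks the population on
every slave, waits for all the "roger" replies, and only the user that took the
lock may release it. The slave calls are stubbed out here.

#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

struct SlaveServer {
    // In the real daemon these would send the stop/lock order over the wire
    // and wait for the slave's "roger" reply; stubbed for the sketch.
    bool stopAndLock()     { locked = true;  return true; }
    void unlockAndResume() { locked = false; }
    bool locked = false;
};

class StopLock {
public:
    StopLock(std::vector<SlaveServer*> slaves, std::string owner)
        : slaves_(std::move(slaves)), owner_(std::move(owner)) {
        for (auto* s : slaves_) {
            if (!s->stopAndLock()) {   // a slave refused or timed out:
                rollback();            // undo the partial lock and give up
                throw std::runtime_error("could not stop-lock the population");
            }
            acquired_.push_back(s);
        }
    }

    // Only the connection that took the lock, or another connection of the
    // same user, may release it (the "user" is just a string here).
    void release(const std::string& requester) {
        if (requester != owner_)
            throw std::runtime_error("stop-lock owned by someone else");
        rollback();
    }

    ~StopLock() { rollback(); }        // safety net if the client vanishes

private:
    void rollback() {
        for (auto* s : acquired_) s->unlockAndResume();
        acquired_.clear();
    }

    std::vector<SlaveServer*> slaves_;
    std::vector<SlaveServer*> acquired_;
    std::string owner_;
};

While the lock is held, ordinal-based commands are safe; releasing it (or the
destructor, if the owning connection dies) hands the population back to the
engines.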
That's all for now. I have an idea on how to queue the commands in the engines,
but I'll use another mail to describe it.
Wow, if you have reached this line, you're really great genetic programmers!
Giancarlo