From: Travis A. <ia...@gm...> - 2006-06-20 22:37:25
Thanks for all the replies. I am having a problem with migrating processes
manually right now. I started nano and tried to migrate it. The stay file is
empty. I echoed "192.168.0.205" (my other node's IP) into the where file, but
the where file still says "home". Is this happening because the two nodes were
compiled with different CPU types (Celeron [Coppermine] and Celeron
[pre-Coppermine])?

On 6/20/06, Niels de Vos <nix...@us...> wrote:
>
> Hi Travis and others,
>
> Some months ago I started on 'om-nixus'. Unfortunately this became
> too complex and unmaintainable... As a replacement I've simplified my
> attempts and written some other Python code. It is a completely
> different approach compared with the official userspace-tools from
> David Santo Orcero and José María Gómez - not related at all.
>
> My daemons are just some fun with Python while trying to develop
> something useful at the same time. I've always liked openMosix and
> hope to use/help it in the future.
>
> For now I'm unable to test anything on a real cluster. Development
> takes place on my 5-year-old laptop, which is not really capable of
> running virtual machines. Running an om-kernel is of course not a
> problem. It is difficult to test the daemons; my random migration lets
> some processes crash. On the other hand, it is also not the latest
> om-kernel, which should have the most important bugfixes.
>
> Anyway, I hope you (or someone else) can test my effort and give some
> feedback... Again, they're not related to the official openMosix
> project.
>
> If you have any questions, don't hesitate to mail me. I rarely have
> time to implement more features, but replies to mail should be
> possible.
>
> Hope it helps you a bit, thanks,
> Niels de Vos
>
> [ Sources in SVN, see bottom of this mail ]
>
> --- from the README ---
>
> Design:
>   One master node/process which controls process migration to other
>   nodes. 'omd' is the main daemon, 'omd-dumb' is run on the nodes for
>   sending StatusMsg and accepting/denying process migration.
>
>   'omd' asks an 'omd-dumb' to accept a process migration. The
>   MigrationRequest contains /proc/<pid>/stat in a dict(). This
>   MigrationRequest gets checked by 'omd-dumb' and a MigrationReply is
>   sent back to 'omd'. When MigrationReply.accept=True, 'omd'
>   starts migrating the process.
>
>   When a new node (omd-dumb) joins the cluster, it sends StatusMsg-s.
>   The first StatusMsg is answered (by omd) with an InfoRequest. An
>   InfoReply is sent by the new node, which contains /proc/cpuinfo and
>   /proc/meminfo. The received information is stored by omd and can be
>   used by the Scheduler.
>
> Parameters for omd and omd-dumb:
>   -v, --verbose: enable complete logging (logging.DEBUG)
>   -l, --listen=: listen on this IP-address
>
> Status (as of $LastChangedDate: 2006-06-17 21:41:19 +0200 (Sat, 17 Jun
> 2006) $):
> o omd-dumb sends StatusMsg, omd adds/overwrites them in a dict
> o negotiation of process migration implemented
> o Scheduler with Simple Random Algorithm (rsched.py)
> o Scheduler contains a blacklist for stayreasons
>
> Todo (in no particular order):
> o libomd.omCheck() with version/compatibility checking
> o format the loggers more nicely (show at least the name of the logger)
> o implement smarter scheduler(s)
> o command option for scheduler selection
> o combine omd.py and omd-dumb.py into one process? Multi-Master
> o use StatusMsg.timestamp to detect dead nodes - a new thread?
> o check IPv6 support (depends mainly/only on Python?)
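For anyone curious, the negotiation described in the Design section above
could look roughly like this in Python. The message names (MigrationRequest,
MigrationReply.accept) and the /proc/<pid>/stat-in-a-dict idea come from the
README; the transport (pickle over TCP), the port number and the helper names
are my own guesses, not Niels' actual code:

    # Sketch only: message names from the README, everything else assumed.
    import os
    import pickle
    import socket

    def read_stat(pid):
        # /proc/<pid>/stat is one line of space-separated fields; keep it
        # as a dict keyed by field position, as the README describes.
        with open("/proc/%d/stat" % pid) as f:
            return dict(enumerate(f.read().split()))

    class MigrationRequest:
        def __init__(self, pid):
            self.pid = pid
            self.stat = read_stat(pid)

    class MigrationReply:
        def __init__(self, accept):
            self.accept = accept

    def negotiate(node, pid, port=4660):
        # omd side: ask the omd-dumb on 'node' to take process 'pid'.
        # Both ends must import the same message classes for pickle to work.
        sock = socket.create_connection((node, port))
        try:
            sock.sendall(pickle.dumps(MigrationRequest(pid)))
            return pickle.loads(sock.recv(4096)).accept
        finally:
            sock.close()

    def check(request):
        # omd-dumb side: a trivial accept/deny rule based on local load.
        return MigrationReply(accept=os.getloadavg()[0] < 1.0)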
>
> Implemented values for logging.getLogger():
> o migration
> o scheduler
> o omd
> o libomd
> o msg
>
>
> --- Get the sources ---
> svn co
> http://nixbox.nixpanic.net/repos/dev/om-userspace/trunk/simplified

-- 
From: Travis Athougies
2 + 2 = 4
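P.S. the named loggers listed above make it easy to filter output per
subsystem. A small sketch of how they could be wired up so the logger name
shows in each line (one of the README's Todo items); only the logger names
and the -v/--verbose behaviour come from the README, the format string is
my own assumption:

    import logging

    logging.basicConfig(
        level=logging.DEBUG,  # what -v/--verbose enables, per the README
        format="%(asctime)s %(name)-9s %(levelname)-8s %(message)s")

    logging.getLogger("scheduler").debug("random scheduler picked a node")
    logging.getLogger("migration").info("negotiating with a remote omd-dumb")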