Thread: [SSI] Help
From: Qamer F. <qam...@ms...> - 2002-12-02 09:04:13
Hi,

I am a master's student in computer science doing my final project on process migration in Linux, and I would like some help and suggestions on how to go about implementing it. Please tell me how to migrate a process to a remote node, and how to handle the process after it has been transferred to the other machine.

Thanks in advance.
My email is qam...@ya...
From: Qamer F. <qam...@ms...> - 2002-12-02 10:50:27
Hi,

I am working on process migration in Linux. When a process migrates to the remote machine, how can I send/receive the tty I/O to/from the home node?

Thanks in advance. My email is qam...@ms...

Take care,
Regards,
Qamer Farooq
From: Aneesh K. K.V <ane...@di...> - 2002-12-02 11:22:58
On Mon, 2002-12-02 at 16:20, Qamer Farooq wrote:
> When a process migrates to the remote machine, how can I send/receive
> the tty I/O to/from the home node?

That's done by making /dev/tty clusterwide. That means the names are unique clusterwide and you can write to these devices from any node in the cluster. (drivers/char/tty*)

-aneesh
From: Brian J. W. <Bri...@hp...> - 2002-12-02 19:43:20
> When a process migrates to the remote machine, how can I send/receive
> the tty I/O to/from the home node?

If you have a tty open before migrating, you will have the same tty open after you migrate. Our process migration model is almost completely transparent to a process, so that unmodified apps can be migrated. Part of making it transparent is making sure the process still has its complete set of file descriptors (fds), and that each fd connects to the same file, device, pipe, or socket that it connected to on the original node. For devices, pipes, and sockets, this is done by setting up a special client file structure on the new node that function-ships file ops (read, write, ioctl, etc.) to the node where the object exists.

In a nutshell, you can just open the tty, migrate the process, and transparently continue to do I/O through the same tty.

Note that a "home" node is a Mosix-ism that doesn't apply to OpenSSI clustering. Under Mosix, a migrating process leaves behind its kernel portion on its home node. If the home node crashes, the process must die.

Under OpenSSI, a migrating process leaves behind no part of itself. It does leave a forwarding address for anyone trying to signal it, and it may have some objects (devices, pipes, sockets, etc.) open on the old node, but the process itself is completely migrated. If the old node crashes, the process continues running, although it will get errors if it tries to access an object that existed on the old node.

> That's done by making /dev/tty clusterwide. That means the names are
> unique clusterwide and you can write to these devices from any node in
> the cluster. (drivers/char/tty*)

Aneesh's response is correct if what you want to do is open a tty on another node. I think this feature is already available by opening the appropriate device under /devfs/nodeX.

-Brian
From: Aneesh K. K.V <ane...@di...> - 2002-12-03 05:28:35
On Tue, 2002-12-03 at 01:08, Brian J. Watson wrote:
> Aneesh's response is correct if what you want to do is open a tty on
> another node. I think this feature is already available by opening the
> appropriate device under /devfs/nodeX.

Now you will be wondering what that is. :) OK, I was wrong in my previous mail; I was under the impression it was handled by clusterwide devfs support. Let me explain what the above two methods are. Correct me if I am wrong.

1) SSI supports remote file operations. When an application migrates, the related resources that can't migrate, like sockets and ttys (which have a file_operations struct), are moved under the control of the remote file operations support. That means from any node, using the remote file operations service, I can call these file operations. So when the process migrates, the file_operations pertaining to an open descriptor that can't move to the new node are replaced with ones that make use of these remote file operations. This is how /dev/tty is supported.

2) Clusterwide devices. This is right now implemented as an extension to devfs, and is going to be replaced using CDSL and CFS. It makes all the devices attached to a node available clusterwide:

/devfs/node1/...
/devfs/node2/...

-aneesh
From: Qamer F. <qam...@ms...> - 2002-12-02 10:56:29
Hi,

I am doing my final project on process migration in Linux, and I want to know how I can provide the resources on the remote node after the migration. Please tell me how I can execute a process successfully on the remote node.

Thanks in advance.

Take care,
Regards,
Qamer Farooq [graduate student]
From: Aneesh K. K.V <ane...@di...> - 2002-12-02 10:17:41
Hi,

(I have sent some mails before tracing the call. See the archive.)

You can figure out all the magic by looking at the migrate() call (sys_migrate and dvp_migrate). :)

First, SSI supports an infrastructure for calling functions/procedures on another node. So when you issue a migrate call on node1 to migrate to node2, it is going to ask node2 to pull the process over from node1.

The core part is as_untranscribe() (cluster/ssi/vproc/as_xscribe.c:610). First it pulls the VMA information (as_do_vma) and does a local mmap at the same virtual address; this includes anonymous mappings. Then it goes and sets up the page tables (as_do_pg). I guess it is at this point that the real pages come in, but only for anonymous mappings. Mappings that have a file associated are not taken care of here: for those virtual addresses the process will page fault, and the pages are brought into memory by the page fault handler of the OS. This works because we have a cluster filesystem and all files are available on all nodes.

At this stage it even goes and fetches pages residing in swap on the remote node. (This is different from Mosix; I guess Mosix does this on demand. But here we are no longer dependent on the origin node after a migrate, whereas in Mosix we are. Because of this, in SSI the origin node can go away after a migrate.)

After that, on node2 you just set up the stack (this is where the architecture dependency comes in) and return from the kernel as if returning normally from the system call (ret_from_rproc).

Now, how are the resources handled? That is what the SSI code is fully about.

All the best in hacking!!!! :)

-aneesh

On Mon, 2002-12-02 at 14:34, Qamer Farooq wrote:
> I am a master's student in computer science doing my final project on
> process migration in Linux, and I would like some help and suggestions
> on how to migrate a process to a remote node, and how to handle the
> process after it has been transferred to the other machine.
From: <mar...@pv...> - 2002-12-02 11:44:37
Quoting "Aneesh Kumar K.V" <ane...@di...>:
> First, SSI supports an infrastructure for calling functions/procedures
> on another node. So when you issue a migrate call on node1 to migrate
> to node2, it is going to ask node2 to pull the process over from node1.

Thanks for a very informative explanation of migration on SSI.

How is the migration call triggered? Does the application have to do it by itself (and must thus be specially written for SSI migration), or can the system force migration on standard applications (so they can remain unchanged from their non-SSI version)?

In other words: do I need to write/link my applications in a special way in order to benefit from the migration? (Compared to (open)Mosix, where applications are migratable without any modifications.)

Regards,
Martin

--
"Computer science is not about computers any more than astronomy is about
telescopes." -- EW Dijkstra