From: Vigneswaran R <vi...@at...> - 2010-06-11 12:52:33
|
Dear Kim-san,

Thank you for the information. We will explore this, and if we have any doubts we will get back to you.

Regards,
Vignesh
From: 金 成昊 <sun...@hi...> - 2010-06-11 11:30:41
|
Dear Vigneswaran-san,

> How are you? Hope your vacation was good.

I apologize for the delay in the overall schedule. I will try to catch up on it.
Actually, I have been very tightly occupied with other business. Anyway, I will do my best.

> In the meantime, could you please brief us on the overall picture of how
> you are planning to add KVM support? This will help us in
> understanding the code better.

Here is a brief outline.

1. Qemu starts the VM.
2. The VM probes the guest vsp.
3. The guest vsp initializes and maps the ioport (communication channel) of the vsp in qemu (you can see "ioremap").
4. KVM switches to Qemu, and the vsp in qemu is notified by the guest vsp's ioremap.
5. The vsp in qemu remaps the io handler with its own handler.
6. The vsp in the guest prepares the shared ring (vring) and sends the info via "vq->kick".
7. The vsp in qemu writes the module image to the vring.
8. Then the vsp in qemu injects an irq via KVM.
9. The vsp in the guest reads the image from the vring.
10. The vsp in the guest injects the module into the guest kernel.
11. The modules in the guest gather probe info.
12. The vsp writes the probe info onto a scatter-list and adds the list to the vring.
13. The vsp in the guest then sends the info via "vq->kick" again.
14. The vsp in qemu reads the probe data from the vring.
15. The vsp in qemu writes the data into the host relay via the vsp module in the host kernel.
16. Then cimha is notified that some probe data was written to the relay file "/sys/debug/kernel/..../xxxprobe".

Any questions are welcome.

Thank you.
Best Regards,
Kim

--
---
Sungho KIM
Linux Technology Center
Hitachi, Ltd., Systems Development Laboratory
E-mail: sun...@hi...
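A minimal sketch of steps 6, 12 and 13, added for illustration and not part of Kim's mail: it mirrors the do_req() path of the virtio_vsp.c posted later in this thread, using the same vq_ops interface and the virtio_echo_outhdr layout from virtio_vsp.h. The function name my_send_probe_data() and the fixed three-entry scatter-list are assumptions; error handling is omitted.

/*
 * Illustrative only: how a guest-side request is placed on the vring
 * (steps 6, 12 and 13 above), following do_req() in the virtio_vsp.c
 * posted later in this thread.  my_send_probe_data() is a hypothetical
 * name; hdr/data/status follow that driver's layout.
 */
#include <linux/virtio.h>
#include <linux/scatterlist.h>
#include "virtio_vsp.h"

static bool my_send_probe_data(struct virtqueue *vq,
                               struct virtio_echo_outhdr *hdr,
                               void *data, size_t len, u8 *status)
{
	struct scatterlist sg[3];

	sg_init_table(sg, 3);
	sg_set_buf(&sg[0], hdr, sizeof(*hdr));       /* request header (out) */
	sg_set_buf(&sg[1], data, len);               /* probe payload  (out) */
	sg_set_buf(&sg[2], status, sizeof(*status)); /* status byte    (in)  */

	/* step 12: add the scatter-list to the shared ring (2 out, 1 in) */
	if (vq->vq_ops->add_buf(vq, sg, 2, 1, data))
		return false;

	/* step 13: notify the vsp in qemu */
	vq->vq_ops->kick(vq);
	return true;
}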
From: Vigneswaran R <vi...@at...> - 2010-06-11 09:19:47
|
Dear Kim-san,

How are you? Hope your vacation was good.

We are going through the Vesper KVM code that you have sent. We are eagerly waiting for the remaining code.

In the meantime, could you please brief us on the overall picture of how you are planning to add KVM support? This will help us in understanding the code better.

Thank you.

Regards,
Vignesh
From: Vigneswaran R <vi...@at...> - 2010-05-14 13:01:27
|
Dear Kim-san,

On Tuesday 11 May 2010 04:06 PM, Vigneswaran R wrote:
> Dear Kim-san,
>
> How are you, and how is your vacation going?
>
> We were testing the Cimha Vesper live DVD based on Fedora 7 on various
> machines. We observed that Fedora 7 doesn't have network drivers for the
> latest machines. Have you ever faced that problem and solved it? If so,
> please let us know. We are working on that.

We observed that the problem occurs with the latest Intel NICs, which require the "e1000e" driver. We downloaded the source and compiled it. Now the live CD is identifying the network card :-).

Thank you.

Best Regards,
Vignesh
From: Vigneswaran R <vi...@at...> - 2010-05-11 10:36:06
|
Dear Kim-san,

How are you, and how is your vacation going?

We were testing the Cimha Vesper live DVD based on Fedora 7 on various machines. We observed that Fedora 7 doesn't have network drivers for the latest machines. Have you ever faced that problem and solved it? If so, please let us know. We are working on that.

Thank you.

Best Regards,
Vignesh
From: Vigneswaran R <vi...@at...> - 2010-04-26 03:48:38
|
Dear Kim-san,

Thank you very much for your efforts. We will try this and get back to you.

Best Regards,
Vignesh
From: 金 成昊 <sun...@hi...> - 2010-04-26 00:45:25
|
Dear Vigneswaran-san,

Here is the guest-part driver, part 4.

Thank you.
Best Regards,
Kim

-------------------------------------------------------------------
file: relay_test.c <TODO: filename change & cleanup>
description: relay operation for guest driver
-------------------------------------------------------------------

#include <linux/spinlock.h>
#include <linux/err.h>
#include <linux/uaccess.h>
#include <linux/debugfs.h>
#include "virtio_relay.h"

/* relay related */
static char *relay_dirname = "virtio-vsp";
static char *relay_basename = "cpu";
static struct dentry *parent_dir = NULL;

static struct dentry *create_buf_file_callback(
		const char *fname,
		struct dentry *parent, int mode,
		struct rchan_buf *buf, int *is_global)
{
	printk("[%s] create_buf_file[%s]\n", __FUNCTION__, fname);
	return debugfs_create_file(
			fname, mode,
			parent, buf,
			&relay_file_operations);
}

static int remove_buf_file_callback(struct dentry *dentry)
{
	debugfs_remove(dentry);
	return 0;
}

static int subbuf_start_callback(struct rchan_buf *buf,
		void *subbuf,
		void *prev_subbuf,
		size_t prev_padding)
{
	printk("guest-relay: subbuf start\n");
	return 1;
}

static struct rchan_callbacks relay_callbacks = {
	.subbuf_start = subbuf_start_callback,
	.create_buf_file = create_buf_file_callback,
	.remove_buf_file = remove_buf_file_callback,
};

/* export symbol */
struct rchan *relay_init(size_t subbuf_size, size_t nr_subbufs)
{
	struct rchan *rchan;
	parent_dir = debugfs_create_dir(relay_dirname, NULL);
	rchan = relay_open(relay_basename, parent_dir,
			subbuf_size, nr_subbufs, &relay_callbacks, 0);
	return rchan;
}

void relay_cleanup(struct rchan *rchan)
{
	if (rchan)
		relay_close(rchan);
	if (parent_dir)
		debugfs_remove(parent_dir);
}

static size_t relay_read_start_pos(struct rchan_buf *buf)
{
	size_t padding, padding_start, padding_end;
	size_t read_subbuf, read_pos;
	/* first retrieve read start pos */
	size_t subbuf_size = buf->chan->subbuf_size;
	size_t n_subbufs = buf->chan->n_subbufs;
	size_t consumed = buf->subbufs_consumed % n_subbufs;

	read_pos = consumed * subbuf_size + buf->bytes_consumed;
	/* modify read_pos */
	read_subbuf = read_pos / subbuf_size;
	padding = buf->padding[read_subbuf];
	padding_start = (read_subbuf + 1) * subbuf_size - padding;
	padding_end = (read_subbuf + 1) * subbuf_size;
	if (read_pos >= padding_start && read_pos < padding_end) {
		read_subbuf = (read_subbuf + 1) % n_subbufs;
		read_pos = read_subbuf * subbuf_size;
	}
	return read_pos;
}

static size_t relay_read_subbuf_avail(size_t read_pos, struct rchan_buf *buf)
{
	size_t padding, avail = 0;
	size_t read_subbuf, read_offset, write_subbuf, write_offset;
	size_t subbuf_size = buf->chan->subbuf_size;

	write_subbuf = (buf->data - buf->start) / subbuf_size;
	write_offset = buf->offset > subbuf_size ? subbuf_size : buf->offset;
	read_subbuf = read_pos / subbuf_size;
	read_offset = read_pos % subbuf_size;
	padding = buf->padding[read_subbuf];

	if (read_subbuf == write_subbuf) {
		if (read_offset + padding < write_offset)
			avail = write_offset - (read_offset + padding);
	} else
		avail = (subbuf_size - padding) - read_offset;

	return avail;
}

int relay_read_info(struct rchan_buf *buf, size_t *read_pos, size_t *avail)
{
	size_t pos = relay_read_start_pos(buf);
	size_t len = relay_read_subbuf_avail(pos, buf);
	*read_pos = pos;
	*avail = len;
	return 0;
}

void relay_read_consume(struct rchan_buf *buf,
		size_t read_pos,
		size_t bytes_consumed)
{
	size_t subbuf_size = buf->chan->subbuf_size;
	size_t n_subbufs = buf->chan->n_subbufs;
	size_t read_subbuf;
	if (buf->subbufs_produced == buf->subbufs_consumed &&
	    buf->offset == buf->bytes_consumed)
		return;

	if (buf->bytes_consumed + bytes_consumed > subbuf_size) {
		relay_subbufs_consumed(buf->chan, buf->cpu, 1);
		buf->bytes_consumed = 0;
	}
	buf->bytes_consumed += bytes_consumed;
	if (!read_pos)
		read_subbuf = buf->subbufs_consumed % n_subbufs;
	else
		read_subbuf = read_pos / subbuf_size;

	if (buf->bytes_consumed + buf->padding[read_subbuf] == subbuf_size) {
		if ((read_subbuf == buf->subbufs_produced % n_subbufs) &&
		    (buf->offset == subbuf_size))
			return;
		relay_subbufs_consumed(buf->chan, buf->cpu, 1);
		buf->bytes_consumed = 0;
	}
}

struct page *relay_get_page(struct rchan_buf *buf,
		void *start,
		size_t read_pos)
{
	int index = 0;
	struct page **page_array = buf->page_array;
	if (!page_array)
		return NULL;
	if (read_pos > buf->chan->alloc_size)
		return NULL;

	index = read_pos / PAGE_SIZE;
	return (page_array[index]);
}

--
---
Sungho KIM
Linux Technology Center
Hitachi, Ltd., Systems Development Laboratory
E-mail: sun...@hi...
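A minimal usage sketch for the helpers above, added for illustration and not part of the posted file: it assumes a caller running in the guest that has already opened the channel with relay_init(). The function name my_probe_flush() and the way the read window is handed to the transport are assumptions.

/*
 * Hypothetical caller of the helpers defined in relay_test.c above:
 * write a record, find out how much of the current sub-buffer can be
 * read, then mark that range consumed.
 */
#include <linux/relay.h>
#include <linux/smp.h>
#include "virtio_relay.h"

static void my_probe_flush(struct rchan *rchan, const char *msg, size_t len)
{
	size_t read_pos, avail;
	unsigned int cpu = smp_processor_id();
	struct rchan_buf *buf = rchan->buf[cpu];

	relay_write(rchan, msg, len);            /* produce a record */
	relay_read_info(buf, &read_pos, &avail); /* where and how much to read */
	/* ... hand [buf->start + read_pos, avail) to the transport ... */
	relay_read_consume(buf, read_pos, avail); /* advance the consumer */
}

/* e.g. from a probe's init path:
 *	struct rchan *rc = relay_init(PAGE_SIZE, 1);
 *	my_probe_flush(rc, "hello", 5);
 *	relay_cleanup(rc);
 */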
From: 金 成昊 <sun...@hi...> - 2010-04-26 00:41:40
|
Dear Vigneswaran-san,

Here is the guest-part driver, part 3.

Thank you.
Best Regards,
Kim

-------------------------------------------------------------------
file: Makefile
description:
-------------------------------------------------------------------

EXTRA_CFLAGS := -I.

#backend driver
obj-m := virtio_listen.o
virtio_listen-y := virtio_vsp.o relay_test.o

KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)

#make -C $(KDIR) -> change dir to $(KDIR)
#SUBDIRS -> options
#simply put, make modules
default:
	$(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules

clean:
	rm -f Module.symvers
	rm -rf *.ko
	rm -rf *.mod.*
	rm -rf .*.cmd
	rm -rf *.o

--
---
Sungho KIM
Linux Technology Center
Hitachi, Ltd., Systems Development Laboratory
E-mail: sun...@hi...
From: 金 成昊 <sun...@hi...> - 2010-04-26 00:36:47
|
Dear Vigneswaran-san, Here is guest-part driver part2. Thank you. Best Regards, Kim ------------------------------------------------------------------- file: virtio_vsp.c description: main parts ------------------------------------------------------------------- #include <linux/spinlock.h> #include <linux/virtio.h> #include <linux/scatterlist.h> #include <linux/mempool.h> #include <linux/proc_fs.h> #include <linux/completion.h> #include <linux/kthread.h> #include <linux/err.h> #include <linux/uaccess.h> #include "virtio_vsp.h" #include "virtio_relay.h" /* mod info */ #define MODNAME "virtio-vsp-guest" #define VERSION "0.0.1" #define VIRTIO_ECHO_MSGLEN 128 /* echonig relay data to host relay */ struct virtio_echo { spinlock_t lock; struct virtio_device *vdev; struct virtqueue *vq; /* Request tracking. */ struct list_head reqs; /* FIXME: only one probe allowed */ //struct list_head req_rchans; struct virtecho_req_rchan *vrr; /* pool for performance enhancement */ mempool_t *pool; /* What host tells us, plus 2 for header & tailer. */ unsigned int sg_elems; /* Scatterlist: can be too big for stack. */ struct scatterlist sg[/*sg_elems*/]; }; /* echo msg element */ struct virtecho_msg { u8 len; char *msg; }; struct virtecho_req { struct list_head list; struct virtecho_msg *req; struct rchan *rchan; struct virtio_echo_outhdr out_hdr; u8 status; }; struct virtecho_req_rchan { struct list_head list; char mod_name[NAME_MAX]; u8 nr_rchan_buf; struct rchan *rchan; }; /* common dev */ struct virtio_device *g_vdev; /* working */ static char *inbuf; static uint32_t inbuf_len; /* sync */ DECLARE_COMPLETION(virtecho_completion); wait_queue_head_t virtecho_rsp_wq; unsigned int virtecho_wait_rsps = 0; static void virtecho_wakeup(void); static int virtecho_init(struct virtio_device *vdev); static int virtecho_exit(struct virtio_device *vdev); /* callback by interrupt handler of virtio */ static void echo_done(struct virtqueue *vq) { struct virtio_echo *vecho = vq->vdev->priv; struct virtecho_req *ver; unsigned int len; unsigned long flags; printk("echo_done\n"); spin_lock_irqsave(&vecho->lock, flags); while ((ver = vecho->vq->vq_ops->get_buf(vecho->vq, &len)) != NULL) { int error; printk("rchan[%pF] cleanup\n", ver->rchan); switch (ver->status) { /* all req have rchan for reference */ case VIRTIO_ECHO_S_OK: inbuf_len = len; if (ver->out_hdr.type & VIRTIO_VSP_T_FIN) { /* clean up relay */ if (ver->rchan) { //relay_cleanup(ver->rchan); //kfree(vecho->vrr); //vecho->vrr = NULL; } } error = 0; break; case VIRTIO_ECHO_S_UNSUPP: error = -ENOTTY; break; default: error = -EIO; break; } list_del(&ver->list); /* FIXME: only one probe allowed */ mempool_free(ver, vecho->pool); } spin_unlock_irqrestore(&vecho->lock, flags); #if 1 /* notify completion work-thread */ printk("echo_done complete\n"); complete(&virtecho_completion); #endif } static bool do_req(struct virtio_echo *vecho, struct virtecho_msg *req) { unsigned long num, out, in; struct virtecho_req *ver; ver = mempool_alloc(vecho->pool, GFP_ATOMIC); if (!ver) /* When another request finishes we'll try again. 
*/ return false; ver->req = req; /* set header */ { ver->out_hdr.type = VIRTIO_VSP_T_OUT; ver->out_hdr.sector = page_to_phys(virt_to_page(req->msg)); ver->out_hdr.ioprio = req->len; printk("do_req: pa = %x\n", ver->out_hdr.sector); } /* set out_hdr to vring */ sg_set_buf(&vecho->sg[0], &ver->out_hdr, sizeof(ver->out_hdr)); /* * req body to sg list * TODO : prepare generic api like "blk_rq_map_sg" */ { num = 1; sg_set_buf(&vecho->sg[1], ver->req->msg, ver->req->len); } /* set in_hdr to vring */ sg_set_buf(&vecho->sg[num+1], &ver->status, sizeof(ver->status)); /* * set data direction. * now only T_OUT */ if (!strncmp(ver->req->msg, "int", 3)) { /* * initialize relay */ unsigned int nr_buf = 0, i; struct virtecho_req_rchan *vrr= kzalloc(sizeof(*vrr), GFP_KERNEL); /* FIXME: err check here */ ver->out_hdr.type |= VIRTIO_VSP_T_INIT; /* init relay by probes */ vrr->rchan = relay_init(PAGE_SIZE , 1); strcpy(vrr->mod_name, module_name(THIS_MODULE)); if (vrr->rchan) { for_each_possible_cpu(i) if (vrr->rchan->buf[i]) nr_buf++; ver->out_hdr.ioprio = nr_buf; vrr->nr_rchan_buf = nr_buf; printk("c[%d]:buf_size[%d], nr_subbufs[%d], name[%s]\n", ver->out_hdr.ioprio, vrr->rchan->subbuf_size, vrr->rchan->n_subbufs, vrr->mod_name); } ver->rchan = vrr->rchan; //list_add_tail(&vrr->list, &vecho->req_rchans); vecho->vrr = vrr; printk("[%s] int-rchan[%pF]\n", __FUNCTION__, ver->rchan); #if 0 ver->out_hdr.sector = virt_to_phys(virt_to_page(ver->rchan->buf[0]->start)); #endif sg_set_buf(&vecho->sg[1], vrr->rchan, sizeof(struct rchan)); } else if (!strncmp(ver->req->msg, "end", 3)) { /* * cleanup relay */ ver->out_hdr.type |= VIRTIO_VSP_T_FIN; ver->rchan = vecho->vrr->rchan; printk("[%s] end-rchan[%pF]\n", __FUNCTION__, ver->rchan); /* probe will cleanup */ relay_cleanup(ver->rchan); } else { /* set probe data * */ size_t read_pos, avail, cpu_id; struct page *pg; ver->out_hdr.type |= VIRTIO_VSP_T_DATA; /* set rchan */ ver->rchan = vecho->vrr->rchan; printk("[%s] data-rchan[%pF]\n", __FUNCTION__, ver->rchan); /* in real, some probes relay_write */ relay_write(ver->rchan, ver->req->msg, ver->req->len); /* send index */ /* TODO: mutex before retrieve index */ cpu_id = smp_processor_id(); relay_read_info( ver->rchan->buf[cpu_id], &read_pos, &avail ); printk("read_pos[%u]:avail[%u]\n", read_pos, avail); relay_read_consume(ver->rchan->buf[cpu_id], read_pos, avail); /* set sg * FIXME: page unit */ #if 1 /* vmap address to staright map */ pg = relay_get_page(ver->rchan->buf[cpu_id], ver->rchan->buf[cpu_id]->start, read_pos); sg_set_buf(&vecho->sg[1], phys_to_virt(page_to_phys(pg)) + (read_pos % PAGE_SIZE), avail); #endif /* buf->start from vmap, so above doesnt work */ do { struct page **pga, *pg; int len; char testbuf[128]; len = avail > 127 ? 127: avail; memcpy(testbuf, ver->rchan->buf[cpu_id]->start + read_pos, len); testbuf[len] = '\0'; pga = ver->rchan->buf[cpu_id]->page_array; pg = pga[0]; printk("sg_set_buf[%s] pg-vadr[%pF]\n", testbuf, page_to_phys(pg)); ver->out_hdr.sector = page_to_phys(pg); ver->out_hdr.ioprio = avail; } while (0); } out = 1 + num; in = 1; /* add req to vring */ if (vecho->vq->vq_ops->add_buf(vecho->vq, vecho->sg, out, in, ver)) { mempool_free(ver, vecho->pool); return false; } /* add req to list to resend if failure */ list_add_tail(&ver->list, &vecho->reqs); return true; } static bool do_testload(struct virtio_echo *vecho) { unsigned long num, out, in; struct virtecho_req *ver; ver = mempool_alloc(vecho->pool, GFP_ATOMIC); if (!ver) /* When another request finishes we'll try again. 
*/ return false; //ver->req = req; /* set header */ { ver->out_hdr.type = 0; ver->out_hdr.sector = page_to_phys(virt_to_page(inbuf)); ver->out_hdr.ioprio = (PAGE_SIZE*4); printk("do_testload: pa = %x\n", ver->out_hdr.sector); } /* set out_hdr to vring */ sg_set_buf(&vecho->sg[0], &ver->out_hdr, sizeof(ver->out_hdr)); /* * req body to sg list * TODO : prepare generic api like "blk_rq_map_sg" */ { num = 1; sg_set_buf(&vecho->sg[1], inbuf, sizeof(*inbuf)); } /* set in_hdr to vring */ sg_set_buf(&vecho->sg[num+1], &ver->status, sizeof(ver->status)); /* * set data direction. * now only T_OUT */ { ver->out_hdr.type |= VIRTIO_VSP_T_IN; out = 1; in = 2; } /* add req to vring */ if (vecho->vq->vq_ops->add_buf(vecho->vq, vecho->sg, out, in, ver)) { mempool_free(ver, vecho->pool); return false; } /* add req to list to resend if failure */ list_add_tail(&ver->list, &vecho->reqs); return true; } static void do_virtecho_request(struct virtecho_msg *req, uint32_t type) { struct virtio_echo *vecho = NULL; unsigned int issued = 0; vecho = g_vdev->priv; if (vecho != NULL) { /* If this request fails, stop queue and wait for something to finish to restart it. */ if (type == VIRTIO_VSP_T_OUT) { if (!do_req(vecho, req)) { printk("do_virtecho_req: do_req fail\n"); return ; } } else { printk("testload called\n"); if (!do_testload(vecho)) { printk("do_testload_req: fail\n"); return; } } issued++; } else { printk("do_virtecho_req: vecho not set\n"); return; } if (issued) vecho->vq->vq_ops->kick(vecho->vq); } static int virtvsp_probe(struct virtio_device *vdev) { struct virtio_echo *vecho; int err; u64 cap; u32 sg_elems; printk("virtecho probe\n"); /* We need to know how many segments before we allocate. */ err = virtio_config_val(vdev, VIRTIO_VSP_F_SEG_MAX, offsetof(struct virtio_vsp_config, seg_max), &sg_elems); if (err) sg_elems = 1; /* * Create vecho struct. * We need an extra sg elements at head(req out_hdr) and * tail(rsp in_hdr). */ sg_elems += 2; vdev->priv = vecho = kmalloc( sizeof(*vecho) + sizeof(vecho->sg[0]) * sg_elems, GFP_KERNEL); if (!vecho) { err = -ENOMEM; goto out; } INIT_LIST_HEAD(&vecho->reqs); spin_lock_init(&vecho->lock); vecho->vdev = vdev; vecho->sg_elems = sg_elems; sg_init_table(vecho->sg, vecho->sg_elems); /* working page */ inbuf = kmalloc(PAGE_SIZE*4, GFP_KERNEL); if (!inbuf) { err = -ENOMEM; goto out; } /* * We expect one virtqueue, for output. * attach BH invoked by virtio-pci irq handler. */ vecho->vq = vdev->config->find_vq(vdev, 0, echo_done); if (IS_ERR(vecho->vq)) { err = PTR_ERR(vecho->vq); goto out_free_vecho; } vecho->pool = mempool_create_kmalloc_pool(1, sizeof(struct virtecho_req)); if (!vecho->pool) { err = -ENOMEM; goto out_free_vq; } /* If barriers are supported, tell block layer that queue is ordered */ if (virtio_has_feature(vdev, VIRTIO_VSP_F_GEOMETRY)) printk("virtio-echo: has virtio_vsp_f_geometry\n");; /* Host must always specify the capacity. */ vdev->config->get(vdev, offsetof(struct virtio_vsp_config, capacity), &cap, sizeof(cap)); /* If capacity is too big, truncate with warning. 
*/ if ((sector_t)cap != cap) { dev_warn(&vdev->dev, "Capacity %llu too large: truncating\n", (unsigned long long)cap); cap = (sector_t)-1; } printk("virtio-echo: has %lu capacity\n", cap); /* create kthread */ /* FIXME: hand in vdev to userinterface */ g_vdev = vdev; virtecho_init(vdev); return 0; out_free_vq: vdev->config->del_vq(vecho->vq); out_free_vecho: kfree(vecho); out: return err; } static void virtvsp_remove(struct virtio_device *vdev) { struct virtio_vsp *vecho = vdev->priv; /* stop kthread */ virtecho_exit(vdev); /* Nothing should be pending. */ BUG_ON(!list_empty(&vecho->reqs)); /* Stop all the virtqueues. */ vdev->config->reset(vdev); mempool_destroy(vecho->pool); vdev->config->del_vq(vecho->vq); kfree(vecho); } static struct virtio_device_id id_table[] = { { VIRTIO_ID_VSP, VIRTIO_DEV_ANY_ID }, { 0 }, }; static unsigned int features[] = { VIRTIO_VSP_F_BARRIER, VIRTIO_VSP_F_SEG_MAX, VIRTIO_VSP_F_SIZE_MAX, VIRTIO_VSP_F_GEOMETRY, }; static struct virtio_driver virtio_vsp = { .feature_table = features, .feature_table_size = ARRAY_SIZE(features), .driver.name = KBUILD_MODNAME, .driver.owner = THIS_MODULE, .id_table = id_table, .probe = virtvsp_probe, .remove = __devexit_p(virtvsp_remove), }; /** * user interface. * "/proc/virtio-echo" */ /* vsp interface implementation for virtecho * vsp_relay_export_start (rchan, modname) * vsp_relay_export_end (modname) * vsp_relay_update_index (rchan_buf) * * */ bool vsp_relay_ctrl(struct virtio_echo *vecho, unsigned long cmd) { unsigned long num, out, in; struct virtecho_req *ver; ver = mempool_alloc(vecho->pool, GFP_ATOMIC); if (!ver) /* When another request finishes we'll try again. */ return false; /* set header */ { ver->out_hdr.type = VIRTIO_VSP_T_OUT; } num = 1; /* set out_hdr to vring */ sg_set_buf(&vecho->sg[0], &ver->out_hdr, sizeof(ver->out_hdr)); /* set in_hdr to vring */ sg_set_buf(&vecho->sg[num+1], &ver->status, sizeof(ver->status)); printk("virt-relay-ctrl[%d]\n", cmd); if (cmd == VIRTIO_RELAY_OPEN) { unsigned int nr_buf = 0, i; /* FIXME: err check here */ ver->out_hdr.type |= VIRTIO_VSP_T_INIT; /* init relay by probes */ ver->rchan = vecho->vrr->rchan; //strcpy(vrr->mod_name, modname); if (ver->rchan) { for_each_possible_cpu(i) if (ver->rchan->buf[i]) nr_buf++; ver->out_hdr.ioprio = nr_buf; //vrr->nr_rchan_buf = nr_buf; printk("c[%d]:buf_size[%d], nr_subbufs[%d], name[%s]\n", ver->out_hdr.ioprio, ver->rchan->subbuf_size, ver->rchan->n_subbufs, ver->rchan->base_filename); } //list_add_tail(&vrr->list, &vecho->req_rchans); printk("[%s] int-rchan[%pF]\n", __FUNCTION__, ver->rchan); #if 0 ver->out_hdr.sector = virt_to_phys(virt_to_page(ver->rchan->buf[0]->start)); #endif sg_set_buf(&vecho->sg[1], ver->rchan, sizeof(struct rchan)); } else { ver->out_hdr.type |= VIRTIO_VSP_T_FIN; ver->rchan = vecho->vrr->rchan; sg_set_buf(&vecho->sg[1], ver->rchan, sizeof(struct rchan)); printk("[%s] end-rchan[%pF]\n", __FUNCTION__, ver->rchan); /* probe will do relay_close */ //relay_cleanup(ver->rchan); } out = 1 + num; in = 1; /* add req to vring */ if (vecho->vq->vq_ops->add_buf(vecho->vq, vecho->sg, out, in, ver)) { mempool_free(ver, vecho->pool); printk("[%s] add-buf error\n", __FUNCTION__); return false; } /* add req to list to resend if failure */ list_add_tail(&ver->list, &vecho->reqs); vecho->vq->vq_ops->kick(vecho->vq); return true; } void vsp_relay_write_atomic(void) { printk("[%s] wakeup kthread for update index\n", __FUNCTION__); virtecho_wakeup(); } int vsp_relay_export_start(struct rchan *rchan, const char *name) { bool err = false; 
struct virtio_echo *vecho = g_vdev->priv; struct virtecho_req_rchan *vrr = kzalloc(sizeof(*vrr), GFP_KERNEL); vrr->rchan = rchan; strcpy(vrr->mod_name, name); vecho->vrr = vrr; err = vsp_relay_ctrl(vecho, VIRTIO_RELAY_OPEN); /* wait for completion */ printk("vsp_relay_export_start: now wait for completion[%d]\n", err); wait_for_completion(&virtecho_completion); return 0; } EXPORT_SYMBOL_GPL(vsp_relay_export_start); void vsp_relay_export_end(const char *name) { struct virtio_echo *vecho = g_vdev->priv; vsp_relay_ctrl(vecho, VIRTIO_RELAY_CLOSE); /* wait for completion and woken up by callback */ printk("vsp_relay_export_end: now wait for completion\n"); wait_for_completion(&virtecho_completion); if (vecho->vrr) kfree(vecho->vrr); vecho->vrr = NULL; } EXPORT_SYMBOL_GPL(vsp_relay_export_end); void vsp_update_host_index(struct rchan_buf *buf) { printk("update called\n"); vsp_relay_write_atomic(); } EXPORT_SYMBOL_GPL(vsp_update_host_index); static struct task_struct *virtvspd; static struct proc_dir_entry *virtio_vsp_proc; int virtio_echo_thread(void *params) { struct virtio_echo *vecho = ((struct virtio_device *)params)->priv; printk("[%s] started\n", current->comm); while (!kthread_should_stop()) { wait_event_interruptible( virtecho_rsp_wq, virtecho_wait_rsps || kthread_should_stop()); if (kthread_should_stop()) break; printk("[%s] wait_rsps = %d\n", __FUNCTION__, virtecho_wait_rsps); virtecho_wait_rsps = 0; /* clear flag *before* checking for work */ smp_mb(); /* do some work */ /* TODO: mutex before retrieve index */ printk("[%s] kthread do some work\n", __FUNCTION__); do { unsigned long out, in; size_t read_pos, avail, cpu_id; struct page *pg; struct virtecho_req *ver = mempool_alloc(vecho->pool, GFP_ATOMIC); if (!ver) /* When another request finishes we'll try again. 
*/ continue; ver->out_hdr.type = (VIRTIO_VSP_T_OUT | VIRTIO_VSP_T_DATA); /* set rchan */ ver->rchan = vecho->vrr->rchan; /* set out_hdr to vring */ sg_set_buf(&vecho->sg[0], &ver->out_hdr, sizeof(ver->out_hdr)); /* set in_hdr to vring */ sg_set_buf(&vecho->sg[2], &ver->status, sizeof(ver->status)); cpu_id = smp_processor_id(); relay_read_info( ver->rchan->buf[cpu_id], &read_pos, &avail ); printk("read_pos[%u]:avail[%u]\n", read_pos, avail); relay_read_consume(ver->rchan->buf[cpu_id], read_pos, avail); /* set sg * FIXME: page unit */ /* vmap address to staright map */ pg = relay_get_page(ver->rchan->buf[cpu_id], ver->rchan->buf[cpu_id]->start, read_pos); sg_set_buf(&vecho->sg[1], phys_to_virt(page_to_phys(pg)) + (read_pos % PAGE_SIZE), avail); out = 2; in = 1; /* add req to vring */ if (vecho->vq->vq_ops->add_buf(vecho->vq, vecho->sg, out, in, ver)) { printk("[%s] add_buf error\n", __FUNCTION__); mempool_free(ver, vecho->pool); continue; //return false; } /* add req to list to resend if failure */ list_add_tail(&ver->list, &vecho->reqs); vecho->vq->vq_ops->kick(vecho->vq); } while (0); } printk("[%s] exiting\n", current->comm); virtvspd = NULL; return 0; } static void virtecho_wakeup(void) { virtecho_wait_rsps = 1; wake_up(&virtecho_rsp_wq); } /* vsp interface */ static ssize_t virtvsp_dev_write(struct file *filp, const char __user *ubuf, size_t len, loff_t *ppos) { int err; struct virtecho_msg *req = kzalloc(sizeof(*req), GFP_KERNEL); if (IS_ERR(req)) { printk("virtio_edev_write: error kzalloc\n"); return PTR_ERR(req); } req->msg = kzalloc(PAGE_SIZE, GFP_KERNEL); if (IS_ERR(req->msg)) { printk("virtio_dev_write: kzalloc msg error\n"); err = PTR_ERR(req->msg); goto out; } /* call do_echo_req */ if (len >= PAGE_SIZE) { printk("virtio_edev_write: error oversize msg\n"); kfree(req->msg); err = -EFAULT; goto out; } err = copy_from_user(req->msg, ubuf, len); req->len = len; /* just for test */ copy_from_user(inbuf, ubuf, len); do_virtecho_request(req, VIRTIO_VSP_T_OUT); #if 1 printk("virtecho_write: now wait for completion\n"); wait_for_completion(&virtecho_completion); #endif kfree(req->msg); out: kfree(req); return err; } static ssize_t virtvsp_dev_read(struct file *filp, char __user *ubuf, size_t len, loff_t *ppos) { int err; do_virtecho_request(NULL, VIRTIO_VSP_T_IN); #if 1 printk("virtecho_write: now wait for completion\n"); wait_for_completion(&virtecho_completion); #endif printk("wakeuped ret-[%u]\n", inbuf_len); err = copy_to_user(ubuf, inbuf, inbuf_len); inbuf[inbuf_len] = '\0'; printk("virtecho_dev_read:[%s]\n", inbuf); return inbuf_len; } static int virtvsp_dev_open(struct inode *inode, struct file *filp) { return 0; } static int virtvsp_dev_release(struct inode *inode, struct file *filp) { return 0; } static int virtvsp_dev_ioctl(struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg) { return 0; } /* interface with userland */ static const struct file_operations virtvsp_dev_fops = { .read = virtvsp_dev_read, .write = virtvsp_dev_write, .ioctl = virtvsp_dev_ioctl, .open = virtvsp_dev_open, .release = virtvsp_dev_release, }; static int virtvsp_init(struct virtio_device *vdev) { init_waitqueue_head(&virtecho_rsp_wq); virtecho_wait_rsps = 0; virtvspd = kthread_run(virtio_echo_thread, vdev, "virt-vspd"); if (IS_ERR(virtvspd)) { virtvspd = NULL; printk("err: start virtvspd\n"); return PTR_ERR(virtvspd); } /* make interface to user */ virtio_vsp_proc = create_proc_entry("virtio-vsp", S_IFREG | S_IRUGO | S_IWUGO, NULL); if (virtio_vsp_proc) { virtio_vsp_proc->proc_fops 
= &virtvsp_dev_fops; virtio_vsp_proc->data = vdev; } else return -EFAULT; return 0; } static int virtvsp_exit(struct virtio_device *vdev) { if (virtvspd) kthread_stop(virtvspd); /* clear user interface */ if (virtio_vsp_proc) remove_proc_entry("virtio-vsp", NULL); return 0; } /* moudle initialize and fin */ static int __init init(void) { printk("%s(v%s): LOADED\n", MODNAME, VERSION); return register_virtio_driver(&virtio_vsp); } static void __exit fini(void) { unregister_virtio_driver(&virtio_vsp); } module_init(init); module_exit(fini); MODULE_DEVICE_TABLE(virtio, id_table); MODULE_DESCRIPTION("Virtio guest vsp driver"); MODULE_LICENSE("GPL"); -- --- Sungho KIM Linux Technology Center Hitachi, Ltd., Systems Development Laboratory E-mail: sun...@hi... |
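A minimal user-space sketch, added for illustration and not part of the posted driver: it exercises the /proc/virtio-vsp entry created by virtvsp_init() above, relying on the "int"/"end" prefixes that do_req() checks. It assumes the guest module is loaded; the sample payload is arbitrary.

/*
 * Hypothetical user-space exercise of /proc/virtio-vsp:
 * "int" initializes the guest relay, any other string is relayed as
 * probe data, "end" tears the relay down.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/virtio-vsp", O_RDWR);
	if (fd < 0) {
		perror("open /proc/virtio-vsp");
		return 1;
	}
	write(fd, "int", 3);          /* VIRTIO_VSP_T_INIT path */
	write(fd, "hello probe", 11); /* VIRTIO_VSP_T_DATA path */
	write(fd, "end", 3);          /* VIRTIO_VSP_T_FIN path  */
	close(fd);
	return 0;
}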
From: 金 成昊 <sun...@hi...> - 2010-04-26 00:33:39
|
Dear Vigneswaran-san,

Here is the guest-part driver. Vesper for KVM consists of:

(1) guest part driver
    - the same role as the Xen guest driver
(2) host part driver
    - provides a user interface for qemu to write probe data to the host relay
    - writes probe data to the host relay
(3) qemu part driver
    - a similar role to the Xen host driver, but
    - it has to write probe data through the host part driver, which causes copy overhead
    - the vbus patch tackles that problem

The other two parts will be uploaded when they are ready. I will work on them during the long vacation beginning on 28th April. I hope all parts will be uploaded by the end of the vacation. Please keep in contact with me through the vesper mailing list.

Thank you.
Best Regards,
Kim

-------------------------------------------------------------------
file: virtio_relay.h
description: user interface for probes
-------------------------------------------------------------------

#ifndef _VIRTIO_RELAY_H
#define _VIRTIO_RELAY_H

#include <linux/relay.h>

#define VIRTIO_RELAY_OPEN	1
#define VIRTIO_RELAY_WRITE	2
#define VIRTIO_RELAY_CLOSE	4

extern int vsp_relay_export_start(struct rchan *, const char *);
extern void vsp_relay_export_end(const char *);
extern void vsp_update_host_index(struct rchan_buf *);

extern struct rchan *relay_init(size_t subbuf_size, size_t nr_subbufs);
extern void relay_cleanup(struct rchan *rchan);
extern int relay_read_info(struct rchan_buf *, size_t *, size_t *);
extern void relay_read_consume(struct rchan_buf *, size_t, size_t);
extern struct page *relay_get_page(struct rchan_buf *buf,
				   void *start,
				   size_t read_pos);

#endif

-------------------------------------------------------------------
file: virtio_vsp.h
description: definition of vsp device as the virtio device
-------------------------------------------------------------------

/*
 * Virtio VSP Device
 *
 * Copyright Hitachi, Corp. 2009
 *
 * Authors:
 *  Sungho Kim <sun...@hi...>
 *
 * This work is licensed under the terms of the GNU GPL, version 2. See
 * the COPYING file in the top-level directory.
 *
 */
#ifndef _QEMU_VIRTIO_VSP_H
#define _QEMU_VIRTIO_VSP_H

#include <linux/types.h>
#include <linux/virtio_config.h>
#include <linux/relay.h>

/* from Linux's linux/virtio_blk.h */

/* The ID for virtio_echo */
#define VIRTIO_ID_VSP		9

/* echo config magic */
#define VIRTIO_VSP_MAGIC	10

/* Feature bits */
#define VIRTIO_VSP_F_BARRIER	0  /* Does host support barriers? */
#define VIRTIO_VSP_F_SIZE_MAX	1  /* Indicates maximum segment size */
#define VIRTIO_VSP_F_SEG_MAX	2  /* Indicates maximum # of segments */
#define VIRTIO_VSP_F_GEOMETRY	4  /* Indicates support of legacy geometry */

struct virtio_vsp_config {
	uint64_t capacity;
	uint32_t size_max;
	uint32_t seg_max;
	uint16_t cylinders;
	uint8_t heads;
	uint8_t sectors;
} __attribute__((packed));

/* These two define direction. */
#define VIRTIO_VSP_T_IN		1
#define VIRTIO_VSP_T_OUT	2
#define VIRTIO_VSP_T_INIT	4
#define VIRTIO_VSP_T_DATA	8
#define VIRTIO_VSP_T_FIN	16

/* This bit says it's a scsi command, not an actual read or write. */
#define VIRTIO_VSP_T_SCSI_CMD	2

/* Barrier before this op. */
#define VIRTIO_VSP_T_BARRIER	0x80000000

/* This is the first element of the read scatter-gather list. */
struct virtio_echo_outhdr {
	/* VIRTIO_BLK_T* */
	uint32_t type;
	/* io priority. */
	uint32_t ioprio;
	/* Sector (ie. 512 byte offset) */
	uint64_t sector;
};

#define VIRTIO_ECHO_S_OK	0
#define VIRTIO_ECHO_S_IOERR	1
#define VIRTIO_ECHO_S_UNSUPP	2

/* This is the first element of the write scatter-gather list */
struct virtio_echo_inhdr {
	unsigned char status;
};

/***************************************************************/
/* include/linux/relay.h */
struct virtecho_rchan {
	u32 version;
	size_t subbuf_size;
	size_t n_subbufs;
	size_t alloc_size;
	void *private_data;
	size_t last_toobig;
	int is_global;
	char base_filename[NAME_MAX];
};
/***************************************************************/

#endif

--
---
Sungho KIM
Linux Technology Center
Hitachi, Ltd., Systems Development Laboratory
E-mail: sun...@hi...
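A minimal sketch of a guest probe module using the interface declared in virtio_relay.h above, added for illustration and not part of the posted code: it assumes the guest vsp driver is loaded and exporting these symbols. The module name my_probe, the relay geometry (PAGE_SIZE by 4 sub-buffers), and the sample record are assumptions.

/*
 * Hypothetical probe module: open a relay channel, announce it to the
 * vsp driver, write one record, and ask the driver to push the new
 * data toward the host relay.
 */
#include <linux/module.h>
#include <linux/relay.h>
#include <linux/smp.h>
#include "virtio_relay.h"

static struct rchan *my_rchan;

static int __init my_probe_init(void)
{
	my_rchan = relay_init(PAGE_SIZE, 4);
	if (!my_rchan)
		return -ENOMEM;

	/* announce the channel to the guest vsp driver / host side */
	if (vsp_relay_export_start(my_rchan, "my_probe"))
		printk("my_probe: export_start failed\n");

	relay_write(my_rchan, "probe loaded\n", 13);

	/* ask the vsp driver to push the new data to the host relay */
	vsp_update_host_index(my_rchan->buf[smp_processor_id()]);
	return 0;
}

static void __exit my_probe_exit(void)
{
	vsp_relay_export_end("my_probe");
	relay_cleanup(my_rchan);
}

module_init(my_probe_init);
module_exit(my_probe_exit);
MODULE_LICENSE("GPL");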
From: Vigneswaran R <vi...@at...> - 2010-04-08 04:08:55
|
Dear Kim-san,

Thank you for coming to India for the F2F meeting. It went fine.

I just thought of reminding you about sharing the Vesper code with KVM support. If it is ready, please share the code with us; we will update Cimha accordingly and create the live CD.

Thank you.

Best Regards,
Vignesh
From: 金 成昊 <sun...@hi...> - 2010-04-08 03:59:51
|
Dear Vigneswaran-san,

Thank you for your reminder.

> I just thought of reminding you about sharing the Vesper code with KVM
> support. If it is ready, please share the code with us; we will update
> Cimha accordingly and create the live CD.

I will gladly let you know when it is ready.

Best regards,
Kim

--
---
Sungho KIM
Linux Technology Center
Hitachi, Ltd., Systems Development Laboratory
E-mail: sun...@hi...
From: 金 成昊 <sun...@hi...> - 2010-02-18 00:48:13
|
Dear Vigneswaran-san,

> We added an image consisting of the overview of Cimha under the
> directory "cimha_src". The image is self explanatory and gives the
> interaction between the various components of Cimha.

Awesome!! Thank you. That gave me a clear picture. Nice diagram!!
I will try it and get back to you.

Thank you again.

Regards,
Kim

--
---
Sungho KIM
Linux Technology Center
Hitachi, Ltd., Systems Development Laboratory
E-mail: sun...@hi...
From: Vigneswaran R <vi...@at...> - 2010-02-17 14:54:30
|
Dear Kim-san,

金 成昊 wrote:
> Dear Vigneswaran-san,
>
> I am trying to integrate the CIMHA work with VESPER.
> Please give me a big picture of the CIMHA structure and
> how the CIMHA components work together.
>
> Here is what I did.
> 1. patch /cimha/cimha_src/patches to vsp0.0.2
> 2. run vsp0.0.2
> 3. compile chm_proxyd (without ssl) and deploy to VM
> 4. install ra script to host /usr/share/where-to-heartbeat-ocf-rsc/
> 5. start heartbeat-gui and register the ra as resource
> 6. with cimha-gui, select probe and add
>
> Please give me the best practice for using Cimha with KE.

Sorry, as I was caught up with some work, I couldn't reply quickly.

The steps you have mentioned are correct. In addition to that, another resource agent called ke_ra is now available. This helps heartbeat communicate with KE for analysing and getting the guest kernel's status.

We added an image consisting of the overview of Cimha under the directory "cimha_src". The image is self explanatory and gives the interaction between the various components of Cimha.

Also, we added a file called INSTALL to give information about which Cimha components need to be installed and how to use them. The latest code is available in the git repository on sf.net.

Please let me know if you have any doubts.

Regards,
Vignesh
From: 金 成昊 <sun...@hi...> - 2010-02-12 06:17:08
|
Dear Vigneswaran-san, I am trying to integrate the CIMHA work with VESPER. Please give me a big picture of the CIMHA structure and how the CIMHA components work together. Here is what I did. 1. patch /cimha/cimha_src/patches to vsp0.0.2 2. run vsp0.0.2 3. compile chm_proxyd (without ssl) and deploy to VM 4. install ra script to host /usr/share/where-to-heartbeat-ocf-rsc/ 5. start heartbeat-gui and register the ra as resource 6. with cimha-gui, select probe and add Please give me the best practice for cimha with ke. Thank you in advance. Regards, Kim Vigneswaran R さんは書きました: > Dear Kim-san, > > We added code to display probe output in Cimha GUI. Also, we committed > the code to sf.net git repository. Please try it and give us your > feedback. > > Usage: > > 1. In the Cimha GUI left navigation bar, click on any guest and insert > probes using the probes GUI. > > 2. The probe outputs will be shown in the Information tab > corresponding to the guest as rows of the following format > > "probe_name1" "ouput" > "probe_name2" "ouput" > ... > > 3. It will show only the last output from all the inserted probes > (ie., no history). > > 4. The GUI will be updated once in two seconds (as with the case of > showing Memory and CPU usage in the Summary tab). > > Thank you. > > Regards, > Vignesh > -- --- Sungho KIM Linux Technology Center Hitachi, Ltd., Systems Development Laboratory E-mail: sun...@hi... |
From: Vigneswaran R <vi...@at...> - 2010-02-04 08:47:31
|
Dear Kim-san, We added code to display probe output in the Cimha GUI. Also, we committed the code to the sf.net git repository. Please try it and give us your feedback. Usage: 1. In the Cimha GUI left navigation bar, click on any guest and insert probes using the probes GUI. 2. The probe outputs will be shown in the Information tab corresponding to the guest as rows of the following format "probe_name1" "output" "probe_name2" "output" ... 3. It will show only the last output from all the inserted probes (i.e., no history). 4. The GUI will be updated once every two seconds (as in the case of showing Memory and CPU usage in the Summary tab). Thank you. Regards, Vignesh |
From: Vigneswaran R <vi...@at...> - 2010-01-12 08:59:26
|
Dear Kim-san, 金 成昊 wrote: > Dear Vigneswaran-san, >> Everything went fine. But, when the handle_events function >> (vsp-appd.c) tried to call the handler, it gave segmentation fault. >> This is due to the fact that, the function address is not valid in >> vsp-appd's memory space. Because, the handler is defined inside >> cimha_agent daemon's memory space. So, we created the second patch to >> get more probe data through the socket itself. > vsp-appd daemon and cimha daemon are different user program from the > view of kernel. > which means vsp-appd and cimha has its own address space. so that, > the address of the handler in cimha space is not avalable to vsp-appd > space. > That results in "Segmentaion Fault". > The first draft of vsp-appd came in with plug-in system like PILS in > Heartbeat which > is acually shared library things. > User registers her/his own handler with plug-in library then vsp-appd > can access to the handler. > As I mentioned in the last mail, it seems heavier for this term > development. > # May be we can utilize PILS of Heartbeat in vsp-appd as well. Ok. For this term, we will use the socket to get the probe information. May be in the next term, once you implement the plugin architecture, we will make use of it. Thank you for your clarification. Best Regards, Vignesh >> Dear Kim-san, >> >> 金 成昊 wrote: >>> Dear Vigneswaran-san, >>> >>>> Your comments are welcome. Thank you. >>> No problem for your commit. >>> Just some comments; >>> In our intention with "int (*handle_input)(int fd, struct probe *me)" , >>> let user add handle_input per probe. >>> Modifying "vsp_add_probe" in vsp-lib.c seems a little heavier for this >>> term of the project IMHO. >>> That's why your commit looks appropriate. >> Thank you. >> >> Before creating the second patch, actually we tried modifying the >> function vsp_add_probe inorder to make use of the (*handle_input). We >> did the following things, >> >> 1. We defined a probe output handler function in Cimha Agent daemon. >> 2. In the vsp_msg_strcut structure, we added one more (long int) field >> to hold the address of the probe output handler function (type casted >> into long int). >> 3. Modified the vsp_add_probe function to take one more (long int) >> argument. While invoking that function, Cimha agent will provide the >> address of the handler function through this argument. >> 4. In _send_add_req function we assigned this address to the >> appropriate field in the vsp_msg_struct structure. >> 5. We modified the __add_probe function in vsp-appd.c to typecast the >> address back to the appropriate function pointer and set it as the >> handler_input for that probe. >> >> Everything went fine. But, when the handle_events function >> (vsp-appd.c) tried to call the handler, it gave segmentation fault. >> This is due to the fact that, the function address is not valid in >> vsp-appd's memory space. Because, the handler is defined inside >> cimha_agent daemon's memory space. So, we created the second patch to >> get more probe data through the socket itself. >> >> Note: Typecasting the function address into "long int" and back to >> function pointer worked fine as long as the function is part of the >> same program which invokes it. >> >> Please pour your comments on this. >> >> Forgot to mention in my previous mail that in the last commit, we have >> added two more probes cpu_usage.c (% of CPU usage) and process_usage.c >> (% of processes). 
As most of the integration work is over, we are >> concentrating on creating more probes and building the knowledge base. >> >> Thank you. >> >> Regards, >> Vignesh >> >> >>>> Dear Kim-san, >>>> >>>> We have made another commit to sf.net git repository which adds a >>>> patch to Vesper-0.0.2, in addition to some changes to our code. >>>> >>>> Totally, we have created two patches. You can find them under, >>>> >>>> $(PATH_TO_CIMHA)/cimha_src/patches >>>> >>>> Both the patches are independent and no common files will be affected. >>>> Currently, the cimha components are based on the assumption that the >>>> patch *vesper_struct_probedata-0.0.2.patch is applied* to >>>> Vesper-0.0.2. The other patch can be ignored (may be I will remove it >>>> from the repository eventually, if it is not needed). >>>> >>>> Hereby, I describe about both the patches for completeness. Please >>>> review them and give us your feedback. >>>> >>>> Patch #1: vesper_wait_events-0.0.2.patch (can be ignored later) >>>> >>>> change #1: removal of break statement from the end of the for loop in >>>> vsp_wait_events function. >>>> >>>> This change makes the function to keep listening to the RA socket. >>>> This helps us to collect probe output continuously. >>>> >>>> change #2: writing back the event handler's return value to the RA >>>> socket. >>>> >>>> RA socket will be used by Vesper as well as Cimha. The vsp-appd writes >>>> probe output to the RA socket. In such case, the event handler's >>>> return value is of no importance. >>>> >>>> In case of Cimha, grab probe utility (which invokes vsp_wait_events, >>>> and defines the event handler) uses RA socket for communicating with >>>> its client (Heartbeat) to provide the kernel helath assessed by KE >>>> based on the collected probe output. In such case, the event handler's >>>> return value is the KE's return value. >>>> >>>> advantages: this patch enable us to collect probe output continuously >>>> and use RA socket to communicate with clients (in both directions). >>>> >>>> disadvantages: this patch is not sufficient to get additional probe >>>> information like guest id and probe name which gave the output. >>>> >>>> Patch #2: vesper_struct_probedata-0.0.2.patch >>>> >>>> Objective: get more information about the probe output >>>> >>>> To get more information about the probe output, we created a structure >>>> called probe_data in vsp-lib.h which has guest id, probe path and >>>> probe output in it. This structure will be written into the RA socket >>>> by vsp-appd instead of only probe output. >>>> >>>> Changes to Cimha in this commit: >>>> -------------------------------- >>>> We made cimha agent as the single entity to collect the probe output >>>> from vsp-appd and to distribute to all the other Cimha components >>>> (whoever in need). It creates and listens to the RA socket directly >>>> (not using vsp_wait_events). Whenever there is some data available on >>>> RA socket, it collects and writes it into another predefined socket >>>> called probe data socket (created by grab probe utility - a cimha >>>> component), if available. >>>> >>>> TODO: In addition to the above, cimha agent will buffer the latest >>>> (few) output(s) from all the probes which can be then be accessed by >>>> Cimha GUI for displaying purposes. >>>> >>>> Your comments are welcome. Thank you. >>>> >>>> Best Regards, >>>> Vignesh >>>> >>>> >>>> 金 成昊 wrote: >>>>> Dear Vigneswaran-san, >>>>> >>>>> Thank you very much for your commit. 
>>>>> >>>>> Regards, >>>>> Kim >>>>> >>>>>> Dear Kim-san, >>>>>> >>>>>> We have committed our CIMHA code to sf.net git repository. Please >>>>>> pull >>>>>> it and test it. Git repository information is available under >>>>>> <http://sourceforge.net/projects/cimha/develop>. >>>>>> >>>>>> In the cimha-convirt directory, all the *cimha* components are moved >>>>>> inside cimha_src subdirectory. >>>>>> >>>>>> Additions >>>>>> --------- >>>>>> >>>>>> 1. KE as a daemon >>>>>> 2. TL library >>>>>> 3. probes (as of now, only probes/mem_usage.c probe is usable. we >>>>>> will >>>>>> commit some more probes soon) >>>>>> 4. a patch to libvsp (vsp_wait_events) to keep on collecting probe >>>>>> output >>>>>> 5. utilities to interact with vsp-appd to collect probe output and to >>>>>> communicate with KE for taking decision. the grab_probe_output >>>>>> utility >>>>>> uses "crm_resource" command to initiate migration based on events >>>>>> from >>>>>> probes like apache_monitor (event driven). >>>>>> >>>>>> Please see the README files for more details. Install the components >>>>>> by following the instructions in them. >>>>>> >>>>>> Test Environment >>>>>> ---------------- >>>>>> 1. One VM was running on each active and standby node >>>>>> 2. Vesper was configured (drivers loaded etc.) >>>>>> 3. vsp-appd was started >>>>>> 4. Cimha components cimha agent, cmh_ke were started >>>>>> 5. Cimha component ke_ra (OCF script) was configured to heartbeat >>>>>> as a >>>>>> resource which starts the other utilities to communicate with >>>>>> vsp-appd >>>>>> and KE >>>>>> 6. Apache service inside VM was configured as a resource to Heartbeat >>>>>> through cmh_ra >>>>>> 7. a modified version of apache_monitor probe was inserted into guest >>>>>> using cmh-tool (modified the output format, see the probes/README >>>>>> file) >>>>>> 8. mem_usage probe was inserted into guest (outputs % of memory usage >>>>>> periodically) >>>>>> >>>>>> Testing >>>>>> ------- >>>>>> 1. killing of apache process inside guest immediately triggers >>>>>> migration of resources (type: event driven) >>>>>> 2. increasing the memory usage beyond the threshold (95%) >>>>>> specified in >>>>>> the KE's rules_file also caused the migration (type: polling) >>>>>> >>>>>> We have given detailed information in the README files for all the >>>>>> components. Please go through them. >>>>>> >>>>>> TODOs >>>>>> ----- >>>>>> 1. Maintaining the inserted probes information in a file (on a shared >>>>>> disk) so that it can be inserted into the VM in the standby node, on >>>>>> migration. >>>>>> 2. Writing probes and KE rules to improve the KE rulebase. >>>>>> >>>>>> If you have any doubts, please let us know. >>>>>> >>>>>> >>>>>> Regards, >>>>>> vignesh >>>>>> >>>> >>> >> > > |
From: 金 成昊 <sun...@hi...> - 2010-01-12 08:38:53
|
Dear Vigneswaran-san, > Everything went fine. But, when the handle_events function > (vsp-appd.c) tried to call the handler, it gave segmentation fault. > This is due to the fact that, the function address is not valid in > vsp-appd's memory space. Because, the handler is defined inside > cimha_agent daemon's memory space. So, we created the second patch to > get more probe data through the socket itself. vsp-appd daemon and cimha daemon are different user program from the view of kernel. which means vsp-appd and cimha has its own address space. so that, the address of the handler in cimha space is not avalable to vsp-appd space. That results in "Segmentaion Fault". The first draft of vsp-appd came in with plug-in system like PILS in Heartbeat which is acually shared library things. User registers her/his own handler with plug-in library then vsp-appd can access to the handler. As I mentioned in the last mail, it seems heavier for this term development. # May be we can utilize PILS of Heartbeat in vsp-appd as well. Best Regards, Kim > Dear Kim-san, > > 金 成昊 wrote: >> Dear Vigneswaran-san, >> >>> Your comments are welcome. Thank you. >> No problem for your commit. >> Just some comments; >> In our intention with "int (*handle_input)(int fd, struct probe *me)" , >> let user add handle_input per probe. >> Modifying "vsp_add_probe" in vsp-lib.c seems a little heavier for this >> term of the project IMHO. >> That's why your commit looks appropriate. > > Thank you. > > Before creating the second patch, actually we tried modifying the > function vsp_add_probe inorder to make use of the (*handle_input). We > did the following things, > > 1. We defined a probe output handler function in Cimha Agent daemon. > 2. In the vsp_msg_strcut structure, we added one more (long int) field > to hold the address of the probe output handler function (type casted > into long int). > 3. Modified the vsp_add_probe function to take one more (long int) > argument. While invoking that function, Cimha agent will provide the > address of the handler function through this argument. > 4. In _send_add_req function we assigned this address to the > appropriate field in the vsp_msg_struct structure. > 5. We modified the __add_probe function in vsp-appd.c to typecast the > address back to the appropriate function pointer and set it as the > handler_input for that probe. > > Everything went fine. But, when the handle_events function > (vsp-appd.c) tried to call the handler, it gave segmentation fault. > This is due to the fact that, the function address is not valid in > vsp-appd's memory space. Because, the handler is defined inside > cimha_agent daemon's memory space. So, we created the second patch to > get more probe data through the socket itself. > > Note: Typecasting the function address into "long int" and back to > function pointer worked fine as long as the function is part of the > same program which invokes it. > > Please pour your comments on this. > > Forgot to mention in my previous mail that in the last commit, we have > added two more probes cpu_usage.c (% of CPU usage) and process_usage.c > (% of processes). As most of the integration work is over, we are > concentrating on creating more probes and building the knowledge base. > > Thank you. > > Regards, > Vignesh > > >>> Dear Kim-san, >>> >>> We have made another commit to sf.net git repository which adds a >>> patch to Vesper-0.0.2, in addition to some changes to our code. >>> >>> Totally, we have created two patches. 
You can find them under, >>> >>> $(PATH_TO_CIMHA)/cimha_src/patches >>> >>> Both the patches are independent and no common files will be affected. >>> Currently, the cimha components are based on the assumption that the >>> patch *vesper_struct_probedata-0.0.2.patch is applied* to >>> Vesper-0.0.2. The other patch can be ignored (may be I will remove it >>> from the repository eventually, if it is not needed). >>> >>> Hereby, I describe about both the patches for completeness. Please >>> review them and give us your feedback. >>> >>> Patch #1: vesper_wait_events-0.0.2.patch (can be ignored later) >>> >>> change #1: removal of break statement from the end of the for loop in >>> vsp_wait_events function. >>> >>> This change makes the function to keep listening to the RA socket. >>> This helps us to collect probe output continuously. >>> >>> change #2: writing back the event handler's return value to the RA >>> socket. >>> >>> RA socket will be used by Vesper as well as Cimha. The vsp-appd writes >>> probe output to the RA socket. In such case, the event handler's >>> return value is of no importance. >>> >>> In case of Cimha, grab probe utility (which invokes vsp_wait_events, >>> and defines the event handler) uses RA socket for communicating with >>> its client (Heartbeat) to provide the kernel helath assessed by KE >>> based on the collected probe output. In such case, the event handler's >>> return value is the KE's return value. >>> >>> advantages: this patch enable us to collect probe output continuously >>> and use RA socket to communicate with clients (in both directions). >>> >>> disadvantages: this patch is not sufficient to get additional probe >>> information like guest id and probe name which gave the output. >>> >>> Patch #2: vesper_struct_probedata-0.0.2.patch >>> >>> Objective: get more information about the probe output >>> >>> To get more information about the probe output, we created a structure >>> called probe_data in vsp-lib.h which has guest id, probe path and >>> probe output in it. This structure will be written into the RA socket >>> by vsp-appd instead of only probe output. >>> >>> Changes to Cimha in this commit: >>> -------------------------------- >>> We made cimha agent as the single entity to collect the probe output >>> from vsp-appd and to distribute to all the other Cimha components >>> (whoever in need). It creates and listens to the RA socket directly >>> (not using vsp_wait_events). Whenever there is some data available on >>> RA socket, it collects and writes it into another predefined socket >>> called probe data socket (created by grab probe utility - a cimha >>> component), if available. >>> >>> TODO: In addition to the above, cimha agent will buffer the latest >>> (few) output(s) from all the probes which can be then be accessed by >>> Cimha GUI for displaying purposes. >>> >>> Your comments are welcome. Thank you. >>> >>> Best Regards, >>> Vignesh >>> >>> >>> 金 成昊 wrote: >>>> Dear Vigneswaran-san, >>>> >>>> Thank you very much for your commit. >>>> >>>> Regards, >>>> Kim >>>> >>>>> Dear Kim-san, >>>>> >>>>> We have committed our CIMHA code to sf.net git repository. Please >>>>> pull >>>>> it and test it. Git repository information is available under >>>>> <http://sourceforge.net/projects/cimha/develop>. >>>>> >>>>> In the cimha-convirt directory, all the *cimha* components are moved >>>>> inside cimha_src subdirectory. >>>>> >>>>> Additions >>>>> --------- >>>>> >>>>> 1. KE as a daemon >>>>> 2. TL library >>>>> 3. 
probes (as of now, only probes/mem_usage.c probe is usable. we >>>>> will >>>>> commit some more probes soon) >>>>> 4. a patch to libvsp (vsp_wait_events) to keep on collecting probe >>>>> output >>>>> 5. utilities to interact with vsp-appd to collect probe output and to >>>>> communicate with KE for taking decision. the grab_probe_output >>>>> utility >>>>> uses "crm_resource" command to initiate migration based on events >>>>> from >>>>> probes like apache_monitor (event driven). >>>>> >>>>> Please see the README files for more details. Install the components >>>>> by following the instructions in them. >>>>> >>>>> Test Environment >>>>> ---------------- >>>>> 1. One VM was running on each active and standby node >>>>> 2. Vesper was configured (drivers loaded etc.) >>>>> 3. vsp-appd was started >>>>> 4. Cimha components cimha agent, cmh_ke were started >>>>> 5. Cimha component ke_ra (OCF script) was configured to heartbeat >>>>> as a >>>>> resource which starts the other utilities to communicate with >>>>> vsp-appd >>>>> and KE >>>>> 6. Apache service inside VM was configured as a resource to Heartbeat >>>>> through cmh_ra >>>>> 7. a modified version of apache_monitor probe was inserted into guest >>>>> using cmh-tool (modified the output format, see the probes/README >>>>> file) >>>>> 8. mem_usage probe was inserted into guest (outputs % of memory usage >>>>> periodically) >>>>> >>>>> Testing >>>>> ------- >>>>> 1. killing of apache process inside guest immediately triggers >>>>> migration of resources (type: event driven) >>>>> 2. increasing the memory usage beyond the threshold (95%) >>>>> specified in >>>>> the KE's rules_file also caused the migration (type: polling) >>>>> >>>>> We have given detailed information in the README files for all the >>>>> components. Please go through them. >>>>> >>>>> TODOs >>>>> ----- >>>>> 1. Maintaining the inserted probes information in a file (on a shared >>>>> disk) so that it can be inserted into the VM in the standby node, on >>>>> migration. >>>>> 2. Writing probes and KE rules to improve the KE rulebase. >>>>> >>>>> If you have any doubts, please let us know. >>>>> >>>>> >>>>> Regards, >>>>> vignesh >>>>> >>>> >>> >>> >> >> > > -- --- Sungho KIM Linux Technology Center Hitachi, Ltd., Systems Development Laboratory E-mail: sun...@hi... |
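The PILS-style plugin approach Kim describes above can be sketched with plain dlopen()/dlsym(): the handler is compiled into a shared library and loaded by vsp-appd itself, so the resulting function pointer is valid in vsp-appd's own address space. This is only an illustration under assumed names; the plugin file cimha_handler.so and the symbol cimha_handle_input are invented for the example, and a plugin framework such as PILS would normally take care of the lookup.

#include <dlfcn.h>
#include <stdio.h>

struct probe;                                     /* opaque, as in vsp-lib.h */
typedef int (*handle_input_fn)(int fd, struct probe *me);

int main(void)
{
    /* hypothetical plugin built by the user,
       e.g. cc -shared -fPIC -o cimha_handler.so handler.c */
    void *plugin = dlopen("./cimha_handler.so", RTLD_NOW);
    if (!plugin) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* hypothetical symbol name; the framework would normally do this lookup */
    handle_input_fn handler =
        (handle_input_fn)dlsym(plugin, "cimha_handle_input");
    if (!handler) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(plugin);
        return 1;
    }

    /* the pointer obtained this way is valid inside this process, which is
       exactly what a raw address received over a socket cannot be */
    int rc = handler(-1, NULL);       /* dummy arguments for the sketch */
    printf("handler returned %d\n", rc);

    dlclose(plugin);
    return 0;
}

(Link with -ldl; the cast from dlsym()'s void * to a function pointer is the conventional POSIX idiom.)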
From: Vigneswaran R <vi...@at...> - 2010-01-12 06:24:35
|
Dear Kim-san, 金 成昊 wrote: > Dear Vigneswaran-san, > >> Your comments are welcome. Thank you. > No problem for your commit. > Just some comments; > In our intention with "int (*handle_input)(int fd, struct probe *me)" , > let user add handle_input per probe. > Modifying "vsp_add_probe" in vsp-lib.c seems a little heavier for this > term of the project IMHO. > That's why your commit looks appropriate. Thank you. Before creating the second patch, actually we tried modifying the function vsp_add_probe inorder to make use of the (*handle_input). We did the following things, 1. We defined a probe output handler function in Cimha Agent daemon. 2. In the vsp_msg_strcut structure, we added one more (long int) field to hold the address of the probe output handler function (type casted into long int). 3. Modified the vsp_add_probe function to take one more (long int) argument. While invoking that function, Cimha agent will provide the address of the handler function through this argument. 4. In _send_add_req function we assigned this address to the appropriate field in the vsp_msg_struct structure. 5. We modified the __add_probe function in vsp-appd.c to typecast the address back to the appropriate function pointer and set it as the handler_input for that probe. Everything went fine. But, when the handle_events function (vsp-appd.c) tried to call the handler, it gave segmentation fault. This is due to the fact that, the function address is not valid in vsp-appd's memory space. Because, the handler is defined inside cimha_agent daemon's memory space. So, we created the second patch to get more probe data through the socket itself. Note: Typecasting the function address into "long int" and back to function pointer worked fine as long as the function is part of the same program which invokes it. Please pour your comments on this. Forgot to mention in my previous mail that in the last commit, we have added two more probes cpu_usage.c (% of CPU usage) and process_usage.c (% of processes). As most of the integration work is over, we are concentrating on creating more probes and building the knowledge base. Thank you. Regards, Vignesh >> Dear Kim-san, >> >> We have made another commit to sf.net git repository which adds a >> patch to Vesper-0.0.2, in addition to some changes to our code. >> >> Totally, we have created two patches. You can find them under, >> >> $(PATH_TO_CIMHA)/cimha_src/patches >> >> Both the patches are independent and no common files will be affected. >> Currently, the cimha components are based on the assumption that the >> patch *vesper_struct_probedata-0.0.2.patch is applied* to >> Vesper-0.0.2. The other patch can be ignored (may be I will remove it >> from the repository eventually, if it is not needed). >> >> Hereby, I describe about both the patches for completeness. Please >> review them and give us your feedback. >> >> Patch #1: vesper_wait_events-0.0.2.patch (can be ignored later) >> >> change #1: removal of break statement from the end of the for loop in >> vsp_wait_events function. >> >> This change makes the function to keep listening to the RA socket. >> This helps us to collect probe output continuously. >> >> change #2: writing back the event handler's return value to the RA >> socket. >> >> RA socket will be used by Vesper as well as Cimha. The vsp-appd writes >> probe output to the RA socket. In such case, the event handler's >> return value is of no importance. 
>> >> In case of Cimha, grab probe utility (which invokes vsp_wait_events, >> and defines the event handler) uses RA socket for communicating with >> its client (Heartbeat) to provide the kernel helath assessed by KE >> based on the collected probe output. In such case, the event handler's >> return value is the KE's return value. >> >> advantages: this patch enable us to collect probe output continuously >> and use RA socket to communicate with clients (in both directions). >> >> disadvantages: this patch is not sufficient to get additional probe >> information like guest id and probe name which gave the output. >> >> Patch #2: vesper_struct_probedata-0.0.2.patch >> >> Objective: get more information about the probe output >> >> To get more information about the probe output, we created a structure >> called probe_data in vsp-lib.h which has guest id, probe path and >> probe output in it. This structure will be written into the RA socket >> by vsp-appd instead of only probe output. >> >> Changes to Cimha in this commit: >> -------------------------------- >> We made cimha agent as the single entity to collect the probe output >> from vsp-appd and to distribute to all the other Cimha components >> (whoever in need). It creates and listens to the RA socket directly >> (not using vsp_wait_events). Whenever there is some data available on >> RA socket, it collects and writes it into another predefined socket >> called probe data socket (created by grab probe utility - a cimha >> component), if available. >> >> TODO: In addition to the above, cimha agent will buffer the latest >> (few) output(s) from all the probes which can be then be accessed by >> Cimha GUI for displaying purposes. >> >> Your comments are welcome. Thank you. >> >> Best Regards, >> Vignesh >> >> >> 金 成昊 wrote: >>> Dear Vigneswaran-san, >>> >>> Thank you very much for your commit. >>> >>> Regards, >>> Kim >>> >>>> Dear Kim-san, >>>> >>>> We have committed our CIMHA code to sf.net git repository. Please pull >>>> it and test it. Git repository information is available under >>>> <http://sourceforge.net/projects/cimha/develop>. >>>> >>>> In the cimha-convirt directory, all the *cimha* components are moved >>>> inside cimha_src subdirectory. >>>> >>>> Additions >>>> --------- >>>> >>>> 1. KE as a daemon >>>> 2. TL library >>>> 3. probes (as of now, only probes/mem_usage.c probe is usable. we will >>>> commit some more probes soon) >>>> 4. a patch to libvsp (vsp_wait_events) to keep on collecting probe >>>> output >>>> 5. utilities to interact with vsp-appd to collect probe output and to >>>> communicate with KE for taking decision. the grab_probe_output utility >>>> uses "crm_resource" command to initiate migration based on events from >>>> probes like apache_monitor (event driven). >>>> >>>> Please see the README files for more details. Install the components >>>> by following the instructions in them. >>>> >>>> Test Environment >>>> ---------------- >>>> 1. One VM was running on each active and standby node >>>> 2. Vesper was configured (drivers loaded etc.) >>>> 3. vsp-appd was started >>>> 4. Cimha components cimha agent, cmh_ke were started >>>> 5. Cimha component ke_ra (OCF script) was configured to heartbeat as a >>>> resource which starts the other utilities to communicate with vsp-appd >>>> and KE >>>> 6. Apache service inside VM was configured as a resource to Heartbeat >>>> through cmh_ra >>>> 7. 
a modified version of apache_monitor probe was inserted into guest >>>> using cmh-tool (modified the output format, see the probes/README file) >>>> 8. mem_usage probe was inserted into guest (outputs % of memory usage >>>> periodically) >>>> >>>> Testing >>>> ------- >>>> 1. killing of apache process inside guest immediately triggers >>>> migration of resources (type: event driven) >>>> 2. increasing the memory usage beyond the threshold (95%) specified in >>>> the KE's rules_file also caused the migration (type: polling) >>>> >>>> We have given detailed information in the README files for all the >>>> components. Please go through them. >>>> >>>> TODOs >>>> ----- >>>> 1. Maintaining the inserted probes information in a file (on a shared >>>> disk) so that it can be inserted into the VM in the standby node, on >>>> migration. >>>> 2. Writing probes and KE rules to improve the KE rulebase. >>>> >>>> If you have any doubts, please let us know. >>>> >>>> >>>> Regards, >>>> vignesh >>>> >>> >> >> > > |
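The note above, that the cast works only when the handler lives in the same program, is easy to reproduce stand-alone. The sketch below only mirrors the failed approach for illustration; it is not the actual vsp_msg_struct change, and converting a function pointer through long int is itself non-portable C, which is part of why the socket-based approach is the safer design.

#include <stdio.h>

struct probe;                                     /* opaque, as in vsp-lib.h */
typedef int (*handle_input_fn)(int fd, struct probe *me);

static int my_handler(int fd, struct probe *me)
{
    (void)fd; (void)me;
    printf("handler called in the sender's own address space\n");
    return 0;
}

int main(void)
{
    /* roughly what the modified _send_add_req() did: squeeze the handler
       address into a long int field of the message */
    long int wire_value = (long int)my_handler;

    /* roughly what the modified __add_probe() did on the receiving side;
       it works here only because sender and receiver are the same process.
       In vsp-appd the same number points at nothing valid, hence the
       segmentation fault in handle_events(). */
    handle_input_fn handler = (handle_input_fn)wire_value;

    return handler(0, NULL);
}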
From: 金 成昊 <sun...@hi...> - 2010-01-12 03:01:56
|
Dear Vigneswaran-san, > Your comments are welcome. Thank you. No problem for your commit. Just some comments; In our intention with "int (*handle_input)(int fd, struct probe *me)" , let user add handle_input per probe. Modifying "vsp_add_probe" in vsp-lib.c seems a little heavier for this term of the project IMHO. That's why your commit looks appropriate. Thank you very much. Regards, Kim > Dear Kim-san, > > We have made another commit to sf.net git repository which adds a > patch to Vesper-0.0.2, in addition to some changes to our code. > > Totally, we have created two patches. You can find them under, > > $(PATH_TO_CIMHA)/cimha_src/patches > > Both the patches are independent and no common files will be affected. > Currently, the cimha components are based on the assumption that the > patch *vesper_struct_probedata-0.0.2.patch is applied* to > Vesper-0.0.2. The other patch can be ignored (may be I will remove it > from the repository eventually, if it is not needed). > > Hereby, I describe about both the patches for completeness. Please > review them and give us your feedback. > > Patch #1: vesper_wait_events-0.0.2.patch (can be ignored later) > > change #1: removal of break statement from the end of the for loop in > vsp_wait_events function. > > This change makes the function to keep listening to the RA socket. > This helps us to collect probe output continuously. > > change #2: writing back the event handler's return value to the RA > socket. > > RA socket will be used by Vesper as well as Cimha. The vsp-appd writes > probe output to the RA socket. In such case, the event handler's > return value is of no importance. > > In case of Cimha, grab probe utility (which invokes vsp_wait_events, > and defines the event handler) uses RA socket for communicating with > its client (Heartbeat) to provide the kernel helath assessed by KE > based on the collected probe output. In such case, the event handler's > return value is the KE's return value. > > advantages: this patch enable us to collect probe output continuously > and use RA socket to communicate with clients (in both directions). > > disadvantages: this patch is not sufficient to get additional probe > information like guest id and probe name which gave the output. > > Patch #2: vesper_struct_probedata-0.0.2.patch > > Objective: get more information about the probe output > > To get more information about the probe output, we created a structure > called probe_data in vsp-lib.h which has guest id, probe path and > probe output in it. This structure will be written into the RA socket > by vsp-appd instead of only probe output. > > Changes to Cimha in this commit: > -------------------------------- > We made cimha agent as the single entity to collect the probe output > from vsp-appd and to distribute to all the other Cimha components > (whoever in need). It creates and listens to the RA socket directly > (not using vsp_wait_events). Whenever there is some data available on > RA socket, it collects and writes it into another predefined socket > called probe data socket (created by grab probe utility - a cimha > component), if available. > > TODO: In addition to the above, cimha agent will buffer the latest > (few) output(s) from all the probes which can be then be accessed by > Cimha GUI for displaying purposes. > > Your comments are welcome. Thank you. > > Best Regards, > Vignesh > > > 金 成昊 wrote: >> Dear Vigneswaran-san, >> >> Thank you very much for your commit. 
>> >> Regards, >> Kim >> >>> Dear Kim-san, >>> >>> We have committed our CIMHA code to sf.net git repository. Please pull >>> it and test it. Git repository information is available under >>> <http://sourceforge.net/projects/cimha/develop>. >>> >>> In the cimha-convirt directory, all the *cimha* components are moved >>> inside cimha_src subdirectory. >>> >>> Additions >>> --------- >>> >>> 1. KE as a daemon >>> 2. TL library >>> 3. probes (as of now, only probes/mem_usage.c probe is usable. we will >>> commit some more probes soon) >>> 4. a patch to libvsp (vsp_wait_events) to keep on collecting probe >>> output >>> 5. utilities to interact with vsp-appd to collect probe output and to >>> communicate with KE for taking decision. the grab_probe_output utility >>> uses "crm_resource" command to initiate migration based on events from >>> probes like apache_monitor (event driven). >>> >>> Please see the README files for more details. Install the components >>> by following the instructions in them. >>> >>> Test Environment >>> ---------------- >>> 1. One VM was running on each active and standby node >>> 2. Vesper was configured (drivers loaded etc.) >>> 3. vsp-appd was started >>> 4. Cimha components cimha agent, cmh_ke were started >>> 5. Cimha component ke_ra (OCF script) was configured to heartbeat as a >>> resource which starts the other utilities to communicate with vsp-appd >>> and KE >>> 6. Apache service inside VM was configured as a resource to Heartbeat >>> through cmh_ra >>> 7. a modified version of apache_monitor probe was inserted into guest >>> using cmh-tool (modified the output format, see the probes/README file) >>> 8. mem_usage probe was inserted into guest (outputs % of memory usage >>> periodically) >>> >>> Testing >>> ------- >>> 1. killing of apache process inside guest immediately triggers >>> migration of resources (type: event driven) >>> 2. increasing the memory usage beyond the threshold (95%) specified in >>> the KE's rules_file also caused the migration (type: polling) >>> >>> We have given detailed information in the README files for all the >>> components. Please go through them. >>> >>> TODOs >>> ----- >>> 1. Maintaining the inserted probes information in a file (on a shared >>> disk) so that it can be inserted into the VM in the standby node, on >>> migration. >>> 2. Writing probes and KE rules to improve the KE rulebase. >>> >>> If you have any doubts, please let us know. >>> >>> >>> Regards, >>> vignesh >>> >> >> > > > -- --- Sungho KIM Linux Technology Center Hitachi, Ltd., Systems Development Laboratory E-mail: sun...@hi... |
From: Vigneswaran R <vi...@at...> - 2010-01-11 13:25:37
|
Dear Kim-san, We have made another commit to sf.net git repository which adds a patch to Vesper-0.0.2, in addition to some changes to our code. Totally, we have created two patches. You can find them under, $(PATH_TO_CIMHA)/cimha_src/patches Both the patches are independent and no common files will be affected. Currently, the cimha components are based on the assumption that the patch *vesper_struct_probedata-0.0.2.patch is applied* to Vesper-0.0.2. The other patch can be ignored (may be I will remove it from the repository eventually, if it is not needed). Hereby, I describe about both the patches for completeness. Please review them and give us your feedback. Patch #1: vesper_wait_events-0.0.2.patch (can be ignored later) change #1: removal of break statement from the end of the for loop in vsp_wait_events function. This change makes the function to keep listening to the RA socket. This helps us to collect probe output continuously. change #2: writing back the event handler's return value to the RA socket. RA socket will be used by Vesper as well as Cimha. The vsp-appd writes probe output to the RA socket. In such case, the event handler's return value is of no importance. In case of Cimha, grab probe utility (which invokes vsp_wait_events, and defines the event handler) uses RA socket for communicating with its client (Heartbeat) to provide the kernel helath assessed by KE based on the collected probe output. In such case, the event handler's return value is the KE's return value. advantages: this patch enable us to collect probe output continuously and use RA socket to communicate with clients (in both directions). disadvantages: this patch is not sufficient to get additional probe information like guest id and probe name which gave the output. Patch #2: vesper_struct_probedata-0.0.2.patch Objective: get more information about the probe output To get more information about the probe output, we created a structure called probe_data in vsp-lib.h which has guest id, probe path and probe output in it. This structure will be written into the RA socket by vsp-appd instead of only probe output. Changes to Cimha in this commit: -------------------------------- We made cimha agent as the single entity to collect the probe output from vsp-appd and to distribute to all the other Cimha components (whoever in need). It creates and listens to the RA socket directly (not using vsp_wait_events). Whenever there is some data available on RA socket, it collects and writes it into another predefined socket called probe data socket (created by grab probe utility - a cimha component), if available. TODO: In addition to the above, cimha agent will buffer the latest (few) output(s) from all the probes which can be then be accessed by Cimha GUI for displaying purposes. Your comments are welcome. Thank you. Best Regards, Vignesh 金 成昊 wrote: > Dear Vigneswaran-san, > > Thank you very much for your commit. > > Regards, > Kim > >> Dear Kim-san, >> >> We have committed our CIMHA code to sf.net git repository. Please pull >> it and test it. Git repository information is available under >> <http://sourceforge.net/projects/cimha/develop>. >> >> In the cimha-convirt directory, all the *cimha* components are moved >> inside cimha_src subdirectory. >> >> Additions >> --------- >> >> 1. KE as a daemon >> 2. TL library >> 3. probes (as of now, only probes/mem_usage.c probe is usable. we will >> commit some more probes soon) >> 4. a patch to libvsp (vsp_wait_events) to keep on collecting probe output >> 5. 
utilities to interact with vsp-appd to collect probe output and to >> communicate with KE for taking decision. the grab_probe_output utility >> uses "crm_resource" command to initiate migration based on events from >> probes like apache_monitor (event driven). >> >> Please see the README files for more details. Install the components >> by following the instructions in them. >> >> Test Environment >> ---------------- >> 1. One VM was running on each active and standby node >> 2. Vesper was configured (drivers loaded etc.) >> 3. vsp-appd was started >> 4. Cimha components cimha agent, cmh_ke were started >> 5. Cimha component ke_ra (OCF script) was configured to heartbeat as a >> resource which starts the other utilities to communicate with vsp-appd >> and KE >> 6. Apache service inside VM was configured as a resource to Heartbeat >> through cmh_ra >> 7. a modified version of apache_monitor probe was inserted into guest >> using cmh-tool (modified the output format, see the probes/README file) >> 8. mem_usage probe was inserted into guest (outputs % of memory usage >> periodically) >> >> Testing >> ------- >> 1. killing of apache process inside guest immediately triggers >> migration of resources (type: event driven) >> 2. increasing the memory usage beyond the threshold (95%) specified in >> the KE's rules_file also caused the migration (type: polling) >> >> We have given detailed information in the README files for all the >> components. Please go through them. >> >> TODOs >> ----- >> 1. Maintaining the inserted probes information in a file (on a shared >> disk) so that it can be inserted into the VM in the standby node, on >> migration. >> 2. Writing probes and KE rules to improve the KE rulebase. >> >> If you have any doubts, please let us know. >> >> >> Regards, >> vignesh >> > > |
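A rough consumer-side sketch of what Patch #2 implies: vsp-appd writes fixed-size probe_data records to the RA socket, and the cimha agent, which owns the socket, reads and forwards them. The field names and sizes below, the socket path /tmp/vsp_ra_socket and the use of a stream Unix socket are assumptions made for the example; the authoritative definition is the struct added to vsp-lib.h.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

struct probe_data {                /* assumed layout, not the one in vsp-lib.h */
    int  guest_id;                 /* which guest the output came from         */
    char probe_path[256];          /* which probe produced the output          */
    char output[1024];             /* the probe output itself                  */
};

int main(void)
{
    const char *ra_socket = "/tmp/vsp_ra_socket";   /* placeholder path */

    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    if (srv < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, ra_socket, sizeof(addr.sun_path) - 1);

    /* the agent creates and listens to the RA socket itself */
    unlink(ra_socket);
    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(srv, 1) < 0) {
        perror("bind/listen");
        return 1;
    }

    int fd = accept(srv, NULL, NULL);   /* vsp-appd connects and writes records */
    if (fd < 0) { perror("accept"); return 1; }

    struct probe_data pd;
    while (read(fd, &pd, sizeof(pd)) == (ssize_t)sizeof(pd)) {
        printf("guest %d, probe %s: %s\n", pd.guest_id, pd.probe_path, pd.output);
        /* a cimha-agent-like consumer would forward pd to the probe data
           socket and keep the latest output per probe for the GUI here */
    }

    close(fd);
    close(srv);
    return 0;
}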
From: 金 成昊 <sun...@hi...> - 2010-01-04 23:49:22
|
Dear Vigneswaran-san, Thank you very much for your commit. Regards, Kim > Dear Kim-san, > > We have committed our CIMHA code to sf.net git repository. Please pull > it and test it. Git repository information is available under > <http://sourceforge.net/projects/cimha/develop>. > > In the cimha-convirt directory, all the *cimha* components are moved > inside cimha_src subdirectory. > > Additions > --------- > > 1. KE as a daemon > 2. TL library > 3. probes (as of now, only probes/mem_usage.c probe is usable. we will > commit some more probes soon) > 4. a patch to libvsp (vsp_wait_events) to keep on collecting probe output > 5. utilities to interact with vsp-appd to collect probe output and to > communicate with KE for taking decision. the grab_probe_output utility > uses "crm_resource" command to initiate migration based on events from > probes like apache_monitor (event driven). > > Please see the README files for more details. Install the components > by following the instructions in them. > > Test Environment > ---------------- > 1. One VM was running on each active and standby node > 2. Vesper was configured (drivers loaded etc.) > 3. vsp-appd was started > 4. Cimha components cimha agent, cmh_ke were started > 5. Cimha component ke_ra (OCF script) was configured to heartbeat as a > resource which starts the other utilities to communicate with vsp-appd > and KE > 6. Apache service inside VM was configured as a resource to Heartbeat > through cmh_ra > 7. a modified version of apache_monitor probe was inserted into guest > using cmh-tool (modified the output format, see the probes/README file) > 8. mem_usage probe was inserted into guest (outputs % of memory usage > periodically) > > Testing > ------- > 1. killing of apache process inside guest immediately triggers > migration of resources (type: event driven) > 2. increasing the memory usage beyond the threshold (95%) specified in > the KE's rules_file also caused the migration (type: polling) > > We have given detailed information in the README files for all the > components. Please go through them. > > TODOs > ----- > 1. Maintaining the inserted probes information in a file (on a shared > disk) so that it can be inserted into the VM in the standby node, on > migration. > 2. Writing probes and KE rules to improve the KE rulebase. > > If you have any doubts, please let us know. > > > Regards, > vignesh > -- --- Sungho KIM Linux Technology Center Hitachi, Ltd., Systems Development Laboratory E-mail: sun...@hi... |
From: Vigneswaran R <vi...@at...> - 2009-12-31 13:28:08
|
Dear Kim-san, We have committed our CIMHA code to sf.net git repository. Please pull it and test it. Git repository information is available under <http://sourceforge.net/projects/cimha/develop>. In the cimha-convirt directory, all the *cimha* components are moved inside cimha_src subdirectory. Additions --------- 1. KE as a daemon 2. TL library 3. probes (as of now, only probes/mem_usage.c probe is usable. we will commit some more probes soon) 4. a patch to libvsp (vsp_wait_events) to keep on collecting probe output 5. utilities to interact with vsp-appd to collect probe output and to communicate with KE for taking decision. the grab_probe_output utility uses "crm_resource" command to initiate migration based on events from probes like apache_monitor (event driven). Please see the README files for more details. Install the components by following the instructions in them. Test Environment ---------------- 1. One VM was running on each active and standby node 2. Vesper was configured (drivers loaded etc.) 3. vsp-appd was started 4. Cimha components cimha agent, cmh_ke were started 5. Cimha component ke_ra (OCF script) was configured to heartbeat as a resource which starts the other utilities to communicate with vsp-appd and KE 6. Apache service inside VM was configured as a resource to Heartbeat through cmh_ra 7. a modified version of apache_monitor probe was inserted into guest using cmh-tool (modified the output format, see the probes/README file) 8. mem_usage probe was inserted into guest (outputs % of memory usage periodically) Testing ------- 1. killing of apache process inside guest immediately triggers migration of resources (type: event driven) 2. increasing the memory usage beyond the threshold (95%) specified in the KE's rules_file also caused the migration (type: polling) We have given detailed information in the README files for all the components. Please go through them. TODOs ----- 1. Maintaining the inserted probes information in a file (on a shared disk) so that it can be inserted into the VM in the standby node, on migration. 2. Writing probes and KE rules to improve the KE rulebase. If you have any doubts, please let us know. Regards, vignesh |
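For reference, the measurement that mem_usage reports (percentage of memory used inside the guest) can be approximated from /proc/meminfo as below. This is only a stand-alone userspace sketch; the real probes/mem_usage.c is delivered into the guest through Vesper, and its exact output format is the one documented in probes/README.

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("/proc/meminfo"); return 1; }

    char line[256];
    long memtotal = -1, memfree = -1, buffers = 0, cached = 0, v;

    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "MemTotal: %ld kB", &v) == 1)      memtotal = v;
        else if (sscanf(line, "MemFree: %ld kB", &v) == 1)  memfree  = v;
        else if (sscanf(line, "Buffers: %ld kB", &v) == 1)  buffers  = v;
        else if (sscanf(line, "Cached: %ld kB", &v) == 1)   cached   = v;
    }
    fclose(f);

    if (memtotal <= 0 || memfree < 0) {
        fprintf(stderr, "could not parse /proc/meminfo\n");
        return 1;
    }

    /* treat buffers/cache as reclaimable, i.e. not "used" */
    long used = memtotal - memfree - buffers - cached;
    double pct = 100.0 * (double)used / (double)memtotal;

    /* a KE rule such as the 95% memory threshold in rules_file would be
       compared against a number like this one */
    printf("mem_usage %.1f\n", pct);
    return 0;
}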
From: 金 成昊 <sun...@hi...> - 2009-12-11 00:05:19
|
Dear Vigneswaran-san, Thank you for your reply. > However, if we cannot migrate the entire VM and we need to start > services on the standby VM (on the standby node), as of now, I don't > know how to sync the process state. We will explore this and let you > know. we do not have to hang around VMs. In the case of 2 physical servers with active/standby, We need some ways to sync the process state of the service running on the active node to stanby node memory. Then we can speed up the failover because migrated service can restart from where it failed down. > We will explore this and let you know. Thank you very much. Best regards, Kim Vigneswaran R さんは書きました: > Dear Kim-san, > > 金 成昊 wrote: >> Dear Vigneswaran-san, >> >> Some helps in finding some open source project involving >> process state sync in HA. >> >> Do you have any idea about that? >> In HA with active/standby mode, usually >> process will be restart on the migrated node. >> In that case, failover time could impact realtime application. >> So that, process state like shared memory data will be >> sync'ed into standby node while process is running in active node. >> >> In the case of disk data, we already have DRBD solution but >> not in case of in-memory data for process. >> For product, Oracle tentimes in-memory db available. If you have any >> info about that, please let us know. > > If we do a live migration of Xen VM from active node to the passive > node, the in-memory data will also be copied. We should keep the VM's > hard disk image in a shared disk. > > However, if we cannot migrate the entire VM and we need to start > services on the standby VM (on the standby node), as of now, I don't > know how to sync the process state. We will explore this and let you > know. > > Thank you. > Vignesh > >> Vigneswaran R さんは書きました: >>> Dear Kim-san, >>> >>> 金 成昊 wrote: >>>> Dear Vigneswaran-san, >>>> >>>>> Regardless of whether we are going to use Vesper RA or Vesper Plugin, >>>>> we need to start/load the driver components (#1). >>>> The form of the drivers will be different when use Vesper Plugin. >>>> Because Vesper Plugin does not have to have frontend/backend drivers. >>>> At current stage, Vesper drivers for the plugin >>>> are to be developed from the scratch for the first in my mind. >>>> Then the integration stage comes in. >>>> It could be a big turn around for the vesper project. >>>> But for this term, RA is good enough for evaluating our usecase with >>>> Cimha, Vesper and KE. >>>> Is it OK to you? >>> Ok. >>> >>> Best Regards, >>> Vignesh >>> >>>> Best regards, >>>> Kim >>>> >>>> Vigneswaran R さんは書きました: >>>>> Dear Kim-san, >>>>> >>>>> 金 成昊 wrote: >>>>>> Dear Vigneswaran-san, >>>>>> >>>>>>> However, if Vesper and Cimha are not dependent on heartbeat version >>>>>>> 2.0.8, IMHO we can upgrade to the latest heartbeat version which >>>>>>> has >>>>>>> SystemHealth daemon. If you thought of some issues in doing so, >>>>>>> please >>>>>>> share with us. >>>>>> Actually, Some considerations there. >>>>>> (1) version topic: >>>>>> -As you know, Xen domain 0 should be kernel 2.6.20 with Fedora. >>>>>> Now with current kernel version 2.6.31, we can use recent >>>>>> kernel >>>>>> as dom0. >>>>>> however, I am not sure 2.6.31 dom0 does well for us. >>>>>> admittedly, I never tested 2.6.31 dom0. >>>>>> What I'd like to say is that there is a need to test if >>>>>> Pacemaker 1.0 and higher version can run in Fedora7 with >>>>>> 2.6.20 >>>>>> dom0 for the first. >>>>> You are right. Sometime back we used heartbeat v2.1.3 on Fedora 7. 
>>>>> Though we haven't done exhaustive testing, we didn't face any >>>>> problem. >>>>> >>>>>> (2)Vesper as RA or Vesper as Plugins for systemhealth: >>>>>> -Firstly appologies for confusion at the last mail. >>>>>> If we use Vesper for monitoring VMs, IMHO, >>>>>> the best practice is RA. >>>>>> the reason is; >>>>>> --migration: as you know, if VMs has fault, we >>>>>> need >>>>>> to migrate the VMs to >>>>>> the standby with >>>>>> the same state of VMs, I mean probes inserted should be >>>>>> migrated as well. >>>>>> Therefore, we think that Vesper should be migratable in >>>>>> terms >>>>>> of probes controller. >>>>>> However, Plugins or components in heartbeat are cluster >>>>>> aware >>>>>> softwares. >>>>>> Simply put, they are *heartbeat* itself. heartbeat is >>>>>> not to >>>>>> be migrated but >>>>>> synchronized. >>>>> Ok. >>>>> >>>>>> -Other plan in my mind is, Vesper as a part of SystemHealth. >>>>>> The different thing to Vesper RA is that; >>>>>> Vesper monitors *host machine* itself not VMs only. >>>>>> Vesper inserts probes into a physical machine where >>>>>> heartbeat is >>>>>> running. >>>>>> then notify SystemHealth daemon. It can cooperate >>>>>> with >>>>>> hareware monitoring tools like mce and >>>>>> other smart >>>>>> hareware-device to report hardware faults, to monitor the >>>>>> host >>>>>> machine. >>>>>> we sure hardware-device can not cover all faults but only >>>>>> from >>>>>> hareware faults. >>>>>> So, with cooperation with Vepser, we can monitor hareware >>>>>> layer >>>>>> and OS layer as well. >>>>>> even application layer too. >>>>> I will try to elaborate the entire picture. Please let me know >>>>> whether >>>>> I am correct; >>>>> >>>>> When we say Vesper, I can think of 3 logical components. >>>>> >>>>> 1. The drivers (frontend and backend), application layer daemon >>>>> (vspappd) and probes >>>>> 2. Vesper RA (wrapper over vspraex) >>>>> 3. Vesper Plugin >>>>> >>>>> Regardless of whether we are going to use Vesper RA or Vesper Plugin, >>>>> we need to start/load the driver components (#1). >>>>> >>>>> When we use probes for monitoring guest (services/kernel health), use >>>>> Vesper RA (#2). >>>>> >>>>> When we use probes for monitoring Host (h/w, services/kernel health), >>>>> usw Vesper Plugin (#3). >>>>> >>>>> Am I correct? >>>>> >>>>>> In summary, for this term project from Oct to March 2010, the best >>>>>> practice is RA for Vesper as we done earlier. >>>>>> For the next step, Plugins or a part of SystemHealth to monitor >>>>>> hostmachine. >>>>>> Furthermore, Vesper RA and Plugins intergration. >>>>>> RA for VMs health and Plugin for Host health. >>>>>> So, our plan for this term is to test our usecase system with >>>>>> Vesper RA >>>>>> and KE. >>>>>> Next term we continue; >>>>>> -KVM base development env. >>>>>> -Recent Version of Pacemaker + (heartbeat/ openais) >>>>>> #openais is another cluster communcation infra as replacement of >>>>>> heartbeat infra >>>>>> #http://openais.org >>>>>> #SUSE and Redhat has been supporting openais. >>>>>> #Until March of this year, SUSE supported heartbeat but not now. >>>>>> -SystemHealth implementation with Pacemaker. >>>>>> -Integration of RA and Plugins. >>>>> Ok. 
>>>>> >>>>> Best Regards, >>>>> Vignesh >>>>> >>>>> >>>>> >>>>>>> Dear Kim-san, >>>>>>> >>>>>>> 金 成昊 wrote: >>>>>>>> Dear Vigneswaran-san and Viswanath-san, >>>>>>>> >>>>>>>> Regarding VESPER-Plugin for heartbeat-2.0.8, >>>>>>>> we had better implementing it as vsp_appd daemon process >>>>>>>> just like "SystemHealth" daemon in Pacemaker. >>>>>>>> Please refer to http://clusterlabs.org/wiki/SystemHealth. >>>>>>>> >>>>>>>> If the version having SystemHealth daemon, >>>>>>>> we can implement VESPER-Plugin for the daemon. >>>>>>>> However, we dont have any infra like SystemHealth daemon for >>>>>>>> heartbeat-2.0.8 >>>>>>>> to fit VESPER-Plugin in. >>>>>>>> It only has infras for plug-ins as below, >>>>>>>> -HBAuth #for msg digest >>>>>>>> -HBcomm #communication media >>>>>>>> -RAexec #RA exec such as LSB, OCF, etc. >>>>>>>> -quroum[d] #quorum server >>>>>>>> -etc.. >>>>>>>> >>>>>>>> So, we decide to implement vsp_appd daemon to monitor kernel >>>>>>>> health >>>>>>>> for >>>>>>>> heartbeat-2.0.8 >>>>>>>> Then we can simply add a line "respawn root >>>>>>>> /where_to_vsp_appd/vsp_appd" >>>>>>>> in /etc/ha.d/ha.cf only if we want VESPER run as a systemhealth >>>>>>>> deamon >>>>>>>> rather than as RA. >>>>>>>> VESPER, then, invokes "attrd" to notify any probe events. >>>>>>>> IIRC, Other subsystem like crmd, stonish, lrmd, etc are >>>>>>>> respwaning by >>>>>>>> the keword "crm on" in /etc/ha.d/ha.cf as default. >>>>>>>> >>>>>>>> Agree for "vsp_appd" daemon? >>>>>>> Yes, we agree with you. >>>>>>> >>>>>>>> Any comments welcome. >>>>>>>> we are looking forward to hear your idea. >>>>>>> However, if Vesper and Cimha are not dependent on heartbeat version >>>>>>> 2.0.8, IMHO we can upgrade to the latest heartbeat version which >>>>>>> has >>>>>>> SystemHealth daemon. If you thought of some issues in doing so, >>>>>>> please >>>>>>> share with us. >>>>>>> >>>>>>> Thank you. >>>>>>> Vignesh >>>>>>> >>>>>>>> Thank you. >>>>>>>> Kim >>>>>>>> >>>>>>>> >>>>>>>>> Dear Kim-san, >>>>>>>>> >>>>>>>>> >>>>>>>>> It is the upstream one , to be specific the verison is >>>>>>>>> 2.0.8-1 >>>>>>>>> >>>>>>>>> Regards, >>>>>>>>> Viswanath T K >>>>>>>>> >>>>>>>>> On Tue, 2009-11-10 at 14:49 +0900, 金 成昊 wrote: >>>>>>>>> >>>>>>>>>> Dear Viswanath-san, >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> The version of heartbeat used for CIMHA is heartbeat-2.0.8 . >>>>>>>>>>> >>>>>>>>>> Thank you for your reply. >>>>>>>>>> Is it upstream one or fedora distro? >>>>>>>>>> >>>>>>>>>> Thank you. >>>>>>>>>> Regards, >>>>>>>>>> Kim >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Dear Kim-san, >>>>>>>>>>> >>>>>>>>>>> The version of heartbeat used for CIMHA is >>>>>>>>>>> heartbeat-2.0.8 . >>>>>>>>>>> Please let us know if you need any other information. >>>>>>>>>>> >>>>>>>>>>> Regards, >>>>>>>>>>> Viswanath >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, 2009-11-10 at 11:58 +0900, 金 成昊 wrote: >>>>>>>>>>> >>>>>>>>>>>> Dear Vigneswaran-san, >>>>>>>>>>>> >>>>>>>>>>>> Could you tell me which version of Heartbeat you are using for >>>>>>>>>>>> CIMHA? >>>>>>>>>>>> I need that information to test VESPER plugin. >>>>>>>>>>>> >>>>>>>>>>>> Thank you. >>>>>>>>>>>> Regards, >>>>>>>>>>>> Kim >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> Dear Kim-san, >>>>>>>>>>>>> >>>>>>>>>>>>> We are going through the heartbeat documents for better >>>>>>>>>>>>> understanding >>>>>>>>>>>>> of it and building a dedicated data center like setup >>>>>>>>>>>>> (starting >>>>>>>>>>>>> with 2 >>>>>>>>>>>>> machines only) for testing heartbeat and our applications. 
>>>>>>>>>>>>> Previously >>>>>>>>>>>>> we used our machines and did the testing. >>>>>>>>>>>>> >>>>>>>>>>>>> I thought of sharing the following URLs which has information >>>>>>>>>>>>> about >>>>>>>>>>>>> heartbeat testing procedures (you may know it already). >>>>>>>>>>>>> >>>>>>>>>>>>> http://www.linux-ha.org/CTS >>>>>>>>>>>>> http://www.linux-ha.org/DevCorner/TestPlan >>>>>>>>>>>>> >>>>>>>>>>>>> Best Regards, >>>>>>>>>>>>> Vignesh >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>> >>> >>> >> >> > > > -- --- Sungho KIM Linux Technology Center Hitachi, Ltd., Systems Development Laboratory E-mail: sun...@hi... |
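The ha.cf change discussed in the quoted thread boils down to two directives; the excerpt below only echoes what the mails already say. A working /etc/ha.d/ha.cf needs more than this (node names, communication media, keepalive and auth settings), and /where_to_vsp_appd/vsp_appd is the placeholder path used in the mail.

# /etc/ha.d/ha.cf (excerpt)
crm on                                      # respawns crmd, lrmd, stonithd, etc. as default
respawn root /where_to_vsp_appd/vsp_appd    # run vsp_appd as a heartbeat-managed system-health daemon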
From: Vigneswaran R <vi...@at...> - 2009-12-10 12:07:59
|
Dear Kim-san, 金 成昊 wrote: > Dear Vigneswaran-san, > > Some helps in finding some open source project involving > process state sync in HA. > > Do you have any idea about that? > In HA with active/standby mode, usually > process will be restart on the migrated node. > In that case, failover time could impact realtime application. > So that, process state like shared memory data will be > sync'ed into standby node while process is running in active node. > > In the case of disk data, we already have DRBD solution but > not in case of in-memory data for process. > > For product, Oracle tentimes in-memory db available. > If you have any info about that, please let us know. If we do a live migration of Xen VM from active node to the passive node, the in-memory data will also be copied. We should keep the VM's hard disk image in a shared disk. However, if we cannot migrate the entire VM and we need to start services on the standby VM (on the standby node), as of now, I don't know how to sync the process state. We will explore this and let you know. Thank you. Vignesh > Vigneswaran R さんは書きました: >> Dear Kim-san, >> >> 金 成昊 wrote: >>> Dear Vigneswaran-san, >>> >>>> Regardless of whether we are going to use Vesper RA or Vesper Plugin, >>>> we need to start/load the driver components (#1). >>> The form of the drivers will be different when use Vesper Plugin. >>> Because Vesper Plugin does not have to have frontend/backend drivers. >>> At current stage, Vesper drivers for the plugin >>> are to be developed from the scratch for the first in my mind. >>> Then the integration stage comes in. >>> It could be a big turn around for the vesper project. >>> But for this term, RA is good enough for evaluating our usecase with >>> Cimha, Vesper and KE. >>> Is it OK to you? >> Ok. >> >> Best Regards, >> Vignesh >> >>> Best regards, >>> Kim >>> >>> Vigneswaran R さんは書きました: >>>> Dear Kim-san, >>>> >>>> 金 成昊 wrote: >>>>> Dear Vigneswaran-san, >>>>> >>>>>> However, if Vesper and Cimha are not dependent on heartbeat version >>>>>> 2.0.8, IMHO we can upgrade to the latest heartbeat version which has >>>>>> SystemHealth daemon. If you thought of some issues in doing so, >>>>>> please >>>>>> share with us. >>>>> Actually, Some considerations there. >>>>> (1) version topic: >>>>> -As you know, Xen domain 0 should be kernel 2.6.20 with Fedora. >>>>> Now with current kernel version 2.6.31, we can use recent kernel >>>>> as dom0. >>>>> however, I am not sure 2.6.31 dom0 does well for us. >>>>> admittedly, I never tested 2.6.31 dom0. >>>>> What I'd like to say is that there is a need to test if >>>>> Pacemaker 1.0 and higher version can run in Fedora7 with 2.6.20 >>>>> dom0 for the first. >>>> You are right. Sometime back we used heartbeat v2.1.3 on Fedora 7. >>>> Though we haven't done exhaustive testing, we didn't face any problem. >>>> >>>>> (2)Vesper as RA or Vesper as Plugins for systemhealth: >>>>> -Firstly appologies for confusion at the last mail. >>>>> If we use Vesper for monitoring VMs, IMHO, >>>>> the best practice is RA. >>>>> the reason is; >>>>> --migration: as you know, if VMs has fault, we need >>>>> to migrate the VMs to >>>>> the standby with >>>>> the same state of VMs, I mean probes inserted should be >>>>> migrated as well. >>>>> Therefore, we think that Vesper should be migratable in >>>>> terms >>>>> of probes controller. >>>>> However, Plugins or components in heartbeat are cluster >>>>> aware >>>>> softwares. >>>>> Simply put, they are *heartbeat* itself. 
> Vigneswaran R wrote:
>> Dear Kim-san,
>>
>> 金 成昊 wrote:
>>> Dear Vigneswaran-san,
>>>
>>>> Regardless of whether we are going to use Vesper RA or Vesper Plugin, we need to start/load the driver components (#1).
>>> The form of the drivers will be different when we use the Vesper Plugin, because the Vesper Plugin does not have to have frontend/backend drivers.
>>> At the current stage, my idea is that the Vesper drivers for the plugin are to be developed from scratch first; then the integration stage comes.
>>> It could be a big turnaround for the Vesper project.
>>> But for this term, the RA is good enough for evaluating our use case with Cimha, Vesper and KE.
>>> Is it OK with you?
>> Ok.
>>
>> Best Regards,
>> Vignesh
>>
>>> Best regards,
>>> Kim
>>>
>>> Vigneswaran R wrote:
>>>> Dear Kim-san,
>>>>
>>>> 金 成昊 wrote:
>>>>> Dear Vigneswaran-san,
>>>>>
>>>>>> However, if Vesper and Cimha are not dependent on heartbeat version 2.0.8, IMHO we can upgrade to the latest heartbeat version, which has the SystemHealth daemon. If you have thought of some issues in doing so, please share them with us.
>>>>> Actually, there are some considerations.
>>>>> (1) Version topic:
>>>>> - As you know, Xen domain 0 should be kernel 2.6.20 with Fedora.
>>>>>   Now, with the current kernel version 2.6.31, we could use a recent kernel as dom0; however, I am not sure a 2.6.31 dom0 would work well for us. Admittedly, I have never tested a 2.6.31 dom0.
>>>>>   What I'd like to say is that we first need to test whether Pacemaker 1.0 and higher versions can run on Fedora 7 with a 2.6.20 dom0.
>>>> You are right. Some time back we used heartbeat v2.1.3 on Fedora 7. Though we haven't done exhaustive testing, we didn't face any problem.
>>>>
>>>>> (2) Vesper as RA, or Vesper as a plugin for SystemHealth:
>>>>> - Firstly, apologies for the confusion in the last mail.
>>>>>   If we use Vesper for monitoring VMs, IMHO the best practice is the RA. The reason is:
>>>>>   -- migration: as you know, if a VM has a fault, we need to migrate the VM to the standby with the same state, which means the inserted probes should be migrated as well.
>>>>>      Therefore, we think that Vesper, as the probes controller, should be migratable.
>>>>>      However, plugins or components in heartbeat are cluster-aware software; simply put, they are *heartbeat* itself. heartbeat is not to be migrated but synchronized.
>>>> Ok.
>>>>
>>>>> - The other plan in my mind is Vesper as a part of SystemHealth.
>>>>>   The difference from the Vesper RA is that Vesper monitors the *host machine* itself, not only the VMs.
>>>>>   Vesper inserts probes into the physical machine where heartbeat is running and then notifies the SystemHealth daemon.
>>>>>   It can cooperate with hardware monitoring tools like mce and other smart hardware devices to report hardware faults and to monitor the host machine.
>>>>>   Of course, hardware devices cannot cover all faults, only hardware faults. So, in cooperation with Vesper, we can monitor the hardware layer and the OS layer as well, and even the application layer too.
>>>> I will try to elaborate the entire picture. Please let me know whether I am correct:
>>>>
>>>> When we say Vesper, I can think of 3 logical components.
>>>>
>>>> 1. The drivers (frontend and backend), the application layer daemon (vspappd) and the probes
>>>> 2. Vesper RA (a wrapper over vspraex)
>>>> 3. Vesper Plugin
>>>>
>>>> Regardless of whether we are going to use the Vesper RA or the Vesper Plugin, we need to start/load the driver components (#1).
>>>>
>>>> When we use probes for monitoring the guest (services/kernel health), we use the Vesper RA (#2).
>>>>
>>>> When we use probes for monitoring the host (h/w, services/kernel health), we use the Vesper Plugin (#3).
>>>>
>>>> Am I correct?
>>>>
>>>>> In summary, for this term project from Oct to March 2010, the best practice is the RA for Vesper, as we did earlier.
>>>>> For the next step, the plugin or a part of SystemHealth to monitor the host machine.
>>>>> Furthermore, Vesper RA and plugin integration: the RA for the VMs' health and the plugin for the host's health.
>>>>> So, our plan for this term is to test our use-case system with the Vesper RA and KE.
>>>>> Next term we continue with:
>>>>> - a KVM-based development environment.
>>>>> - a recent version of Pacemaker + (heartbeat / openais)
>>>>>   # openais is another cluster communication infrastructure, a replacement for the heartbeat infrastructure
>>>>>   # http://openais.org
>>>>>   # SUSE and Red Hat have been supporting openais.
>>>>>   # Until March of this year, SUSE supported heartbeat, but not any more.
>>>>> - a SystemHealth implementation with Pacemaker.
>>>>> - integration of the RA and the plugin.
>>>> Ok.
>>>>
>>>> Best Regards,
>>>> Vignesh
>>>>
>>>>>> Dear Kim-san,
>>>>>>
>>>>>> 金 成昊 wrote:
>>>>>>> Dear Vigneswaran-san and Viswanath-san,
>>>>>>>
>>>>>>> Regarding the VESPER-Plugin for heartbeat-2.0.8, we had better implement it as a vsp_appd daemon process, just like the "SystemHealth" daemon in Pacemaker.
>>>>>>> Please refer to http://clusterlabs.org/wiki/SystemHealth.
>>>>>>>
>>>>>>> If the version has the SystemHealth daemon, we can implement the VESPER-Plugin for that daemon.
>>>>>>> However, heartbeat-2.0.8 does not have any infrastructure like the SystemHealth daemon to fit the VESPER-Plugin in.
>>>>>>> It only has infrastructure for plug-ins as below:
>>>>>>> - HBAuth    # for message digest
>>>>>>> - HBcomm    # communication media
>>>>>>> - RAexec    # RA exec such as LSB, OCF, etc.
>>>>>>> - quorum[d] # quorum server
>>>>>>> - etc.
>>>>>>>
>>>>>>> So, we decided to implement the vsp_appd daemon to monitor kernel health for heartbeat-2.0.8.
>>>>>>> Then we can simply add the line "respawn root /where_to_vsp_appd/vsp_appd" in /etc/ha.d/ha.cf, only if we want VESPER to run as a system-health daemon rather than as an RA.
>>>>>>> VESPER, then, invokes "attrd" to notify any probe events.
>>>>>>> IIRC, the other subsystems like crmd, stonithd, lrmd, etc. are respawned by the keyword "crm on" in /etc/ha.d/ha.cf by default.
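
As a rough sketch of the configuration described in the quoted mail above: the vsp_appd path is taken from the mail itself, while the attribute name and value, and the exact attrd_updater options, are placeholders that may differ on heartbeat 2.0.8.

    # /etc/ha.d/ha.cf (excerpt)
    crm on                                      # heartbeat respawns crmd, lrmd, stonithd, attrd, ...
    respawn root /where_to_vsp_appd/vsp_appd    # keep the Vesper system-health daemon running

    # From inside vsp_appd, a probe event could then be published as a
    # node attribute via attrd, along these lines:
    attrd_updater -n vesper_probe_status -v failed
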
>>>>>>>
>>>>>>> Do you agree on the "vsp_appd" daemon?
>>>>>> Yes, we agree with you.
>>>>>>
>>>>>>> Any comments are welcome.
>>>>>>> We are looking forward to hearing your ideas.
>>>>>> However, if Vesper and Cimha are not dependent on heartbeat version 2.0.8, IMHO we can upgrade to the latest heartbeat version, which has the SystemHealth daemon. If you have thought of some issues in doing so, please share them with us.
>>>>>>
>>>>>> Thank you.
>>>>>> Vignesh
>>>>>>
>>>>>>> Thank you.
>>>>>>> Kim
>>>>>>>
>>>>>>>> Dear Kim-san,
>>>>>>>>
>>>>>>>> It is the upstream one; to be specific, the version is 2.0.8-1.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Viswanath T K
>>>>>>>>
>>>>>>>> On Tue, 2009-11-10 at 14:49 +0900, 金 成昊 wrote:
>>>>>>>>> Dear Viswanath-san,
>>>>>>>>>
>>>>>>>>>> The version of heartbeat used for CIMHA is heartbeat-2.0.8.
>>>>>>>>> Thank you for your reply.
>>>>>>>>> Is it the upstream one or the Fedora distro package?
>>>>>>>>>
>>>>>>>>> Thank you.
>>>>>>>>> Regards,
>>>>>>>>> Kim
>>>>>>>>>
>>>>>>>>>> Dear Kim-san,
>>>>>>>>>>
>>>>>>>>>> The version of heartbeat used for CIMHA is heartbeat-2.0.8.
>>>>>>>>>> Please let us know if you need any other information.
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Viswanath
>>>>>>>>>>
>>>>>>>>>> On Tue, 2009-11-10 at 11:58 +0900, 金 成昊 wrote:
>>>>>>>>>>> Dear Vigneswaran-san,
>>>>>>>>>>>
>>>>>>>>>>> Could you tell me which version of Heartbeat you are using for CIMHA?
>>>>>>>>>>> I need that information to test the VESPER plugin.
>>>>>>>>>>>
>>>>>>>>>>> Thank you.
>>>>>>>>>>> Regards,
>>>>>>>>>>> Kim
>>>>>>>>>>>
>>>>>>>>>>>> Dear Kim-san,
>>>>>>>>>>>>
>>>>>>>>>>>> We are going through the heartbeat documents for a better understanding of it and are building a dedicated, data-center-like setup (starting with 2 machines only) for testing heartbeat and our applications. Previously we used our own machines and did the testing.
>>>>>>>>>>>>
>>>>>>>>>>>> I thought of sharing the following URLs, which have information about heartbeat testing procedures (you may know them already).
>>>>>>>>>>>>
>>>>>>>>>>>> http://www.linux-ha.org/CTS
>>>>>>>>>>>> http://www.linux-ha.org/DevCorner/TestPlan
>>>>>>>>>>>>
>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>> Vignesh
|
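
The thread above settles, for this term, on running Vesper as a resource agent that wraps vspraex. Purely as an illustration of what such a wrapper looks like (the agent name, the vspraex path and its start/stop/status options are assumptions for this sketch, not the project's actual interface), a minimal OCF-style skeleton would be roughly:

    #!/bin/sh
    # vesper: minimal OCF-style resource agent sketch that delegates to vspraex.
    # A real agent would live under /usr/lib/ocf/resource.d/<provider>/ and
    # ship full XML meta-data.

    VSPRAEX=${OCF_RESKEY_vspraex_path:-/usr/sbin/vspraex}    # hypothetical default path

    case "$1" in
        start)     $VSPRAEX start  && exit 0 || exit 1 ;;    # 0 = OCF_SUCCESS, 1 = OCF_ERR_GENERIC
        stop)      $VSPRAEX stop   && exit 0 || exit 1 ;;
        monitor)   $VSPRAEX status && exit 0 || exit 7 ;;    # 7 = OCF_NOT_RUNNING
        meta-data) echo '<resource-agent name="vesper"/>'; exit 0 ;;   # trimmed for brevity
        *)         exit 3 ;;                                 # 3 = OCF_ERR_UNIMPLEMENTED
    esac
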