From: Brian J. W. <Bri...@co...> - 2002-10-29 22:13:11

Laura and Dave- These changes appear to affect your code. Are you both okay
with them?

-Brian

"Aneesh Kumar K.V" wrote:
> 
> Hi,
> 
> Attaching below is the changes needed to build the CVS version of CI.
> Mostly #ifdef changes. I am not sure i am using the right #ifdef . Can
> someone verify it and do the necessary checkin. ?
> 
> -aneesh
> 
> --- Cluster/ci-linux/ci-kernel/cluster/util/nsc_init.c Thu Aug 8 05:30:57 2002
> +++ TEST/ci-linux/ci-kernel/cluster/util/nsc_init.c Mon Oct 21 22:51:34 2002
> @@ -37,8 +37,8 @@
> #include <cluster/clms.h>
> #include <cluster/icsgen.h>
> #include <cluster/ics_proto.h>
> -#include <cluster/procfs.h>
> #ifdef CONFIG_SSI
> +#include <cluster/procfs.h>
> #include <cluster/ssi/ssidev.h>
> #endif /* CONFIG_SSI */
> #ifdef CONFIG_MOSIX_LL
> @@ -287,8 +287,10 @@
> #endif
> #endif /* NOTYET */
> 
> +#ifdef CONFIG_SSI /* Or should it be CONFIG_LDLVL ??*/
> /* Create /proc/cluster/node# directory in /proc */
> proc_cluster_init();
> +#endif
> 
> #ifdef CONFIG_MOSIX_LL
> init_mosix();
> 
> --- Cluster/ci-linux/ci-kernel/cluster/util/nsc_scalls.c Sun Oct 6 12:27:06 2002
> +++ TEST/ci-linux/ci-kernel/cluster/util/nsc_scalls.c Mon Oct 21 22:37:45 2002
> @@ -44,7 +44,9 @@
> #include <cluster/synch.h>
> #include <cluster/ssisys.h>
> #include <cluster/table.h>
> +#ifdef CONFIG_CFS
> #include <cluster/ssi/cfs/cfs_mount.h>
> +#endif
> 
> #include "ics_cluster_api_protos_gen.h"
> #include "ics_cluster_api_macros_gen.h"
> 
> --- Cluster/ci-linux/ci-kernel/cluster/util/cluster_api.svc Sat Jul 13 06:52:03 2002
> +++ TEST/ci-linux/ci-kernel/cluster/util/cluster_api.svc Mon Oct 21 22:36:55 2002
> @@ -36,6 +36,8 @@
> param OUT int *onlinecpus
> }
> 
> +#ifdef CONFIG_CFS /* Do we pass this flag when doing icsgen ? */
> +
> operation rcluster_get_mount NO_SIG_FORWARD {
> param IN int my_node
> param INOUT int *cookie
> @@ -45,3 +47,4 @@
> param OUT:OOL:VAR char **payload
> param OUT:OOL:VAR char **dev_name
> }
> +#endif

From: Aneesh K. K.V <ane...@di...> - 2002-10-28 11:52:19

On Mon, 2002-10-28 at 17:06, Sharad Tiwari wrote:
> hi all
> 
> * while using CI if want to upgrade to a newer version of ssi do I need
>   to bring down the whole cluster or I can insert some modules
>   dynamically to get the upgrade reflected ?

the aim is to have an interface that can be used by modules. But not yet
fully supported.

> * What features of SSI will not be available if I decide not to use CFS .

See my mail ssi list

> * How do I share a hardware/software resource? where is this information
>   (of shared resources from all nodes) kept

See my mail to ssi list.

> 
> Thanking you all for the help ....
> 
> regards,
> Sharad.
> 
> ----

From: Sharad T. <sha...@wi...> - 2002-10-28 11:37:03

hi all

* while using CI if want to upgrade to a newer version of ssi do I need
  to bring down the whole cluster or I can insert some modules dynamically
  to get the upgrade reflected ?
* What features of SSI will not be available if I decide not to use CFS .
* How do I share a hardware/software resource? where is this information
  (of shared resources from all nodes) kept ?

Thanking you all for the help ....

regards,
Sharad.

From: Sharad T. <sha...@wi...> - 2002-10-28 11:36:20

hi all

1) while using CI if want to upgrade to a newer version of ssi do I need
   to bring down the whole cluster or I can insert some modules dynamically
   to get the upgrade reflected ?
2) What features of SSI will not be available if I decide not to use CFS .
3) how do I share a hardware/software resource? where is this information
   (of shared resources from all nodes) kept ?

thanking you all in advance for the help

From: Aneesh K. K.V <ane...@di...> - 2002-10-26 04:54:54

On Sat, 2002-10-26 at 03:28, Brian J. Watson wrote:
> "Aneesh Kumar K.V" wrote:
> > 
> > Hi,
> > 
> > Attaching below is the changes needed to build the CVS version of CI.
> > Mostly #ifdef changes. I am not sure i am using the right #ifdef . Can
> > someone verify it and do the necessary checkin. ?
> 
> I'm a bit torn about the issue of SSI code existing in the CI
> repository. Ideally it shouldn't be there. Any CI file touched by SSI
> should exist in both repositories, with the SSI changes made to only the
> SSI version.
> 
> Although this would be the purest approach, it also creates a bit of a
> maintenance headache for CI. Any change made to the CI version of one of
> these files must also be made to the SSI version. Ease of maintenance
> argues for putting SSI specific changes into the CI repository, and
> deactivating them with #ifdef CONFIG_SSI, CONFIG_CFS, etc., as Aneesh
> has done.
> 
> Are there any comments on which approach would be better -- purity vs.
> ease of maintenance?
> 
> -Brian

I vote for ease of maintenance. The purity argument will be dropped if we
are planning to merge with Main Line Linux kernel :) ( I hope this will
happen )

-aneesh

From: Brian J. W. <Bri...@co...> - 2002-10-25 22:06:56

"Aneesh Kumar K.V" wrote:
> 
> Hi,
> 
> Attaching below is the changes needed to build the CVS version of CI.
> Mostly #ifdef changes. I am not sure i am using the right #ifdef . Can
> someone verify it and do the necessary checkin. ?

I'm a bit torn about the issue of SSI code existing in the CI repository.
Ideally it shouldn't be there. Any CI file touched by SSI should exist in
both repositories, with the SSI changes made to only the SSI version.

Although this would be the purest approach, it also creates a bit of a
maintenance headache for CI. Any change made to the CI version of one of
these files must also be made to the SSI version. Ease of maintenance
argues for putting SSI specific changes into the CI repository, and
deactivating them with #ifdef CONFIG_SSI, CONFIG_CFS, etc., as Aneesh
has done.

Are there any comments on which approach would be better -- purity vs.
ease of maintenance?

-Brian

From: Brian J. W. <Bri...@co...> - 2002-10-25 22:02:41

"Aneesh Kumar K.V" wrote:
> 
> Hi,
> 
> Enhancing CI documentation at
> http://ci-linux.sourceforge.net/enhancing.shtml discuss about different
> API's for registering new cluster services. How do i unregister those
> services.( If i want to implement those services in the form of kernel
> modules )

I'm not sure the unregistering part has been implemented. There's probably
still work to be done before CI/SSI components can be built as modules.

-Brian

From: Brian J. W. <Bri...@hp...> - 2002-10-25 22:01:44

"Aneesh Kumar K.V" wrote:
> 
> Hi,
> 
> Anybody looking into these issues ?. Is there anything wrong in using
> the ics_vproc_chan for ipvs-bind/CVIP related functionality.

Since Kai's left, there's no ICS expert on the CI or SSI project. I'm not
familiar with the issues of selecting channels. I imagine it's okay to use
ics_vproc_chan for now, then switch to something else in the future if
it's a problem.

It's probably a bug that you can't easily add another channel. Can you add
it to the SF database?

-Brian

From: Aneesh K. K.V <ane...@di...> - 2002-10-23 09:46:30

Hi,

Enhancing CI documentation at http://ci-linux.sourceforge.net/enhancing.shtml
discuss about different API's for registering new cluster services. How do i
unregister those services.( If i want to implement those services in the form
of kernel modules )

-aneesh

From: Aneesh K. K.V <ane...@di...> - 2002-10-21 05:34:58

Hi,

Attaching below is the changes needed to build the CVS version of CI.
Mostly #ifdef changes. I am not sure i am using the right #ifdef . Can
someone verify it and do the necessary checkin. ?

-aneesh

--- Cluster/ci-linux/ci-kernel/cluster/util/nsc_init.c Thu Aug 8 05:30:57 2002
+++ TEST/ci-linux/ci-kernel/cluster/util/nsc_init.c Mon Oct 21 22:51:34 2002
@@ -37,8 +37,8 @@
 #include <cluster/clms.h>
 #include <cluster/icsgen.h>
 #include <cluster/ics_proto.h>
-#include <cluster/procfs.h>
 #ifdef CONFIG_SSI
+#include <cluster/procfs.h>
 #include <cluster/ssi/ssidev.h>
 #endif /* CONFIG_SSI */
 #ifdef CONFIG_MOSIX_LL
@@ -287,8 +287,10 @@
 #endif
 #endif /* NOTYET */

+#ifdef CONFIG_SSI /* Or should it be CONFIG_LDLVL ??*/
 /* Create /proc/cluster/node# directory in /proc */
 proc_cluster_init();
+#endif

 #ifdef CONFIG_MOSIX_LL
 init_mosix();

--- Cluster/ci-linux/ci-kernel/cluster/util/nsc_scalls.c Sun Oct 6 12:27:06 2002
+++ TEST/ci-linux/ci-kernel/cluster/util/nsc_scalls.c Mon Oct 21 22:37:45 2002
@@ -44,7 +44,9 @@
 #include <cluster/synch.h>
 #include <cluster/ssisys.h>
 #include <cluster/table.h>
+#ifdef CONFIG_CFS
 #include <cluster/ssi/cfs/cfs_mount.h>
+#endif

 #include "ics_cluster_api_protos_gen.h"
 #include "ics_cluster_api_macros_gen.h"

--- Cluster/ci-linux/ci-kernel/cluster/util/cluster_api.svc Sat Jul 13 06:52:03 2002
+++ TEST/ci-linux/ci-kernel/cluster/util/cluster_api.svc Mon Oct 21 22:36:55 2002
@@ -36,6 +36,8 @@
 param OUT int *onlinecpus
 }

+#ifdef CONFIG_CFS /* Do we pass this flag when doing icsgen ? */
+
 operation rcluster_get_mount NO_SIG_FORWARD {
 param IN int my_node
 param INOUT int *cookie
@@ -45,3 +47,4 @@
 param OUT:OOL:VAR char **payload
 param OUT:OOL:VAR char **dev_name
 }
+#endif

From: Brian J. W. <Bri...@hp...> - 2002-10-14 18:56:29

> Registering a new ICS Channel
> ==============================
> 1) Declare the new channel using type ics_chan_t (typedef int).
> 
> #include <cluster/ics.h>
> ics_chant_t ics_test_chan;
> ^^^^^^^^
> -----> should be ics_chan_t

Thanks, Aneesh. I checked in a fix for this.

-Brian

From: Aneesh K. K.V <ane...@di...> - 2002-10-14 13:05:20

Hi,

Registering a new ICS Channel
==============================
1) Declare the new channel using type ics_chan_t (typedef int).

#include <cluster/ics.h>
ics_chant_t ics_test_chan;
^^^^^^^^
-----> should be ics_chan_t

-aneesh

From: Brian J. W. <Bri...@hp...> - 2002-10-11 21:41:46

I'll do a release of CI around the time I do the 2.4.18-based release of
SSI. That should be sometime in the next couple of weeks.

-- 
Brian Watson                | "Now I don't know, but I been told it's
Software Developer          | hard to run with the weight of gold,
Open SSI Clustering Project | Other hand I heard it said, it's
Hewlett-Packard Company     | just as hard with the weight of lead."
                            |        -Robert Hunter, 1970

mailto:Bri...@hp...
http://opensource.compaq.com/

From: Brian J. <bri...@md...> - 2002-10-10 16:57:25

Sorry it took me so long to respond to this email, I set it aside to
remember to respond to it and of course didn't remember. I set up an
opendlm development mailing list with the OpenDLM project at sourceforge,
and since there will be people other than the OpenGFS people working on
it, it makes sense to use it instead of the OpenDLM mailing list.

Aneesh Kumar K.V writes:
> Hi,
> 
> I have checked in the code for CI integration. The dlmdu daemon is now
> not used for CI. Tested the code base on a single node machine. I have
> tested the code base by running test case convert_test.sh. Next weekend
> i will try to get the code base tested against multiple nodes. Meanwhile
> other interested guys can take a look at the code. Patches and bug
> reports are welcome :) . Information regarding building and configuring
> are in INSTALL.opendlm.
> 
> I also request other people working on heartbeat and rsct to also look
> at the code.
> 
> Which mailing list we should use for DLM related development purpose.
> IBM mailing list or open-gfs mailing list. Again is it possible to add a
> checkin notification mailing list. There is a similar one for CI/SSI
> 
> -aneesh

From: Brian J. W. <Bri...@hp...> - 2002-10-07 22:42:08

"Aneesh Kumar K.V" wrote:
> Also i tried to build only CI from a SSI code base. The build failed in
> some CFS part. I will try to build CI from the CVS code base tomorrow.

I'm not sure the CVS code base is quite right, either. When I moved the
repositories to SourceForge, I never made sure the CI repository still
built okay. There may be some minor problems.

Can you enter a bug against SSI that building for just CI doesn't work?

-- 
Brian Watson                | "Now I don't know, but I been told it's
Software Developer          | hard to run with the weight of gold,
Open SSI Clustering Project | Other hand I heard it said, it's
Hewlett-Packard Company     | just as hard with the weight of lead."
                            |        -Robert Hunter, 1970

mailto:Bri...@hp...
http://opensource.compaq.com/

From: Aneesh K. K.V <ane...@di...> - 2002-10-07 06:33:54

Hi,

I have checked in the code for CI integration. The dlmdu daemon is now not
used for CI. Tested the code base on a single node machine. I have tested
the code base by running test case convert_test.sh. Next weekend i will
try to get the code base tested against multiple nodes. Meanwhile other
interested guys can take a look at the code. Patches and bug reports are
welcome :) . Information regarding building and configuring are in
INSTALL.opendlm.

I also request other people working on heartbeat and rsct to also look at
the code.

Which mailing list we should use for DLM related development purpose. IBM
mailing list or open-gfs mailing list. Again is it possible to add a
checkin notification mailing list. There is a similar one for CI/SSI

-aneesh

From: Aneesh K. K.V <ane...@di...> - 2002-10-05 14:24:37

Hi,

I was trying to do some testing of openDLM with CI and found that there
are many issues which are already fixed in CVS but not yet in the uploaded
tar.gz. IIRC during DLM CI integration some changes were made to the code
and i guess they are still in the CVS not in the uploaded tar.gz.

Also i tried to build only CI from a SSI code base. The build failed in
some CFS part. I will try to build CI from the CVS code base tomorrow.

-aneesh

From: Aneesh K. K.V <ane...@di...> - 2002-10-03 04:43:51

Hi,

Yes, What David Chow is more worried about is the separate communication
channel needed for DLM. In the case of CI we already have the IP based
communication channel being used for cluster node communication and I
guess the production system will be running SSI cluster with high speed
switches . But from the opengfs perspective asking for a expensive switch
for just DLM may not be a cost effective one.

Yes we may need to change DLM to use CI even in node communication. CI has
the concept of throttling that prevents the communication channel between
nodes from getting saturated. ( I guess it is not yet implemented ). In
that way we have a control over DLM messages.

Any how making DLM as the locking solution in SSI/CI is something not yet
decided. So no need to worry about that :)

-aneesh

On Wed, 2002-10-02 at 20:12, Greg Freemyer wrote:
> Did you see David Chow's message on the OpenGFS list that he had a
> recent discussion with some IBM engineers and they said that the IBM DLM
> has pretty excessive interconnect requirements.
> 
> Just a few nodes could saturate a 100 Mbit interconnect and that to do a
> large scale network, myrinet would be required.
> 
> Obviously it is nice that IBM's DLM (or OpenDLM) can utilize CI for node
> up/down monitoring, but it makes one wonder if this is the right locking
> solution to be a core piece of SSI/CI.
> 
> Greg Freemyer

From: Greg F. <fre...@No...> - 2002-10-03 04:21:57

Did you see David Chow's message on the OpenGFS list that he had a recent
discussion with some IBM engineers and they said that the IBM DLM has
pretty excessive interconnect requirements.

Just a few nodes could saturate a 100 Mbit interconnect and that to do a
large scale network, myrinet would be required.

Obviously it is nice that IBM's DLM (or OpenDLM) can utilize CI for node
up/down monitoring, but it makes one wonder if this is the right locking
solution to be a core piece of SSI/CI.

Greg Freemyer

From: Brian J. W. <Bri...@hp...> - 2002-10-02 21:18:59

"Aneesh Kumar K.V" wrote:
> 
> Hi,
> 
> The patch attached below makes DLM fully kernel based with respect to CI
> (http://ci-linux.sf.net ). Please comment about the same.

This is a meta-issue, but it's better to build a patch with the -u option
to diff. That way it'll include some context, which can be used to patch
files that have been slightly modified already.

-- 
Brian Watson                | "Now I don't know, but I been told it's
Software Developer          | hard to run with the weight of gold,
Open SSI Clustering Project | Other hand I heard it said, it's
Hewlett-Packard Company     | just as hard with the weight of lead."
                            |        -Robert Hunter, 1970

mailto:Bri...@hp...
http://opensource.compaq.com/

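Brian's point can be seen with throwaway files (the names here are illustrative, not from the thread): a plain diff emits only ed-style change commands, while diff -u adds ---/+++ headers and context lines around each hunk, which is what lets patch(1) apply the change even to a file that has drifted a little.

```shell
# Plain diff vs. unified diff on a tiny pair of files.
printf 'a\nb\nc\n' > old.txt
printf 'a\nB\nc\n' > new.txt

diff old.txt new.txt || true       # ed-style "2c2" output, no context
diff -u old.txt new.txt || true    # unified: headers, @@ hunk, context

# A unified patch carries enough context for patch(1) to locate the hunk:
diff -u old.txt new.txt > change.patch || true
patch old.txt < change.patch
```

(The `|| true` guards are only because diff exits nonzero when the files differ.)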
From: Aneesh K. K.V <ane...@di...> - 2002-10-02 08:33:06

Hi,

The patch attached below makes DLM fully kernel based with respect to CI
(http://ci-linux.sf.net ). Please comment about the same.

Status: The code builds. I don't have machines lying around to test it
with the Cluster. Once i get hold of free machines i will do the same.
Meanwhile people interested can pull the code from opendlm CVS.

Architecture: i386.

NOTE: I am adding the code to opendlm CVS. So heartbeat and rsct people
may also be interested in this patch.

Changes to the generic code:
* src/Makefile => the subdir is now passed from the configure.
* signal handling in the case of DLM threads.
* added new module parameter ( haDLM_max_node ) which is used only for CI
  but is generic.

-aneesh

Index: configure.ac
===================================================================
RCS file: /cvsroot/opendlm/opendlm/configure.ac,v
retrieving revision 1.2
diff -r1.2 configure.ac
56a57,59
> AC_ARG_WITH(
> ci_linux, [--with-ci_linux use Cluster Infrastructure for Linux],
> CLUSTERMGT=ci_linux)
120a124,125
> srcsub_dir="make include api kernel user"
> AC_SUBST(srcsub_dir)
129a135,136
> srcsub_dir="make include api kernel user"
> AC_SUBST(srcsub_dir)
135a143,147
> ci_linux)
> srcsub_dir="make include api kernel "
> AC_SUBST(srcsub_dir)
> AC_DEFINE(USE_CI_LINUX, , Use CI clustering SW)
> ;;
140a153
> 
204d216
< src/make/Makefile
Index: src/Makefile.am
===================================================================
RCS file: /cvsroot/opendlm/opendlm/src/Makefile.am,v
retrieving revision 1.1.1.1
diff -r1.1.1.1 Makefile.am
4c4
< SUBDIRS = make include api kernel user
---
> SUBDIRS = @srcsub_dir@
Index: src/include/dlm_kernel.h
===================================================================
RCS file: /cvsroot/opendlm/opendlm/src/include/dlm_kernel.h,v
retrieving revision 1.1.1.1
diff -r1.1.1.1 dlm_kernel.h
238a239,240
> #ifndef LINUX_VERSION_CODE
> #include <linux/version.h>
243a246
> #endif
Index: src/kernel/dlmcccp/cccp_udp.c
===================================================================
RCS file: /cvsroot/opendlm/opendlm/src/kernel/dlmcccp/cccp_udp.c,v
retrieving revision 1.1.1.1
diff -r1.1.1.1 cccp_udp.c
28a29,33
> *21March02 Kai-Min Sung <Kai...@co...>
> *make cccp_poll_thread() smarter about incoming signals.
> */
> 
> /*
167a173
> recvmsg:
170a177,188
> /* If we get any signals, other than SIGINT, just flush them and
> * restart the socket read.
> */
> if ( ((-bytes == EINTR) || (-bytes == ERESTARTSYS))
> && !sigismember(&current->pending.signal, SIGINT)) {
> unsigned long _flags;
> spin_lock_irqsave(&current->sigmask_lock, _flags);
> flush_signals(current);
> spin_unlock_irqrestore(&current->sigmask_lock, _flags);
> goto recvmsg;
> }
> 
Index: src/kernel/dlmdk/clm_main.c
===================================================================
RCS file: /cvsroot/opendlm/opendlm/src/kernel/dlmdk/clm_main.c,v
retrieving revision 1.1.1.1
diff -r1.1.1.1 clm_main.c
532a533,534
> #ifndef CONFIG_CLMS /* no user space argument for CLMS */
> 
543a546,547
> 
> #endif
Index: src/kernel/dlmdk/dlm_kerndd.c
===================================================================
RCS file: /cvsroot/opendlm/opendlm/src/kernel/dlmdk/dlm_kerndd.c,v
retrieving revision 1.1.1.1
diff -r1.1.1.1 dlm_kerndd.c
72a73,78
> /* 8 MARCH 2002 ( ane...@di...)
> * Added support for kernel cluster manager
> * define KERN_CLMGR in the user space and
> * and CONFIG_CLMS in the kernel space for Cluster Infrastructure for Linux
> */
> 
104a111,125
> #ifdef CONFIG_CLMS
> #include <cluster/nsc.h>
> #include <cluster/icsgen.h>
> #include <linux/inet.h> /* in_aton() */
> /* To avoid conflicting type with nsc.h and dlm header files*/
> #define BOOL_T_DEFINED
> #define MAX_NODE_VALUE NSC_MAX_NODE_VALUE
> #endif
> 
> #ifndef MAX_NODE_VALUE
> #define MAX_NODE_VALUE 10
> #endif
> 
> 
> 
117a139,143
> #ifdef CONFIG_CLMS
> #include "dlm_clust.h"
> #include "dlm_version.h"
> #endif
> 
135a162,166
> long haDLM_max_node = 0; /* Maximum number of nodes in the
> * cluster
> */
> 
> 
146a178,179
> MODULE_PARM(haDLM_max_node, "l" ); /* long */
> MODULE_PARM_DESC(haDLM_max_node, "Maximum number of nodes in the cluster. " );
161a195
> #ifndef CONFIG_CLMS /* Write is used by user space cluster manager */
165a200
> #endif
169a205,220
> #ifdef CONFIG_CLMS
> int dlm_cli_nodeup( void *clms_handle,
> int service,
> clusternode_t node,
> clusternode_t surrogate,
> void *private) ;
> int dlm_cli_nodedown( void *clms_handle,
> int service,
> clusternode_t node,
> clusternode_t surrogate,
> void *private) ;
> 
> MQ_DLM_TOP_INFO_t * get_upnode_list(void);
> int start_dlm_for_ci(void);
> #endif
> 
181a233
> #ifndef CONFIG_CLMS /* haDLM_write is used by user space cluster manager */
182a235
> #endif
338a392,395
> if ( haDLM_max_node == 0 ) {
> haDLM_max_node = MAX_NODE_VALUE;
> }
> 
368a426,441
> #ifdef CONFIG_CLMS
> if ( (_i = start_dlm_for_ci()) < 0 )
> return _i ;
> 
> haDLM_allow_locks = 1;
> 
> return (register_clms_subsys("dlm",
> -1,
> dlm_cli_nodeup,
> dlm_cli_nodedown,
> NULL,
> NULL,
> NULL));
> 
> #endif
> 
785c858
< 
---
> #ifndef CONFIG_CLMS /* Write is used by user space cluster manager */
930a1004
> #endif /* CONFIG_CLMS */
1342a1417,1562
> 
> #ifdef CONFIG_CLMS
> int dlm_cli_nodeup( void *clms_handle,
> int service,
> clusternode_t node,
> clusternode_t surrogate,
> void *private)
> {
> MQ_DLM_TOP_INFO_t *new_topo;
> dlm_workunit_t *wu;
> 
> if (!haDLM_allow_locks) {
> printk("locks: not yet initialized.\n");
> clms_nodeup_callback(clms_handle, service, node);
> return 0;
> }
> 
> new_topo = get_upnode_list();
> if (new_topo == NULL)
> return(-ENOMEM); /* Check for the return */
> if (NULL ==
> (wu = kmalloc_something(sizeof(dlm_workunit_t),
> "WorkUnit for topology block"))) {
> return(-ENOMEM);
> }
> wu->data = new_topo;
> wu->free_data = kfree_topo_block;
> wu->type = MQ_DLM_TOP_INFO_MSG;
> dlm_workqueue_put_work(dlm_master_work_queue, wu);
> clms_nodeup_callback(clms_handle, service, node);
> return 0;
> }
> 
> 
> int dlm_cli_nodedown( void *clms_handle,
> int service,
> clusternode_t node,
> clusternode_t surrogate,
> void *private)
> {
> MQ_DLM_TOP_INFO_t *new_topo;
> dlm_workunit_t *wu;
> 
> if (!haDLM_allow_locks) {
> printk("locks: not yet initialized.\n");
> clms_nodedown_callback(clms_handle, service, node);
> return 0;
> }
> 
> 
> new_topo = get_upnode_list();
> if (new_topo == NULL)
> return(-ENOMEM);
> if (NULL ==
> (wu = kmalloc_something(sizeof(dlm_workunit_t),
> " WorkUnit for topology block"))) {
> return(-ENOMEM);
> }
> wu->data = new_topo;
> wu->free_data = kfree_topo_block;
> wu->type = MQ_DLM_TOP_INFO_MSG;
> dlm_workqueue_put_work(dlm_master_work_queue, wu);
> clms_nodedown_callback(clms_handle, service, node);
> return 0;
> }
> /* Getting the list of already up nodes */
> MQ_DLM_TOP_INFO_t * get_upnode_list(void)
> {
> MQ_DLM_TOP_INFO_t *new_topo;
> int node ;
> int size = (haDLM_max_node * sizeof(MQ_DLM_USEADDR_t)) + sizeof(MQ_DLM_TOP_INFO_t);
> 
> if (NULL ==
> (new_topo = kmalloc_something(size, "CMGR topology block"))) {
> return(NULL);
> }
> 
> new_topo->msg_version = TAM_version;
> new_topo->n_nodes = haDLM_max_node;
> new_topo->this_nodeid = this_node;
> 
> /* new_topo->event ??? */
> 
> for(node= 0 ; node < haDLM_max_node;node++ ) {
> 
> new_topo->addrs[node].nodeid = node;
> 
> /* You can't use clms_isnodeup here because during NODEUP
> * event clms_isnodeup on the particular node doesn't
> * return 1
> */
> 
> if(clms_isnodedown((clusternode_t)node)) {
> new_topo->addrs[node].useaddr.s_addr=0;
> new_topo->addrs[node].dlm_major= 0;
> new_topo->addrs[node].dlm_minor= 0;
> }else {
> icsinfo_t nodeinfo;
> ics_geticsinfo(node, &nodeinfo);
> new_topo->addrs[node].useaddr.s_addr= in_aton( (char*)&nodeinfo);
> new_topo->addrs[node].dlm_major= DLM_MAJOR_VERSION;
> new_topo->addrs[node].dlm_minor= DLM_MINOR_VERSION;
> 
> }
> }
> return new_topo;
> }
> 
> 
> int start_dlm_for_ci()
> {
> 
> MQ_DLM_TOP_INFO_t *topo;
> dlm_workunit_t *wu;
> 
> printk("[%s] Starting DLM lockd\n", haDLM_name);
> kproc_pid = start_lockd(0,0); /* Can use 0 only for CONFIG_CLMS */
> if (ERROR == kproc_pid) {
> bsdlog(LOG_INFO,
> "[%s] cannot start kernel thread.\n",
> haDLM_name);
> return(-ESRCH);
> } else {
> bsdlog(LOG_INFO,
> "[%s] dlmdk kthread started, pid [%d]\n",
> haDLM_name,
> kproc_pid);
> }
> if ((topo = get_upnode_list()) == NULL ) {
> return (-ENOMEM);
> }
> 
> if (NULL ==
> (wu = kmalloc_something(sizeof(dlm_workunit_t),
> "WorkUnit for topology block"))) {
> return(-ENOMEM);
> }
> wu->data = topo;
> wu->free_data = kfree_topo_block;
> wu->type = MQ_DLM_TOP_INFO_MSG;
> dlm_workqueue_put_work(dlm_master_work_queue, wu);
> 
> return 1; /* 1 for success */
> 
> }
> #endif /* CONFIG_CLMS */
Index: src/kernel/dlmdk/dlm_workqueue.c
===================================================================
RCS file: /cvsroot/opendlm/opendlm/src/kernel/dlmdk/dlm_workqueue.c,v
retrieving revision 1.1.1.1
diff -r1.1.1.1 dlm_workqueue.c
29a30,35
> 
> /* 22MAR02 Aneesh Kumar KV ( ane...@di... )
> * Educating dlm_workqueue_get_work about SIGCLUSTER
> * Thanks to Kai.
> */
> 
109a116
> dlm_get_work:
111a119,134
> #ifdef CONFIG_CLMS
> /* This part of the code should be made more
> * generic. Find out which signals are to be considered
> * for doing a dlm_shutdown and if those signals
> * are not delivered then go to dlm_get_work:
> */
> /* HACK !!!! I know SIGCLUSTER is not for shutdown*/
> if(sigismember(&current->pending.signal,SIGCLUSTER)) {
> unsigned long _flags;
> spin_lock_irqsave(&current->sigmask_lock,_flags);
> flush_signals(current);
> spin_unlock_irqrestore(&current->sigmask_lock,_flags);
> goto dlm_get_work;
> }
> #endif
> 

From: John B. <joh...@hp...> - 2002-09-25 17:16:38

Oops! My fault. Sorry.

John Byrne

David B. Zafman wrote:
> -------- Original Message --------
> Subject: Can you forward this message to ci-linux list?
> Date: Mon, 23 Sep 2002 13:18:13 -0700
> From: "Arcot Arumugam" <arc...@ho...>
> To: "David B. Zafman" <dav...@hp...>
> 
> Hi
> 
> This message is not that important, I just wanted the ci-linux people to
> know. But I did not want to subscribe to another list just for one message.
> If you could forward it for me that will be great.
> 
> thanks
> 
> Arcot
> 
> BEGIN POSTING
> 
> SUBJ: cluster_start script calls the removed noded daemon
> 
> In the latest cluster_tools package the noded daemon has been removed.
> But the cluster_start script still calls it thereby if you use the
> script the nodestate will be COMINGUP unless you manually set it to
> otherwise.
> 
> Arcot

From: Brian J. W. <Bri...@hp...> - 2002-09-23 21:43:33

> SUBJ: cluster_start script calls the removed noded daemon
> 
> In the latest cluster_tools package the noded daemon has been removed.
> But the cluster_start script still calls it thereby if you use the
> script the nodestate will be COMINGUP unless you manually set it to
> otherwise.

I submitted a bug for this on the CI project page.

-Brian

From: David B. Z. <dav...@hp...> - 2002-09-23 21:23:43

Brian has informed me that our lists are open. You don't need to be a
subscriber to post. For those on these lists it is good etiquette to
always reply to all.

-- 
David B. Zafman             | Hewlett-Packard Company
Linux Kernel Developer      | Open SSI Clustering Project
mailto:dav...@hp...          | http://www.hp.com

"Thus spake the master programmer: When you have learned to snatch the
error code from the trap frame, it will be time for you to leave."

From: David B. Z. <dav...@hp...> - 2002-09-23 21:14:03

-------- Original Message --------
Subject: Can you forward this message to ci-linux list?
Date: Mon, 23 Sep 2002 13:18:13 -0700
From: "Arcot Arumugam" <arc...@ho...>
To: "David B. Zafman" <dav...@hp...>

Hi

This message is not that important, I just wanted the ci-linux people to
know. But I did not want to subscribe to another list just for one
message. If you could forward it for me that will be great.

thanks

Arcot

BEGIN POSTING

SUBJ: cluster_start script calls the removed noded daemon

In the latest cluster_tools package the noded daemon has been removed.
But the cluster_start script still calls it thereby if you use the script
the nodestate will be COMINGUP unless you manually set it to otherwise.

Arcot