| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2001 |     |     |     |     |     | 25  | 13  | 11  | 14  | 5   | 7   | 4   |
| 2002 | 14  | 10  | 30  | 9   | 20  | 12  | 7   | 6   | 6   | 34  | 14  | 9   |
| 2003 |     | 9   | 2   | 2   | 5   | 14  | 1   | 7   | 6   | 5   |     |     |
| 2004 |     | 1   | 1   |     |     | 1   | 2   | 2   | 4   |     | 7   | 1   |
| 2005 | 1   | 2   |     | 4   | 5   |     |     |     | 1   |     |     |     |
| 2006 |     |     |     |     | 11  | 2   |     | 5   | 5   | 1   | 1   |     |
| 2007 |     | 1   |     |     |     |     |     |     |     |     |     |     |
| 2008 | 2   |     |     |     |     |     |     |     |     |     |     |     |
| 2009 |     |     |     |     | 1   |     |     |     |     | 2   |     |     |
| 2011 | 1   |     |     |     |     |     |     |     |     |     |     |     |
From: Aneesh K. K.V <ane...@di...> - 2003-05-22 04:24:12
|
Brian, On Thu, 2003-05-22 at 03:07, Brian J. Watson wrote: [...snip....] You may want to wait for the changes done yesterday by laura to be merged back to the trunk before reorganising the CVS. Also cluster-tools patch from Scott also need to be not applied. It would be better if we wait for a day or so and say no more submits and then reorganise.Then we can get the reorganised RH branch and Trunk built and then open up the CVS for further submit. > > > This is how I propose laying out the CI repository: > > ci > |-- AUTHORS (was in cluster-tools) > |-- COPYING > |-- ChangeLog How are you going to organise the Changelog .I mean what about the previous contents. ? > |-- Makefile (rules for rolling releases) > |-- Makefile.am (rules for building and installing) > |-- NEWS (was in cluster-tools) > |-- README > |-- cluster-tools > | |-- Makefile.am > | |-- cmd/ (moved some commands to openssi-tools) > | |-- libcluster/ > | `-- man/ (moved some pages to openssi-tools) > |-- configure.ac (was in cluster-tools) > |-- doc/ > |-- kernel/ (was ci-linux/ci-kernel) > |-- kernel.configs > | `-- config.i586 (was in ci-linux/ci-kernel) > |-- kernel.patches (was ci-linux/3rd-party) > | |-- common/ > | `-- i386/ > `-- specs > |-- cluster-tools.spec (was in binary/rh7.3) > `-- kernel.spec > > > This is how I propose laying out the OpenSSI repository: > > openssi > |-- AUTHORS > |-- COPYING > |-- ChangeLog > |-- Makefile (rules for rolling releases) > |-- Makefile.am (rules for building and installing) > |-- NEWS > |-- README > |-- configure.ac > |-- devfsd/ (was in cluster-tools/ssi) > |-- distro-pkgs (distro-specific packages) > | |-- debian (?? not sure about Debian pkg names ??) > | | |-- init-scripts (was cluster-tools/debian-init-scripts) > | | | |-- S01devfsd > | | | |-- S10checkroot.sh > | | | |-- S35mountall.sh > | | | |-- hostname.sh > | | | |-- install-deb-initscripts (was in cluster-tools) install-deb-initscripts is not needed this job is now done by openssi_cluster_create. > | | | `-- rcSSI > | | `-- specs > | | `-- init-scripts.spec I guess init-script.spec is going to be script which will help in building debian packages. 
> | `-- redhat > | `-- specs/ > |-- doc > | |-- INSTALL (was in binary/rh7.3) > | |-- INSTALL.cvs > | |-- INSTALL.gfs > | |-- INSTALL.ia64 > | |-- INSTALL.pxe (was in cluster-tools) > | |-- INSTALL.source (was ssic-linux/doc/INSTALL) > | |-- README-CI > | |-- README-mosixll > | |-- README.cfs > | |-- README.hardmounts > | `-- cdsl > |-- ipvsadm/ (was cluster-tools/ha-lvs) > |-- kernel/ (was ssic-linux/ssi-kernel) > |-- kernel.configs > | |-- config.alpha (was in ssic-linux/ssi-kernel) > | |-- config.i586 (was in ssic-linux/ssi-kernel) > | |-- config.ia64 (was in ssic-linux/ssi-kernel) > | |-- config.ia642 (was in ssic-linux/ssi-kernel) > | |-- config.uml (was in ssic-linux/ssi-kernel) > | `-- kernel-2.4.18-i586-ssi.config (was in binary/rh7.3) > |-- kernel.patches (was ssic-linux/3rd-party) > | |-- common/ > | |-- i386/ > | `-- ia64/ > |-- nfs-utils/ (patched by cluster-tools/ssi/nfslock.patch) > |-- openssi-tools > | |-- Makefile.am > | |-- arch > | | |-- Makefile.am > | | |-- alpha > | | | |-- Makefile.am > | | | `-- ssi_arch.pm (was cluster-tools/ssi/ssi_alpha.pm) > | | |-- i386 > | | | |-- Makefile.am > | | | `-- ssi_arch.pm (was cluster-tools/ssi/ssi_i386.pm) > | | |-- ia64 > | | | |-- Makefile.am > | | | `-- ssi_arch.pm (was cluster-tools/ssi/ssi_ia64.pm) > | | `-- uml > | | | |-- Makefile.am > | | |-- clustertab (was in cluster-tools/ssi) > | | `-- fstab (was in cluster-tools/ssi) > | |-- distro > | | |-- Makefile.am > | | |-- debian > | | | |-- Makefile.am > | | | |-- inittab.ssi (was cluster-tools/ssi/inittab.ssi.debian) > | | | `-- ssi_distro.pm (was cluster-tools/ssi/ssi_debian.pm) > | | `-- redhat > | | |-- Makefile.am > | | |-- inittab.ssi (was cluster-tools/ssi/inittab.ssi.redhat) > | | `-- ssi_distro.pm (was cluster-tools/ssi/ssi_redhat.pm) > | |-- fs > | | |-- Makefile.am > | | |-- cfs_mount.c (was in cluster-tools/cmd) > | | |-- cfs_remount.c (was in cluster-tools/cmd) > | | |-- cfs_setroot.c (was in cluster-tools/cmd) > | | |-- ckroot.ssi (was in cluster-tools/cmd) > | | |-- cmount.c (was in cluster-tools/cmd) > | | |-- gfscf.cf (was in cluster-tools/ssi) > | | |-- mount_remote_root.c (was in cluster-tools/cmd) > | | |-- pool0.cf (was in cluster-tools/ssi) > | | `-- pool0cidev.cf (was in cluster-tools/ssi) > | |-- init.ssi/ (was cluster-tools/ssi/init) > | |-- initscripts > | | |-- Makefile.am > | | |-- rc.modules (was in cluster-tools/ssi) > | | |-- rc.nodedown (was in cluster-tools/ssi) > | | |-- rc.nodeup (was in cluster-tools/ssi) > | | |-- rc.sysinit.nodeup (was in cluster-tools/ssi) > | | `-- rc.sysrecover (was in cluster-tools/ssi) > | |-- keepalive/ (was in cluster-tools/ssi) > | |-- man > | | |-- Makefile.am > | | |-- keepalive.1M.html (was in cluster-tools/man) > | | `-- spawndaemon.1M.html (was in cluster-tools/man) > | |-- net > | | |-- Makefile.am > | | |-- node_hostname.c (was in cluster-tools/cmd) > | | `-- setport_weight.c (was in cluster-tools/cmd) > | |-- proc > | | |-- Makefile.am > | | |-- load_level_com.h (was in cluster-tools/cmd) > | | |-- loadlevel.c (was in cluster-tools/cmd) > | | |-- loads.c (was in cluster-tools/cmd) > | | |-- migrate.c (was in cluster-tools/ssi/migrate) > | | |-- onall.c (was in cluster-tools/cmd) > | | |-- onnode.c (was in cluster-tools/cmd) > | | `-- where_pid.c (was in cluster-tools/cmd) > | `-- sysadmin > | |-- Clustertab.pm (was in cluster-tools/ssi) > | |-- Makefile.am > | |-- addnode (was in cluster-tools/ssi) > | |-- addnode.dev (was in cluster-tools/ssi) > | Can't we drop addnode and addnode.dev and use 
openssi_cluster_create and openssi_addnode > |-- addnode.pm (was in cluster-tools/ssi) > | |-- chnode (was in cluster-tools/ssi) > | |-- cluster_lilo (was in cluster-tools/ssi) > | |-- cluster_mkinitrd (was in cluster-tools/ssi) > | |-- mkdhcpd.conf (was in cluster-tools/ssi) > | |-- openssi_addnode (was in cluster-tools/ssi) > | `-- openssi_cluster_create (was in cluster-tools/ssi) > |-- specs > | |-- devfsd.spec > | |-- ipvsadm.spec > | |-- kernel.spec (was binary/rh7.3/kernel-2.4.spec) > | |-- nfs-utils.spec > | |-- openssi-tools.spec > | `-- util-linux.spec > `-- util-linux/ (was in cluster-tools/ssi) > > > Let me know what you think, Great!!!!. -aneesh |
From: Brian J. W. <Bri...@hp...> - 2003-05-21 21:38:12
|
To the CI and OpenSSI communities- The CI and OpenSSI user-mode code has been thrown together in an ad hoc fashion since the beginning of these projects. It has been lumped together into one RPM, source release, and CVS module that supports two projects, two Linux distros, and three architectures. Its structure is inconsistent and confusing to novice users and developers, forbids the replacement of commands owned by base RPMs, and encumbers the production of releases. I propose the following changes to clean-up things: Binary release changes ---------------------- * roughly separate cluster-tools into its CI and OpenSSI components * roll just the CI part as the cluster-tools RPM * spin off the OpenSSI part as an openssi-tools RPM and a set of enhanced base RPMs, such as mount (util-linux), devfsd, nfs-utils, and ipvsadm (ha-lvs) * include kernel-openssi, cluster-tools, openssi-tools, the enhanced base packages, documentation, and a simple installation script in a new OpenSSI binary release * eventually roll a CI binary release containing kernel-ci and cluster-tools * libcluster should retain OpenSSI support in the reduced cluster-tools, since it's not worth the effort to separate this code Source release changes ---------------------- * combine the stripped-down cluster-tools with the CI kernel code to make a new CI source release * continue to include combined CI and OpenSSI code in OpenSSI releases * eliminate Cluster Tools as a separate source release * organize the CI and OpenSSI source trees to reflect the binary packages * organize the large number of commands for openssi-tools into functional groups CVS repository changes ---------------------- * reorganize the repository to match the new source trees * adjust the system of Makefiles to reflect the new source organization * exclusively adopt Aneesh's GNU build system, so that only one set of Makefiles needs to be reorg'd * use a consistent system for supporting multiple distributions and architectures * write scripts for automatically producing source and binary releases This is how I propose laying out the CI repository: ci |-- AUTHORS (was in cluster-tools) |-- COPYING |-- ChangeLog |-- Makefile (rules for rolling releases) |-- Makefile.am (rules for building and installing) |-- NEWS (was in cluster-tools) |-- README |-- cluster-tools | |-- Makefile.am | |-- cmd/ (moved some commands to openssi-tools) | |-- libcluster/ | `-- man/ (moved some pages to openssi-tools) |-- configure.ac (was in cluster-tools) |-- doc/ |-- kernel/ (was ci-linux/ci-kernel) |-- kernel.configs | `-- config.i586 (was in ci-linux/ci-kernel) |-- kernel.patches (was ci-linux/3rd-party) | |-- common/ | `-- i386/ `-- specs |-- cluster-tools.spec (was in binary/rh7.3) `-- kernel.spec This is how I propose laying out the OpenSSI repository: openssi |-- AUTHORS |-- COPYING |-- ChangeLog |-- Makefile (rules for rolling releases) |-- Makefile.am (rules for building and installing) |-- NEWS |-- README |-- configure.ac |-- devfsd/ (was in cluster-tools/ssi) |-- distro-pkgs (distro-specific packages) | |-- debian (?? not sure about Debian pkg names ??) 
| | |-- init-scripts (was cluster-tools/debian-init-scripts) | | | |-- S01devfsd | | | |-- S10checkroot.sh | | | |-- S35mountall.sh | | | |-- hostname.sh | | | |-- install-deb-initscripts (was in cluster-tools) | | | `-- rcSSI | | `-- specs | | `-- init-scripts.spec | `-- redhat | `-- specs/ |-- doc | |-- INSTALL (was in binary/rh7.3) | |-- INSTALL.cvs | |-- INSTALL.gfs | |-- INSTALL.ia64 | |-- INSTALL.pxe (was in cluster-tools) | |-- INSTALL.source (was ssic-linux/doc/INSTALL) | |-- README-CI | |-- README-mosixll | |-- README.cfs | |-- README.hardmounts | `-- cdsl |-- ipvsadm/ (was cluster-tools/ha-lvs) |-- kernel/ (was ssic-linux/ssi-kernel) |-- kernel.configs | |-- config.alpha (was in ssic-linux/ssi-kernel) | |-- config.i586 (was in ssic-linux/ssi-kernel) | |-- config.ia64 (was in ssic-linux/ssi-kernel) | |-- config.ia642 (was in ssic-linux/ssi-kernel) | |-- config.uml (was in ssic-linux/ssi-kernel) | `-- kernel-2.4.18-i586-ssi.config (was in binary/rh7.3) |-- kernel.patches (was ssic-linux/3rd-party) | |-- common/ | |-- i386/ | `-- ia64/ |-- nfs-utils/ (patched by cluster-tools/ssi/nfslock.patch) |-- openssi-tools | |-- Makefile.am | |-- arch | | |-- Makefile.am | | |-- alpha | | | |-- Makefile.am | | | `-- ssi_arch.pm (was cluster-tools/ssi/ssi_alpha.pm) | | |-- i386 | | | |-- Makefile.am | | | `-- ssi_arch.pm (was cluster-tools/ssi/ssi_i386.pm) | | |-- ia64 | | | |-- Makefile.am | | | `-- ssi_arch.pm (was cluster-tools/ssi/ssi_ia64.pm) | | `-- uml | | | |-- Makefile.am | | |-- clustertab (was in cluster-tools/ssi) | | `-- fstab (was in cluster-tools/ssi) | |-- distro | | |-- Makefile.am | | |-- debian | | | |-- Makefile.am | | | |-- inittab.ssi (was cluster-tools/ssi/inittab.ssi.debian) | | | `-- ssi_distro.pm (was cluster-tools/ssi/ssi_debian.pm) | | `-- redhat | | |-- Makefile.am | | |-- inittab.ssi (was cluster-tools/ssi/inittab.ssi.redhat) | | `-- ssi_distro.pm (was cluster-tools/ssi/ssi_redhat.pm) | |-- fs | | |-- Makefile.am | | |-- cfs_mount.c (was in cluster-tools/cmd) | | |-- cfs_remount.c (was in cluster-tools/cmd) | | |-- cfs_setroot.c (was in cluster-tools/cmd) | | |-- ckroot.ssi (was in cluster-tools/cmd) | | |-- cmount.c (was in cluster-tools/cmd) | | |-- gfscf.cf (was in cluster-tools/ssi) | | |-- mount_remote_root.c (was in cluster-tools/cmd) | | |-- pool0.cf (was in cluster-tools/ssi) | | `-- pool0cidev.cf (was in cluster-tools/ssi) | |-- init.ssi/ (was cluster-tools/ssi/init) | |-- initscripts | | |-- Makefile.am | | |-- rc.modules (was in cluster-tools/ssi) | | |-- rc.nodedown (was in cluster-tools/ssi) | | |-- rc.nodeup (was in cluster-tools/ssi) | | |-- rc.sysinit.nodeup (was in cluster-tools/ssi) | | `-- rc.sysrecover (was in cluster-tools/ssi) | |-- keepalive/ (was in cluster-tools/ssi) | |-- man | | |-- Makefile.am | | |-- keepalive.1M.html (was in cluster-tools/man) | | `-- spawndaemon.1M.html (was in cluster-tools/man) | |-- net | | |-- Makefile.am | | |-- node_hostname.c (was in cluster-tools/cmd) | | `-- setport_weight.c (was in cluster-tools/cmd) | |-- proc | | |-- Makefile.am | | |-- load_level_com.h (was in cluster-tools/cmd) | | |-- loadlevel.c (was in cluster-tools/cmd) | | |-- loads.c (was in cluster-tools/cmd) | | |-- migrate.c (was in cluster-tools/ssi/migrate) | | |-- onall.c (was in cluster-tools/cmd) | | |-- onnode.c (was in cluster-tools/cmd) | | `-- where_pid.c (was in cluster-tools/cmd) | `-- sysadmin | |-- Clustertab.pm (was in cluster-tools/ssi) | |-- Makefile.am | |-- addnode (was in cluster-tools/ssi) | |-- addnode.dev (was in 
cluster-tools/ssi) | |-- addnode.pm (was in cluster-tools/ssi) | |-- chnode (was in cluster-tools/ssi) | |-- cluster_lilo (was in cluster-tools/ssi) | |-- cluster_mkinitrd (was in cluster-tools/ssi) | |-- mkdhcpd.conf (was in cluster-tools/ssi) | |-- openssi_addnode (was in cluster-tools/ssi) | `-- openssi_cluster_create (was in cluster-tools/ssi) |-- specs | |-- devfsd.spec | |-- ipvsadm.spec | |-- kernel.spec (was binary/rh7.3/kernel-2.4.spec) | |-- nfs-utils.spec | |-- openssi-tools.spec | `-- util-linux.spec `-- util-linux/ (was in cluster-tools/ssi) Let me know what you think, Brian |
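The "rules for rolling releases" in the proposed top-level Makefile are not spelled out above, so as a rough, purely illustrative sketch (the tag, version, and CVSROOT below are placeholders, not real values), such a rule could boil down to something like:

    #!/bin/sh
    # Hypothetical release-rolling sketch; tag, version, and CVSROOT are made up.
    VERSION=0.8.0
    TAG=v0_8_0
    CVSROOT=:pserver:anonymous@cvs.example.org:/cvsroot/ci
    cvs -d"$CVSROOT" export -r "$TAG" -d "ci-$VERSION" ci   # pristine tree, no CVS/ dirs
    tar czf "ci-$VERSION.tar.gz" "ci-$VERSION"              # the source release tarball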
From: Aneesh K. K.V <ane...@di...> - 2003-04-15 03:04:10
|
On Tue, 2003-04-15 at 02:19, Brian J. Watson wrote: > Russ Gritzo wrote: > > I am trying out the CI software to see if it will work in a losely > > coupled cluster setup I am building. Using > > ci-linux-2.4.18-v0.7.6 and cluster-tools-0.7.6. > > > > [snip] > > > > Installed the cluster-tools ssi components spawndaemon and keepalive. > > Sorry for the long delay. The ssi components of cluster-tools (including > spawndaemon and keepalive) are only intended for use with the OpenSSI > kernel (openssi.org). Both CI and OpenSSI are maintained by the same > group of developers. > > Unfortunately, there are also a few commands outside the ssi/ directory > which are only intended for use with OpenSSI (e.g., loadlevel, loads, > onall, onnode, etc.). > > Only libcluster and the cluster* commands (and maybe ha-lvs, Aneesh?) > are meant to be used with CI. > No, ha-lvs needs a cluster-wide root (/etc) for finding config files. It also uses /etc/lvs.VIP.active to inform other nodes of the master director node for a CVIP. It uses the vproc chan for registering the cluster service. I have also kept the files under linux/cluster/ssi/net, so one will also have to play with the Makefiles to get it built. That makes it a bit difficult. -aneesh |
From: Brian J. W. <Bri...@hp...> - 2003-04-14 20:51:30
|
Russ Gritzo wrote: > I am trying out the CI software to see if it will work in a losely > coupled cluster setup I am building. Using > ci-linux-2.4.18-v0.7.6 and cluster-tools-0.7.6. > > [snip] > > Installed the cluster-tools ssi components spawndaemon and keepalive. Sorry for the long delay. The ssi components of cluster-tools (including spawndaemon and keepalive) are only intended for use with the OpenSSI kernel (openssi.org). Both CI and OpenSSI are maintained by the same group of developers. Unfortunately, there are also a few commands outside the ssi/ directory which are only intended for use with OpenSSI (e.g., loadlevel, loads, onall, onnode, etc.). Only libcluster and the cluster* commands (and maybe ha-lvs, Aneesh?) are meant to be used with CI. Sorry about the confusion, Brian |
From: Mike Y. <my...@wi...> - 2003-03-17 23:25:43
|
Has anyone gotten kernel v2.4.18 successfully patched with both XFS v1.1 and with SSI v0.95 support? Thanks, Mike |
From: Russ G. <gr...@la...> - 2003-03-06 00:46:43
|
Sorry for more confusion, but I am having some trouble getting some things working. I am trying out the CI software to see if it will work in a loosely coupled cluster setup I am building. Using ci-linux-2.4.18-v0.7.6 and cluster-tools-0.7.6. I have two nodes (for now), identical; both have fresh RH7.3 installs and fresh 2.4.18 patched kernels. The CI kernel patches are installed, and seem to be working. cluster -V gives:

[root@b root]# cluster -V
Node 1:
State: UP
Previous state: COMINGUP
Reason for last transition: API
Last transition ID: 4
Last transition time: Wed Mar 5 16:49:45.654512 2003
First transition ID: 3
First transition time: Wed Mar 5 16:49:45.604512 2003
Number of CPUs: 1
Number of CPUs online: 1
Node 2:
State: UP
Previous state: COMINGUP
Reason for last transition: API
Last transition ID: 2
Last transition time: Wed Mar 5 16:49:37.174512 2003
First transition ID: 1
First transition time: Wed Mar 5 16:49:37.104512 2003
Number of CPUs: 1
Number of CPUs online: 1
[root@b root]#

Installed the cluster-tools ssi components spawndaemon and keepalive. Here come the questions: At first, the /dev/keepalivecfg pipe was not created as part of the install, so I made it by hand. Also, I had to add the keepalive section into the inittab by hand as well. Did I miss something in the install? Now after a reboot, the CI stuff seems to be working ok, but an attempt to use spawndaemon leaves the following log entry:

Mar 5 17:06:00 b spawndaemon[1051]: spawndaemon: Could not open pipe /dev/keepalivecfg. Keepalive is not active. Retrying ...

I have to find the running keepalive and kill it, let init restart it, and then it seems to be able to open the pipe. Is this a problem in how I am starting keepalive? And finally, does keepalive run on each node, or only once on the cluster? Currently I am running keepalive on each node. If I run spawndaemon on node 1 and try to register a daemon to run on node 2, the keepalive on 1 registers a failure to start the daemon, but it does indeed start, just on node 1. Node 2 seems oblivious to the whole thing. Hmmmm.

[root@b log]# spawndaemon -L -v human
keepalive running: TRUE
quiesce flag: FALSE
pid: 1058
node number: -1
registered processes: 0
table size: 200
max. possible processes: 200
polling: FALSE
polling interval: 5
primary node: None
secondary node: None
[root@b log]#

It seems incorrect that the node number is -1.... Have I missed something fundamental in the configuration, or am I off in some other variable space? TIA, r. |
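The two manual steps described above look roughly like this (the inittab id "ka" and the /sbin/keepalive path are assumptions, not taken from the actual cluster-tools install):

    # Rough sketch of the manual fixes described above; the inittab id and
    # the keepalive path are assumed, not confirmed by the install scripts.
    mkfifo /dev/keepalivecfg                                 # the named pipe spawndaemon opens
    chmod 600 /dev/keepalivecfg
    echo 'ka:2345:respawn:/sbin/keepalive' >> /etc/inittab   # respawn keepalive from init
    telinit q                                                # make init re-read inittab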
From: Brian J. W. <Bri...@hp...> - 2003-02-14 19:10:32
|
Resending this with a sensible subject line. -BW "Brian J. Watson" wrote: > > Frank Mayhar wrote: > > > > eli...@ao... wrote: > > [ Spam deleted ] > > > > Maybe this should be a members-only list? Looks like the address has been > > harvested. > > It would be nice to completely block spammers from this list, but I > think Bruce would like to keep the list open to users who don't want to > subscribe. I discovered that when I make it a members-only list, non-members aren't blocked -- they're merely moderated. Bruce and I are okay with this. I've made ci-linux-devel, ssic-linux-devel, and ssic-linux-users into "members-only" lists. Anyone can still post, but non-members will be delayed until I have a chance to approve them. -- Brian Watson OpenSSI Clustering project OpenSSI.org ------------------------------------------------------- This SF.NET email is sponsored by: FREE SSL Guide from Thawte are you planning your Web Server Security? Click here to get a FREE Thawte SSL guide and find the answers to all your SSL security issues. http://ads.sourceforge.net/cgi-bin/redirect.pl?thaw0026en _______________________________________________ Ssic-linux-users mailing list Ssi...@li... https://lists.sourceforge.net/lists/listinfo/ssic-linux-users |
From: Brian J. W. <Bri...@hp...> - 2003-02-14 02:17:55
|
"Brian J. Watson" wrote: > > Frank Mayhar wrote: > > > > eli...@ao... wrote: > > [ Spam deleted ] > > > > Maybe this should be a members-only list? Looks like the address has been > > harvested. > > It would be nice to completely block spammers from this list, but I > think Bruce would like to keep the list open to users who don't want to > subscribe. I discovered that when I make it a members-only list, non-members aren't blocked -- they're merely moderated. Bruce and I are okay with this. I've made ci-linux-devel, ssic-linux-devel, and ssic-linux-users into "members-only" lists. Anyone can still post, but non-members will be delayed until I have a chance to approve them. -- Brian Watson OpenSSI Clustering project OpenSSI.org |
From: Brian J. W. <Bri...@hp...> - 2003-02-14 00:58:13
|
Frank Mayhar wrote: > > eli...@ao... wrote: > [ Spam deleted ] > > Maybe this should be a members-only list? Looks like the address has been > harvested. It would be nice to completely block spammers from this list, but I think Bruce would like to keep the list open to users who don't want to subscribe. Brian |
From: <san...@ao...> - 2003-02-10 11:11:13
|
[ Spam deleted ] |
From: <ann...@ao...> - 2003-02-06 06:20:30
|
[ Spam deleted ] |
From: Frank M. <fr...@ex...> - 2003-02-05 16:02:03
|
eli...@ao... wrote: [ Spam deleted ] Maybe this should be a members-only list? Looks like the address has been harvested. -- Frank Mayhar fr...@ex... http://www.exit.com/ Exit Consulting http://www.gpsclock.com/ |
From: <eli...@ao...> - 2003-02-05 12:00:43
|
[ Spam deleted ] |
From: donald o. <do...@ho...> - 2003-02-04 14:14:01
|
[ Spam deleted ] |
From: Brian J. W. <Bri...@hp...> - 2002-12-24 20:38:16
|
"Brian J. Watson" wrote: > > Cluster signal numbers have been moved to resolve a conflict with > libpthread. Cluster system call numbers have been moved to avoid numbers > claimed by the Red Hat kernel. For these reasons, you must upgrade both > CI and Cluster Tools to 0.7.6. If you're using SSI, you must also > upgrade it to 0.7.6. > > Cluster Tools also includes an experimental GNU-style build system done > by Aneesh Kumar. It will eventually replace the existing build system. As always, you can find these releases at ci-linux.sf.net. -Brian |
From: Brian J. W. <Bri...@hp...> - 2002-12-24 20:36:53
|
Cluster signal numbers have been moved to resolve a conflict with libpthread. Cluster system call numbers have been moved to avoid numbers claimed by the Red Hat kernel. For these reasons, you must upgrade both CI and Cluster Tools to 0.7.6. If you're using SSI, you must also upgrade it to 0.7.6. Cluster Tools also includes an experimental GNU-style build system done by Aneesh Kumar. It will eventually replace the existing build system. -Brian |
From: Brian J. W. <Bri...@hp...> - 2002-12-14 02:38:18
|
"Aneesh Kumar K.V" wrote: > > Hi, > > The released tar ball of CI doesn't work for openDLM. Are you planning > for a new release ? I just posted a 0.7.5 release of CI based on the latest top-of-tree. It's available on the web page and the SF project page. I'll announce on Freshmeat and update the web documentation on Monday. -Brian |
From: Brian J. W. <Bri...@hp...> - 2002-12-13 02:11:14
|
> I guess we need to have fstab.ssi . Otherwise user is going to be more > confused. My thinking is that the user may not know about fstab.ssi. They'll make a change to fstab and be frustrated that their change isn't having any effect. I have a similar problem with my cluster_lilo command. It should be called lilo, and behave like the base lilo when running on a non-ssi kernel. > I am attaching below a small perl script that will auto > generate the needed fstab.ssi. Run it on the first node. Cool! Feel free to check it in with your util-linux stuff so people can play with it. Also, can you briefly document our changes to fstab and how to use the script to automate the changes. Feel free to check that description in, as well. Thanks, Brian |
From: Aneesh K. K.V <ane...@di...> - 2002-12-09 09:06:12
|
On Thu, 2002-12-05 at 06:35, Brian J. Watson wrote: > > Also there > > is some difference in the option supported by swapon ( -e ) between what > > i found in redhat and the mount code base( from debian ). So redhat > > startup script may need a close check. > <snip> > > Also, I think we want to have the commands use /etc/fstab, not > /etc/fstab.ssi. That would be less surprising to a new user. IIUC, no > modifications need to be made to /etc/fstab for the first node. Perhaps > the original /etc/fstab can be saved when addnode or chnode is first > used to add entries for other nodes. It can then be restored if > mount-ssi and e2fsprogs-ssi are uninstalled. > I guess we need to have fstab.ssi . Otherwise user is going to be more confused. I am attaching below a small perl script that will auto generate the needed fstab.ssi. Run it on the first node. One can also import the already existing fstab( new fstab with entires node=x will be auto generated ). The output file name is taken as the argument. If one is not importing already existing fstab ( which will be case for other nodes ) the script will ask certain set of questions depending on the hardware configuration and file system supported. ( I am reading this from /proc ). Hope the script will be useful. -aneesh #! /usr/bin/perl -w # # # Date : Dec 09 2002 # Authors: Aneesh Kumar K.V ( ane...@di... ) # # # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License as # published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE # or NON INFRINGEMENT. See the GNU General Public License for more # details. # # use strict; sub generate_fstab_ssi { my($node_num,$fstab,$fstab_out) = @_; my($input_line); my(@line_entry); my($i); open(INFILE,$fstab) || die ("Unable to open the file $fstab"); open(OUTFILE,">>$fstab_out") || die ("Unable to open the file $fstab_out"); while(! eof(INFILE) ) { $input_line = <INFILE>; @line_entry = split(/[\t ]+/ ,$input_line); if ($line_entry[0] =~ /^#/ ) { print OUTFILE $input_line; next ; } if (@line_entry != 6 ) { next; } $i = 0; while( $i < @line_entry ) { if( $i == 3) { if ($line_entry[2] =~ /proc/ || $line_entry[2] =~ /devfs/ ) { print OUTFILE $line_entry[$i].",node=*\t"; } else { print OUTFILE $line_entry[$i].",node=$node_num\t"; } } else { print OUTFILE "$line_entry[$i]\t"; } $i++; } print OUTFILE "\n"; } close(INFILE); close(OUTFILE); } sub partition_print { my(@ex_list) = @_; my($input_line); my(@line_entry); my($ex); open(PROC_PARTITION,"/proc/partitions") || die( "/proc/partitions not found"); # Skip the starting line $input_line = <PROC_PARTITION>; print "\nAvailable partions are:\n"; read_proc: while(! eof(PROC_PARTITION) ) { $input_line = <PROC_PARTITION>; @line_entry = split(/[\t ]+/ ,$input_line); if (@line_entry == 5) { chop($line_entry[4]); if( $line_entry[4] =~ /[0-9]$/ ) { foreach $ex (@ex_list) { if( $ex eq $line_entry[4]) { next read_proc; } } print "$line_entry[4]\n"; } } } close(PROC_PARTITION); } sub filesystem_print { my($input_line); my(@line_entry); open(PROC_FILESYSTEM,"/proc/filesystems") || die( "/proc/filesystems not found"); print "\nAvailable filesystems are:\n"; print "swap\n"; while(! 
eof(PROC_FILESYSTEM) ) { $input_line = <PROC_FILESYSTEM>; @line_entry = split(/[\t ]+/ ,$input_line); if ($line_entry[0] ne "nodev" ) { print @line_entry; } } close(PROC_FILESYSTEM); } sub write_fstab_entry { my($partition_name,$file_system,$node_num,$fstab_out)=@_; my($mount_point,$create_mount); open(OUTFILE,">>$fstab_out") || die ( "Unable to open $fstab_out"); if( $file_system eq "swap" ) { print OUTFILE "/dev/$partition_name\tnone\t$file_system\t"; print OUTFILE "sw,node=$node_num\t\t0\t0\n"; }else { print "Enter the mount point for /dev/$partition_name :"; $mount_point = <STDIN>; chop($mount_point); unless ( -d $mount_point ) { print "\n"; print "$mount_point not found "; print "Shall I create it ?[y/n] :"; $create_mount = <STDIN>; chop($create_mount); if ( $create_mount eq "y") { mkdir($mount_point,0755); }else { print "Mount point not created !!!!\n"; } } print OUTFILE "/dev/$partition_name\t$mount_point\t$file_system\t"; print OUTFILE "defaults,node=$node_num\t\t0\t2\n"; } close (OUTFILE); } sub usage() { print "\n"; print "Usage:\n"; print "perl $0 <out_file_name>"; print "\n"; } sub main { my($node_num); my($import_fstab,@ex_list,$partition_name,$more,$file_system); $node_num = `/sbin/clusternode_num`; chop($node_num); # Anything other than digit if ($node_num =~ /\D/) { die(" This Program need to run on a SSI cluster!!!!"); } if (@ARGV < 1 ) { usage(); exit(); } print "Cluster fstab generation script\n"; if ( -e "/etc/fstab" ){ print "Found /etc/fstab Do you want to import[y/n]:"; $import_fstab = <STDIN>; chop($import_fstab); if ( $import_fstab eq "y" ) { generate_fstab_ssi($node_num,"/etc/fstab",$ARGV[0]); return; } } while (1) { partition_print(@ex_list); print "Select a partition : "; $partition_name = <STDIN>; chop($partition_name); filesystem_print(); print "Select the file system on "; print "the partion /dev/$partition_name :"; $file_system =<STDIN>; chop($file_system); write_fstab_entry($partition_name,$file_system, $node_num,$ARGV[0]); @ex_list = (@ex_list,$partition_name); print "Add another entry[y/n]:"; $more = <STDIN>; chop($more); if ($more ne "y" ) { last; } } } main |
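To illustrate what the script writes, here is a hand-made example for node 1; the device names are invented, and only the ",node=" options and field layout come from the script itself:

    # Hand-made illustration of entries the script above emits for node 1.
    # Device names are made up; swap gets "sw,node=N", proc/devfs get "node=*".
    echo '/dev/hda1  /boot  ext3  defaults,node=1  0 2'  >> /etc/fstab.ssi
    echo '/dev/hda3  none   swap  sw,node=1        0 0'  >> /etc/fstab.ssi
    echo 'none       /proc  proc  defaults,node=*  0 0'  >> /etc/fstab.ssi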
From: <isa...@ao...> - 2002-12-05 14:36:31
|
[ Spam deleted ] |
From: Sigurd U. <sig...@li...> - 2002-12-04 13:59:48
|
"Aneesh Kumar K.V" <ane...@di...> writes: > On Wed, 2002-12-04 at 05:47, Brian J. Watson wrote: > > SSI-specific commands that might be run before /usr is mounted, such as > > cmount, should not be installed under /usr. They should be installed in > > /bin or /sbin. In the case of cmount, it should be installed in /bin, > > just like the mount command. > > > > Here is the confusion. I am yet figure out a method by which i can > install some binaries in /usr/bin and some in /bin. What i found is i > can have only two type of binaries sbin_PROGRAMS and bin_PROGRAMS. now > /usr/sbin is nothing but sbin with prefix /usr. and prefix is something > that is global to the cluster-tools. Any how i will experiment and try > to figure out some way. ( I don't want to do it in install-local ) > > Till we solve the above can we say that all the cluster-tools binaries > will go to either /bin or /sbin ( that is with prefix=/ ) Shouldn't that be solveable by just extending the possible destinationcategories from sbin_PROGRAMS and bin_PROGRAMS to also include usr_sbin_PROGRAMS and usr_bin_PROGRAMS? I believe the LVM-people install to both / and /usr directories (they had an relase that only put things into /usr, which broke my system quite spectacularily when I had /usr on LVM (who was first, the egg or the hen?). Maybe there is something to gain by takeing a look at their Makefiles? > I will make cmount installable at /bin. goodie:) -sig -- Sigurd Urdahl sig...@li... Systemkonsulent | Systems consultant www.linpro.no LIN PRO can improve the health of people who consume the eggs, meat and milk [..] (http://www.werneragra.com/linpro.html) |
From: Aneesh K. K.V <ane...@di...> - 2002-12-04 03:06:46
|
On Wed, 2002-12-04 at 05:47, Brian J. Watson wrote: > > If we have a separate /usr partition the use of /usr/sbin/chroot in > > linuxrc will fail. How do we fix this ? > > Back in the days of NonStop Clusters for UnixWare, /usr had to be part > of the root file system. I can't remember the precise reason why, but I > doubt we need to have the same restriction on Linux. > > To solve the chroot problem, we could just copy it into /bin of the > initrd and run it out of there. > But after pivot_root my root will change, and /bin in the initrd is now /initrd/bin/. > SSI-specific commands that might be run before /usr is mounted, such as > cmount, should not be installed under /usr. They should be installed in > /bin or /sbin. In the case of cmount, it should be installed in /bin, > just like the mount command. > Here is the confusion. I have yet to figure out a method by which I can install some binaries in /usr/bin and some in /bin. What I found is that I can have only two types of binaries: sbin_PROGRAMS and bin_PROGRAMS. Now /usr/sbin is nothing but sbin with prefix /usr, and prefix is something that is global to cluster-tools. Anyhow, I will experiment and try to figure out some way. (I don't want to do it in install-local.) Till we solve the above, can we say that all the cluster-tools binaries will go to either /bin or /sbin (that is, with prefix=/)? I will make cmount installable at /bin. > > I have fixed some of the path issues in the CVS by removing the path and > > just specifying the binary . ie instead of /usr/sbin/cmount i used > > cmount. If it is not advisable to do the above please feel free to > > modify. > > It's not a problem as long as the PATH variable is guaranteed to contain > the command's path. Just to be safe, the PATH variable should be set at > the top of a script if any commands outside of /bin are being called. > Now that it is going to be in /bin, I guess it is OK? > BTW, does your new Cluster Tools build system support installing into a > UML root image? I remember there were a few special things that needed > to be done, like installing a pre-written /etc/clustertab. > I guess $DESTDIR should help with that. I have taken care to install all the local installation (scripts and conf files) under $DESTDIR/$prefix, so setting DESTDIR to the UML root should work, but I haven't tested the above. For installing clustertab one needs to run configre_cluster. I haven't made it a part of the cluster-tools installation. > -Brian > > -aneesh |
From: Brian J. W. <Bri...@hp...> - 2002-12-04 00:22:30
|
> If we have a separate /usr partition the use of /usr/sbin/chroot in > linuxrc will fail. How do we fix this ? Back in the days of NonStop Clusters for UnixWare, /usr had to be part of the root file system. I can't remember the precise reason why, but I doubt we need to have the same restriction on Linux. To solve the chroot problem, we could just copy it into /bin of the initrd and run it out of there. SSI-specific commands that might be run before /usr is mounted, such as cmount, should not be installed under /usr. They should be installed in /bin or /sbin. In the case of cmount, it should be installed in /bin, just like the mount command. > I have fixed some of the path issues in the CVS by removing the path and > just specifying the binary . ie instead of /usr/sbin/cmount i used > cmount. If it is not advisable to do the above please feel free to > modify. It's not a problem as long as the PATH variable is guaranteed to contain the command's path. Just to be safe, the PATH variable should be set at the top of a script if any commands outside of /bin are being called. BTW, does your new Cluster Tools build system support installing into a UML root image? I remember there were a few special things that needed to be done, like installing a pre-written /etc/clustertab. -Brian |
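A rough sketch of the two ideas above (the $INITRD_ROOT variable and the paths are assumptions, not taken from the actual cluster_mkinitrd or linuxrc):

    # At initrd build time: put chroot in the initrd's own /bin so linuxrc
    # does not depend on /usr being mounted after pivot_root.
    cp /usr/sbin/chroot "$INITRD_ROOT/bin/chroot"

    # At the top of any boot-time script that calls commands outside /bin:
    PATH=/bin:/sbin:/usr/bin:/usr/sbin
    export PATH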
From: Aneesh K. K.V <ane...@di...> - 2002-11-29 03:40:45
|
Hi, If we have a separate /usr partition, the use of /usr/sbin/chroot in linuxrc will fail. How do we fix this? I have fixed some of the path issues in the CVS by removing the path and just specifying the binary, i.e. instead of /usr/sbin/cmount I used cmount. If it is not advisable to do the above, please feel free to modify. -aneesh |
From: Aneesh K. K.V <ane...@di...> - 2002-11-29 02:55:09
|
Hi, BTW edit the /etc/rcSSI.d/S35mountall.sh to make use of /sbin/cmount. I am not doing a checkin of this path breakage ( and similar others ) because it is going to break the old cluster-tools installation. In my opinion right now very few are using the new build scripts for cluster-tools ( me and Sigurd ) So we can live with editing these files by giving the leisure to others to just do a make install_ssi_redhat :) Dropping of existing Makefile and correcting of such path will be later done in one checkin. -aneesh On Fri, 2002-11-29 at 08:18, Aneesh Kumar K.V wrote: > Hi, > > That cmount is installed by the old cluster-tools script. IF you try to > build the CVS version cluster-tools with the new build script all those > file will be installed as I said under $(DESTDIR)/$(prefix)/sbin/ and by > default the prefix is /usr/local/. ( ./configure --help ) So you may > need to configure it with a prefix / . ie ./configure --prefix=/ > > > First step would be to get the released cluster-tools. Do a make > uninstall so the files installed by the old cluster-tools are removed > and then use the CVS version with the new build script. I haven't tested > the initrd building with with the new location of the files. > > -aneesh > > > On Thu, 2002-11-28 at 20:39, Sigurd Urdahl wrote: > > "Aneesh Kumar K.V" <ane...@di...> writes: > > > > > I have made some changes wrt the location where binaries are installed. > > > Please take a look at cmd/Makefile.am . All the files are now installed > > > in $(DESTDIR)/$(prefix)/sbin/. > > > > > > Please let me know if you run into any of the issues ? > > > > It seem cmount still is installed into /usr/sbin. > > > > I believe it shouldn't be since it is used quite early on in the boot > > process. If I remember correctly from last night it is used in > > S35mountall.sh at least on Debian. I believe you have no guarantee > > that /usr is available until after that part of the boot.. (my test > > system have /usr on LVM and is stuck with the mountall-script failing > > till I attack it with a rescue disk later on tonight):) > > > > regards, > > -sig > > > > > > -- > > Sigurd Urdahl sig...@li... > > Systemkonsulent | Systems consultant www.linpro.no > > LIN PRO can improve the health of people who consume the eggs, > > meat and milk [..] (http://www.werneragra.com/linpro.html) > |
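Pulling the steps quoted above together, the switch-over to the new build system looks roughly like this (the directory names are assumptions):

    # Sketch of the switch-over described above; directory names are made up.
    cd cluster-tools-0.7.6        # released tree with the old build system
    make uninstall                # remove the files the old Makefiles installed
    cd ../cluster-tools           # CVS checkout with the new GNU build system
    ./configure --prefix=/        # so cmount and friends land in /bin and /sbin
    make && make install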