From: <ne...@bl...> - 2002-09-18 15:32:14
|
Dear IT reseller and supplier, We are on the lookout for new partners such as you. Whether you want to buy or sell, BlueCom Danmark A/S is worth closer acquaintance. We have found you and your company through our Internet research, and hope to establish a fruitful cooperation with you in the future. However, we apologize if you are not the correct person to contact within your company, and kindly request that you forward this message to that person. Further, if this letter is of no interest to you and your company, please press the "unsubscribe" button. Click here: http://www.bluecom.com/unsubscribe/ Would you like the best prices on the market, plus 24-hour delivery and an incredible 30 days' credit? Then you might like to become a reseller partner. Besides delivering at the very best prices, we offer real-time updated prices through the Internet. We are able to offer the absolute best prices on the market because of our large-scale procurement of a small number of specific products. Our range includes products from IBM, Compaq and HP options such as notebooks, PCs and monitors. We also offer PC parts such as RAM, CPUs, VGA cards, motherboards, wireless products, original mobile phones and accessories, hard disks, DVD, CD-RW and TFT screens from the following top companies: ASUS, ECS, ABIT, CREATIVE, INTEL, AMD, U.S. ROBOTICS, LG, PLEXTOR, BELKIN, BENQ, SAMSUNG, IBM SOFTWARE and others. We are also on the lookout for new suppliers. BlueCom Danmark A/S keeps its suppliers constantly updated on products in demand, the specific volumes on request, and of course the target prices. The easiest way to start cooperating with BlueCom Danmark A/S is through our user-friendly website. If any of the features we offer are of interest to your company, please click on the link and our cooperation has already begun. Click here: http://www.bluecom.com/oem-pre.cfm Click here: http://www.bluecom.com/scm/start.cfm? BlueCom Danmark A/S is a worldwide IT distributor of PC systems and add-ons. Since its foundation in 1992 the company has enjoyed constant growth, and is always on the lookout for new partners. As an added bonus we offer you up-to-date channel IT news relevant to both resellers and suppliers. We produce a newsletter with articles covering the changes in our industry, new products, tariff rates and general trends in the IT market. The newsletter also contains information about BlueCom Danmark A/S and the development of its business partners. Click here: http://www.bluecom.com/rootfiles/it_news.cfm If you would like more information about BlueCom Danmark A/S, please visit our homepage at www.bluecom.com or contact us via e-mail or telephone. Thanks for your time. We look forward to hearing from you. Best regards BlueCom Danmark A/S Peter Garber Director of Sales And Charlotte E. Larsen Purchasing Manager |
From: Brian J. W. <Bri...@hp...> - 2002-09-03 19:03:35
|
The biggest feature enhancement is coherent cluster-wide shared mapped file support through CFS. This allows programs such as RPM and keepalive to work with a CFS root. Another enhancement is the addition of a few more remote socket ops that allow accept(), getsockname(), and getpeername() to be called on a socket that is remote from its process. Also, --bind mounts are now completely local, which is a quick-and-dirty way to allow context-dependent symlinks (CDSLs). True CDSLs will be available sometime in the future. This release includes fixes for various build problems. It also fixes some problems with large numbers of nodes joining simultaneously. ICS and CLMS logging has been improved, and a bunch of other bugs have been squashed. Included is a new version of Cluster Tools. It changes the HA LVS configuration to use an XML format. It makes init get the maxnodes value from the kernel, so that init doesn't need to be recompiled for a larger cluster. Also, init has been fixed so that it doesn't SIGSEGV if it's built with DEBUG enabled. The build scripts no longer depend on PATH containing anything other than /bin. The cluster_lilo command now properly supports lilo's -R option. Overall this release improves on 0.7.0 by fixing bugs and adding a few new cool features. -- Brian Watson | "Now I don't know, but I been told it's Software Developer | hard to run with the weight of gold, Open SSI Clustering Project | Other hand I heard it said, it's Hewlett-Packard Company | just as hard with the weight of lead." | -Robert Hunter, 1970 mailto:Bri...@hp... http://opensource.compaq.com/ |
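As an illustration of the local --bind mount behavior mentioned in the release notes above, the sketch below shows how a node-specific directory could be bind-mounted over a shared path during node startup. It is only a sketch: the /var/run-<N> directory layout is an assumption, although /usr/sbin/clusternode_num is the helper used elsewhere on this list for exactly this purpose.

    # Sketch: approximate a context-dependent symlink with a node-local bind mount.
    # Assumes per-node directories /var/run-<N> already exist on the shared root.
    NODE=`/usr/sbin/clusternode_num`
    mount --bind /var/run-$NODE /var/run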
From: David B. Z. <dav...@hp...> - 2002-08-30 01:49:26
|
Aneesh Kumar K.V wrote: > On Thu, 2002-08-29 at 07:56, Brian J. Watson wrote: > >>I'm going to crank out a 0.7.1 release of SSI and Cluster Tools >>tomorrow. Here are the proposed CHANGES. If you have any other issues >>you'd like resolved in this release, or you have a correction to the >>CHANGES, please let me know before tomorrow afternoon (Pacific time). >> >>-Brian > > > I have posted a new cfs_mount and cfs_swapon. Anybody have any comments > on that? It helps in making fstab cluster-wide. > > > -aneesh > Bruce, Brian and I have gone over your design for a cluster-wide fstab. We've agreed to the /etc/fstab.ssi design, but we have a couple of changes. In addition to mount and swapon, we want the fsck and dump commands to also look at fstab.ssi when running with an SSI kernel. All these commands must operate exactly as they did before if the kernel is NOT SSI (look at /etc/fstab). This allows us to replace the base commands with our modified versions and still boot the base Linux kernel. We want to save the original base command as *.orig during installation.

In the options part of fstab.ssi we want to add a new option "node=#". This field, when not specified, defaults to node=1. The node=# option is parsed by my_getmntent(), and if the node # matches the current node number the entry is returned, but "node=#" is removed from the option string. We don't want node=# to end up in /etc/mtab or to be seen by filesystem-specific code; it controls my_getmntent() operations instead of looking at device names. If "node=*" is specified then the entry is always returned by my_getmntent().

Instead of hard-coding the MS_CFS flag setting based on specific filesystems, we want to add a new file with the following format:

fstype    cluster defaults
ext2      cfs
gfs       parallel
ext3      cfs

The mount command will read this file and, depending on the filesystem type, will automatically set the appropriate flags. For now "cfs" will be the only option that does anything, by setting MS_CFS. The "parallel" option can be accepted, but doesn't do anything internally for now. We haven't specified the filename for this file.

During installation /etc/fstab should be copied to /etc/fstab.ssi with additional comments at the top about what is expected in a cluster environment. Because of the defaults, minimal or no modifications to /etc/fstab.ssi are required at this point. When running addnode the user is prompted for a swap device name, which is appended to /etc/fstab.ssi for that node to swapon when booting. In order to support filesystems on dependent nodes in the new fstab.ssi, we need to modify rc.sysinit.nodeup to perform fsck and mount operations using the newly developed commands, which read the node=# entries of fstab.ssi during dependent node boot. -- David B. Zafman | Hewlett-Packard Company Linux Kernel Developer | Open SSI Clustering Project mailto:dav...@hp... | http://www.hp.com "Thus spake the master programmer: When you have learned to snatch the error code from the trap frame, it will be time for you to leave." |
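To make the proposed format concrete, here is a sketch of what an /etc/fstab.ssi might look like under this design. The devices and mount points are invented for illustration; only the node=#/node=* option and the node=1 default come from the message above.

    # /etc/fstab.ssi (sketch; devices and mount points are placeholders)
    # <device>     <mount point>  <type>    <options>         <dump> <pass>
    /dev/sda1      /              ext2      defaults          1 1
    /dev/sda2      swap           swap      defaults,node=1   0 0
    /dev/sdb2      swap           swap      defaults,node=2   0 0
    /dev/cdrom     /mnt/cdrom     iso9660   noauto,ro,node=*  0 0

With the defaults described above, my_getmntent() would return the root entry (implicitly node=1) only on node 1, each swap entry only on the matching node, and the node=* entry on every node, stripping "node=#" from the option string before it reaches /etc/mtab.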
From: Aneesh K. K.V <ane...@di...> - 2002-08-29 10:29:10
|
On Thu, 2002-08-29 at 07:56, Brian J. Watson wrote: > I'm going to crank out a 0.7.1 release of SSI and Cluster Tools > tomorrow. Here are the proposed CHANGES. If you have any other issues > you'd like resolved in this release, or you have a correction to the > CHANGES, please let me know before tomorrow afternoon (Pacific time). > > -Brian I have posted a new cfs_mount and cfs_swapon. Anybody have any comments on that? It helps in making fstab cluster-wide. -aneesh |
From: Brian J. W. <Bri...@hp...> - 2002-08-29 02:31:15
|
I'm going to crank out a 0.7.1 release of SSI and Cluster Tools tomorrow. Here are the proposed CHANGES. If you have any other issues you'd like resolved in this release, or you have a correction to the CHANGES, please let me know before tomorrow afternoon (Pacific time). -Brian

ssi-linux-2.4.16-v0.7.1
=======================
- Added the accept, getname, and release remote socket ops (Brian Watson)
- Fixed the deadlock when reading /proc/<pid>/pin (Aneesh Kumar)
- Coherent cluster-wide shared mapped file support (David Zafman)
- Added BKL to rssidev_ioctl(). (Locking ASSERTs fail without it.) (John Byrne)
- Fixed dangling pointer in surrogate origin handling. (John Byrne)
- Fixed ssidevfs code not to go berserk when large numbers of nodes join simultaneously. (John Byrne)
- Fixed bug in clms_client_startup_node() where incorrect lengths were being passed to ICS. (John Byrne)
- Added CLMS logging for clms_api_rw_lock (John Byrne)
- Modified low-level ICS not to require memory allocations (John Byrne)
- Fixed an inode syncing problem in CFS (David Zafman)
- Made --bind mounts completely local (David Zafman)
- Suppressed some compiler warnings (John Byrne)
- Fixed a problem with duplicate PIDs in /proc (John Byrne)
- Resolved various bugs in the load-leveler (Laura Ramirez)
- Corrected a panic with message queues (Laura Ramirez)
- Fixed a problem with building a kernel without the load-leveler (Aneesh Kumar)
- Fixed a problem with building for Alpha (Aneesh Kumar)
- Miscellaneous bugfixes

cluster-tools-0.7.1
===================
- Changed the HA LVS configuration to XML format. (Aneesh Kumar)
- Changed cluster_lilo to distribute the -R option better. (John Byrne)
- Fixed init not to SIGSEGV if built for DEBUG. (John Byrne)
- Fixed the build scripts to not depend on PATH containing anything other than /bin (Aneesh Kumar)
- Improved ICS logging (John Byrne)
- Eliminated memory allocations from low-level ICS code (John Byrne)
- Made init get maxnodes from the kernel, rather than hardcode it. (John Byrne)
- Miscellaneous bugfixes |
From: Brian J. W. <Bri...@hp...> - 2002-08-22 23:00:26
|
By doing so, you make releases easier. I won't have to scan through all the checkin messages to figure out what all's changed since the last release. -- Brian Watson | "Now I don't know, but I been told it's Software Developer | hard to run with the weight of gold, Open SSI Clustering Project | Other hand I heard it said, it's Hewlett-Packard Company | just as hard with the weight of lead." | -Robert Hunter, 1970 mailto:Bri...@hp... http://opensource.compaq.com/ |
From: Brian J. W. <Bri...@hp...> - 2002-08-19 19:31:32
|
"Aneesh Kumar K.V" wrote: > Added Files: > ssi/rc.modules ssi/rc.nodedown ssi/rc.nodeup > ssi/rc.sysinit.nodeup > These files are distribution specific so moved them under ssi directory > Removed Files: > cmd/rc.modules cmd/rc.nodedown cmd/rc.nodeup > cmd/rc.sysinit.nodeup > These files are distribution specific so moved them under ssi directory Aneesh- The problem with moving files this way is that we lose history. IMHO, it's better to request that SourceForge copy (not move) the RCS files from the cmd/ directory to the ssi/ directory. You can do that through this URL: https://sourceforge.net/tracker/?func=add&group_id=1&atid=200001 Once they do that, I'll remove the 'dead' status of the copies in the ssi/ directory. You can try doing it yourself using the 'cvs admin -o' command, but it might require administrator access. -- Brian Watson | "Now I don't know, but I been told it's Software Developer | hard to run with the weight of gold, Open SSI Clustering Project | Other hand I heard it said, it's Hewlett-Packard Company | just as hard with the weight of lead." | -Robert Hunter, 1970 mailto:Bri...@hp... http://opensource.compaq.com/ |
From: Brian J. W. <Bri...@hp...> - 2002-08-09 03:37:40
|
Spending a week on the beach without an Internet connection. Somehow I'll survive. ;) Be back on August 19th. -- Brian Watson | "Now I don't know, but I been told it's Software Developer | hard to run with the weight of gold, Open SSI Clustering Project | Other hand I heard it said, it's Hewlett-Packard Company | just as hard with the weight of lead." | -Robert Hunter, 1970 mailto:Bri...@hp... http://opensource.compaq.com/ |
From: Brian J. W. <Bri...@hp...> - 2002-07-31 21:30:28
|
"Aneesh Kumar K.V" wrote: > Can we list down what we can expect for the next release > I am putting what i hope to see. > > (1) HA CFS It'll be non-HA CFS for now. Dave's working on the HA part, but it'll take some time. > (2) load leveling enhancement by laura I think that'll be in there. > (3) CI and SSI IA64 port. The CI code hasn't been tested by itself for awhile. Is there demand out there for another release of CI any time soon? > (4) devfs bug fix to enable mount to work. My impression is that it's more of a feature enhancement than a bug fix, so it won't be in the next release. John? Another enhancement for the next release is Dave's distributed mounting, so that a mount done on one node is seen on all nodes. He also stripped-down the nodeup script, so that dependent nodes come up faster. Something I've been working on is remote socket ops, so that bind(), connect(), send(), recv(), etc. can be done by a process that is remote from its socket. I've gotten a bit hung up on accept(), because it has the added wrinkle of creating a new socket. I've also been sidetracked by preparation for the LinuxWorld demo, so I'm not sure if remote socket ops will be ready in time for the next release. > I bet this release is going to get wide attention :) I hope so. :) > I would also like to bring dlm under /usr/src/linux/cluster directory. > That makes it a core part of SSI. What do you think, Bruce? -- Brian Watson | "Now I don't know, but I been told it's Software Developer | hard to run with the weight of gold, Open SSI Clustering Project | Other hand I heard it said, it's Hewlett-Packard Company | just as hard with the weight of lead." | -Robert Hunter, 1970 mailto:Bri...@hp... http://opensource.compaq.com/ |
From: Aneesh K. K.V <ane...@di...> - 2002-07-27 07:56:53
|
Hi, Congrats... So one more architecture added to the list. Wow! On Sat, 2002-07-27 at 07:37, John Byrne wrote: > > CI is not done, but it shouldn't take much work to finish. (I'm not in > any hurry.) Load-levelling won't build; since the load-levelling code > uses floating point and this is a problem on most architectures, the plan > is to modify it to use scaled integer arithmetic. I had some discussion with Moshe Bar about the same issue, because it is disabled for Alpha as well. He suggested either waiting for the new load-leveling algorithm he is working on or moving to scaled integers. In any case, the openMosix team is working on an IA64 port; maybe that will be helpful to us. But if it takes them too long to come up with the port, I guess we should work towards it ourselves. > > I've checked out fresh and built SSI i386 and booted it (CFS). I'm > building UML. (I don't have a UML test setup at the moment, so I don't > plan to test it.) I'm building ia64 and will test that. I'll let you all > know later how they both do. > > Aneesh, if I've broken alpha (and I probably have), I'm sorry. > I will test this part, but I need some time, maybe one week. Is that OK? (Right now I am a bit busy.) Can we list what we can expect for the next release? I am putting down what I hope to see: (1) HA CFS (2) load-leveling enhancement by Laura (3) CI and SSI IA64 port (4) devfs bug fix to enable mount to work. I bet this release is going to get wide attention :) I would also like to bring DLM under the /usr/src/linux/cluster directory. That makes it a core part of SSI. IBM's DLM team seems to be not working on DLM any more; either they think it is stable, or ??. Anyhow, bringing DLM in as a subpart of SSI needs considerable modification to DLM, and I am not sure whether it is worth doing. It also involves moving the DLM user-level libraries to cluster-tools and removing dlmdu (or making it just load DLM and then exit). Once we make DLM a part of SSI, I guess one can use it without fear :). Once we are done with that, the next step would be to make OpenGFS work with DLM. Any comments? -aneesh |
From: John B. <joh...@hp...> - 2002-07-27 02:13:13
|
CI is not done, but it shouldn't take much work to finish. (I'm not in any hurry.) Load-levelling won't build; since the load-levelling code uses floating point and this is a problem on most architectures, the plan is to modify it to use scaled integer arithmetic. I've checked out fresh and built SSI i386 and booted it (CFS). I'm building UML. (I don't have a UML test setup at the moment, so I don't plan to test it.) I'm building ia64 and will test that. I'll let you all know later how they both do. Aneesh, if I've broken alpha (and I probably have), I'm sorry. John |
From: John B. <joh...@hp...> - 2002-07-26 03:09:17
|
John Byrne wrote: > > However, since most of the imports are in the ia64 arch directories, I > don't believe it will affect anyone directly. The two files that aren't, > I should be safe. This will be getting cleaned up and I'll try again. > > Sorry, > > John Sorry for the raging incoherence (I'm more tired than I thought, which is the reason I screwed up). What I meant to say is that the repository should be okay to use at this moment. John |
From: John B. <joh...@hp...> - 2002-07-26 02:53:26
|
However, since most of the imports are in the ia64 arch directories, I don't believe it will affect anyone directly. The two files that aren't, I should be safe. This will be getting cleaned up and I'll try again. Sorry, John |
From: Aneesh K. K.V <ane...@di...> - 2002-07-25 13:29:05
|
Hi, On Thu, 2002-07-25 at 18:42, Harry Heinisch wrote: > On Thursday 25 July 2002 07:24, Aneesh Kumar K.V wrote: > > > > > Hi, > > > > Recently i was trying to build the latest Linux kernel on alpha but > > to be very sad found that none of the kernel versions (2.5.x ) actually > > compile on alpha. A look at the linux kernel mailing list showed many > > patches by different people. It is a chaotic state where one doesn't > > know what to pick to get a kernel that will build and work. > > alphalinux.org is gone and linuxalpha.org latest kernel link is broken. > > Most of the new features added to the Linux kernel like preemption etc > > and not at all there for Alpha( atleast i didn't find any patches ). > Patches from IVAN seem to be a good place to start. Jeff Wiedemeier > and Jay Estabrook are working on getting 2.5 patched with stuff Jeff has > produced to make a buildable 2.5 kernel. But still the problem of someone taking responsibility remains, as does the question of where to find these patches. Searching the linux kernel mailing list for Alpha-specific patches is a pain. Is there any chance of uploading these patches to an ftp site (I mean a location similar to the Red Hat ISOs for Alpha)? Again, who is going to make sure that the patches are accepted by Linus? I guess Jay Estabrook is the right candidate for pushing all these patches to Linus :) > > > > > > I was wondering whether there is any plan from HP/Compaq side to > > maintain the Alpha Linux kernel. One release for Alpha for every 5/10 > > releases of Main kernel will be sufficient. This person will/can also be > > in charge of sending patches to Linus so that one can be sure at least > > some version of main line kernel will build on alpha.He can also work > > with the rest of community interested in alpha Linux so that one knows > > whom to contact to know about the status of alpha linux. > > > > > > any comments ? > > > > > > -aneesh > |
From: puablinux4oems <jgp...@si...> - 2002-07-23 19:14:03
|
Industrial LINUX News: The June issue of the ISA's InTech Magazine has an interesting article on how truly open Linux applications can lower development cost and increase the performance and reliability of industrial automation. A copy of the article can be found at: http://www.sixnet-io.com/html_files/web_articles/linux_article_info.htm This Linux news update brought to you by: www.Linux4oems.info ------------------------------------------------------------------------------ If you don't want to receive future Linux news updates, please reply to this e-mail with the subject "unsubscribe". You may also unsubscribe or resolve subscription difficulties by calling SIXNET at 518-877-5173 or e-mailing: lin...@si... . |
From: Alexandre C. <ale...@ca...> - 2002-06-19 11:53:58
|
> Hi Aneesh, > > >I have put the files at > >http://ssic-linux.sourceforge.net/contrib/ha-lvs.tar.gz . > > > >README.networking is also a part of the above tar.gz . If anybody need > >README.networking I can do a separate check-in of that file alone > > ??? why not using directly Keepalived instead of backporting code to > another tool ? since we have together added support to CI/SSI ? > > I don't understand :/ > > regards, > Alexandre Re, below is an example of a Keepalived config for use with SSI:

! Configuration File for keepalived
! CI-LINUX configuration sample
global_defs {
    notification_email {
        lvs...@do...
    }
    notification_email_from em...@do...
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    lvs_id CI-LNX
}
virtual_server 192.168.200.10 80 {
    delay_loop 10
    lb_algo wrr
    lb_kind DR
    protocol TCP
    sorry_server 192.168.200.200 80
    real_server 192.168.200.2 80 {
        weight 1
        CI-LINUX
    }
    real_server 192.168.200.3 80 {
        weight 1
        CI-LINUX
    }
    real_server 192.168.200.4 80 {
        weight 1
        CI-LINUX
    }
    real_server 192.168.200.5 80 {
        weight 1
        CI-LINUX
    }
}

Regards, Alexandre |
From: Alexandre C. <Ale...@wa...> - 2002-06-13 21:10:22
|
Hi all, I have just published the new code. The work was mainly done on the VRRP synchronization instance policy to speed up takeover when using many instances. The ChangeLog for the release is:

2002-06-13 Alexandre Cassen <ac...@li...>
* keepalived-0.6.1 released.
* Aneesh Kumar, <ane...@di...> and I added support for Cluster Infrastructure checkers, providing HA-LVS for their cluster project (http://ci-linux.sourceforge.net/). The new checker provides a derivation of the internal CI healthcheck mechanism.
* Enhanced the kernel netlink reflector to drive global healthchecker activity. The policy implemented here is: if a healthchecker is performing a test on a service that belongs to a VIP not owned by the director, then the healthchecker is suspended. This suspend/active state is particularly useful if running VRRP for HA => that way the backup LVS will not load the realserver pool, since the LVS VIP is owned by the master LVS.
* Cosmetic patches to the vector lib.
* VRRP: Rewrote the previous VRRP synchronization instance policy. Created a new config block called "vrrp_sync_group" that defines VRRP instance synchronization dependences. That way we replace the previous "by-pair" sync approach with this "by-group" approach. This can be useful for firewall HA with many NICs. Created a dedicated framework to speed up takeover synchronization.
* VRRP: Added support for CIDR notation in VRRP VIP definitions => VRRP VIP definitions like a.b.c.d/e. By default the "e" value is set to 32.
* VRRP: Added support for multicast source IP address selection => "mcast_src_ip" keyword. Can be useful for strongly filtered environments. The mcast group subscription is done using the NIC default IP; after that, mcast_src_ip is used if specified.
* VRRP: Enhanced the link media failure detection. Added support for the new kernel SIOCETHTOOL probing for the ETHTOOL_GLINK command. New drivers use this ETHTOOL interface to report link failure activity. During bootstrap a probe is done to determine the proper polling method to use for link media failure detection. The policy used is: probe for SIOCGMIIREG; if not supported then try the SIOCETHTOOL GLINK probe, otherwise use an ioctl SIOCGIFFLAGS polling function mirroring kernel NIC flags to a locally reflected representation.
* Ramon Kagan, <rk...@Yo...> and I updated the UserGuide.pdf.

All comments are welcome, Best regards, Alexandre |
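As an illustration of the new "vrrp_sync_group" block together with the "mcast_src_ip" keyword and CIDR VIP notation mentioned in the ChangeLog above, the sketch below shows how they might be combined. The instance names, interfaces, router IDs, and addresses are invented, and the exact inner syntax may differ in this keepalived release; the shipped UserGuide.pdf is the authoritative reference.

    vrrp_sync_group VG1 {
        group {
            VI_1
            VI_2
        }
    }
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        mcast_src_ip 192.168.200.1
        virtual_ipaddress {
            192.168.200.10/24
        }
    }
    vrrp_instance VI_2 {
        state MASTER
        interface eth1
        virtual_router_id 52
        priority 100
        virtual_ipaddress {
            10.0.0.10/24
        }
    }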
From: Brian W. <bri...@ya...> - 2002-06-13 19:04:51
|
The SSI BoF is set for tonight (Thursday). As I said in a previous e-mail, CI related stuff is also appropriate for this BoF. Here's the info: Time: 6-7PM Place: San Diego room of the Monterey Marriott Moderator: Bruce Walker Sorry for the late notice. Hope to see some of you there! -- Brian Watson HP Software Developer Single System Image Clustering Project __________________________________________________ Do You Yahoo!? Yahoo! - Official partner of 2002 FIFA World Cup http://fifaworldcup.yahoo.com |
From: Greg F. <fre...@No...> - 2002-06-12 16:02:00
|
>> Thought I would lob this directly at you. Have you looked at either the >> clusternfs server or the vf patches to the kernel. These allow an SSI >> and already work very well. I have been using them at home for a few >> months as a work related study project. >> At work, I am creating a proposal on an path toward more linear cluster >> size scalability. To that end, I have built a prototype at home and am >> in the process of working out details and difficulties. The method >> involves some proprietary hardware and inter-connect methods. >> Thanks, >> Robin Holt Robin, First, I have copied the SSI clustering and CI devel lists on this response. I hope you don't mind. I thought they might find your comments about clusternfs and the vf patches of interest, as well as your comments about scalability. === No, I have not looked into clusternfs, nor the vf patches. Maybe the SSI Linux team will consider one of them as an alternative to OpenGFS, which does not currently seem to be getting any developer support from what I can see. It is the total SSI project that I find very interesting, not any single component (i.e. shared root), and I must say I am more of an active lurker on that project than a participant. If you take a look at the "projects" section of http://ssic-linux.sourceforge.net/, you will see that the goals for that project are well beyond a common root FS. (BTW, this section was just updated over the weekend. The features section is still 6 months out of date, but they are supposed to be updating it shortly, as well as putting up some recommended configurations.) In the last couple of months they have added the ability to simulate an SSI cluster by running several UML partitions on a single box. I am thinking seriously about setting up an SSI/UML simulation in my lab. From what I gather, the SSI project is very close to being usable for people willing to live on the edge. In my case I have a cpu-intense app that I am thinking of rewriting to take advantage of the SSI project's handling of named pipes. As to your own project, you may want to look at the CI (Cluster Infrastructure) Linux project http://ci-linux.sourceforge.net/. Possibly it could help you directly out of the box, or you might be able to leverage some of the existing code. It has the same license as the linux kernel, so it should be usable by you if the linux kernel is. Scalability is currently in the 10-50 range. They hope to get that into the hundreds next year. CI Linux seems to be fairly stable from what I have seen. Unfortunately, their webpage is out of date as well, and at least a few of the "open" projects have now been done. FYI: The SSI project and the CI project are very closely related. The CI project is designed to be a cluster-neutral set of low-level clustering tools. The SSI project uses the CI project for its low-level needs. A couple of significant CI capabilities that are not listed on the webpage: 1) Provides a cluster-wide PID. IIRC, it does this by extending the PID to 64 bits and using the high-order bits as a node identifier. 2) LVS patches are available to let it use CI for HA purposes. The LVS add-on project keepalived is actually used to do this. 3) DLM patches are available to let it use CI for HA purposes. This might actually be a SSI related feature, but I think it is CI. 
Greg Freemyer Internet Engineer Deployment and Integration Specialist Compaq ASE - Tru64 Compaq Master ASE - SAN Architect The Norcross Group www.NorcrossGroup.com |
From: Brian J. W. <Bri...@hp...> - 2002-06-12 00:43:49
|
"Brian J. Watson" wrote: > > Just wanted to mention that Bruce and I will be at the Usenix Technical > Conference this week (Thu.-Sat. in Monterey, CA). Hopefully we'll see > some of you there. I just sent in a request to set up a Single System Image Clustering BoF for Thursday night. I asked for 6PM, but we'll see what we get. When I find out the time and place I'll send an e-mail out on the mailing lists. Discussion related to CI would also be welcome at this BoF. -- Brian Watson | "Now I don't know, but I been told it's Software Developer | hard to run with the weight of gold, Open SSI Clustering Project | Other hand I heard it said, it's Hewlett-Packard Company | just as hard with the weight of lead." | -Robert Hunter, 1970 mailto:Bri...@hp... http://opensource.compaq.com/ |
From: Brian J. W. <Bri...@hp...> - 2002-06-11 21:45:44
|
Just wanted to mention that Bruce and I will be at the Usenix Technical Conference this week (Thu.-Sat. in Monterey, CA). Hopefully we'll see some of you there. -- Brian Watson | "Now I don't know, but I been told it's Software Developer | hard to run with the weight of gold, Open SSI Clustering Project | Other hand I heard it said, it's Hewlett-Packard Company | just as hard with the weight of lead." | -Robert Hunter, 1970 mailto:Bri...@hp... http://opensource.compaq.com/ |
From: Aneesh K. K.V <ane...@di...> - 2002-06-11 09:17:17
|
Hi, In that case the service will be started by its own script, not by run levels, I guess. That means we won't have an /etc/rc3.d/Sxxserver_name file. -aneesh On Tue, 2002-06-11 at 14:43, John Hughes wrote: > > I guess we need to make sure that each node sees different /var/run/ > > directory so that we can start these servers on all the nodes at the > > same time without modifying any server code. > > And how do we deal with servers that need to run only one copy on > the cluster? |
From: John H. <Jo...@Ca...> - 2002-06-11 09:13:58
|
> I guess we need to make sure that each node sees different /var/run/ > directory so that we can start these servers on all the nodes at the > same time without modifying any server code. And how do we deal with servers that need to run only one copy on the cluster? |
From: Aneesh K. K.V <ane...@di...> - 2002-06-11 06:04:16
|
Hi, OK, that sounds like a good idea. For a cluster we will provide an upgrade to the initscripts package on Red Hat and the sysvinit package on Debian (maybe initscripts-cluster and sysvinit-cluster). We can also ask the user to install this as part of creating the cluster. Regarding /var/run/service_name.pid: the general logic used by many of the xinetd services is to see whether service_name.pid exists. If it does, read the PID and do kill(pid,0) to see if the service is really running; if so, exit, and if not, recreate the service_name.pid file with the new PID entry. In our case, with clusterwide signaling we will be able to signal any application running on other nodes. Now if I start a service that is multi-instance (that is, running on all the nodes, maybe a load-balanced web server), we need to make sure that the server on node1 and the server on node2 read different service_name.pid files, or else change the logic explained above (which I guess is going to be a tough job). I guess we need to make sure that each node sees a different /var/run/ directory so that we can start these servers on all the nodes at the same time without modifying any server code. -aneesh On Mon, 2002-06-10 at 23:23, David B. Zafman wrote: > > Below is something I wrote up last week, but was waiting for Bruce to > comment on it before sending it out. Now that I see what you've done > with /etc/init.d/clusterinit, I thought I'd send this out. I will > examine what you've done today. > > Last week I did something similiar. I wasn't as concerned with the > dependent node networking, but I wanted to replace rc.sysinit for > dependent nodes only. I copied the redhat rc.sysinit to > rc.sysinit.nodeup and removed all the things which the dependent nodes > should not be duplicating. I also removed the execution of rc for run > level 3 from rc.nodeup. Keep in mind that only the first booting node > runs rc.sysinit just like base linux. Since only dependent nodes run > rc.nodeup, only the dependent nodes run rc.sysinit.nodeup. > > --------- > > You've brought up an important architectural issue. Once there is a > single root it requires clusterization to have duplicate services > running. One way to clusterize things is adding context dependent links > (i.e. /var/run as you proposed for the *.pid files). > > The current set-up of having rc.nodeup call rc.sysinit then running > complete rc 3 runlevel processing was fine when we had a non-shared > root. Now with CFS and GFS we really need to NOT do this. Looking at > rc.sysinit on a redhat install, I see that it does all sorts of stuff > which should NOT be done again on a joining node in the shared-root case. > > In a cluster there would generally be two kinds of services. The first > kind is a single instance of the service (single process or set of > processes on one node) running with keepalive to restart it on node > failures. The second kind is the service that is cluster aware, so that > processes could exist on multiple nodes, but they cooperate with each > other. In non-stop clusters we parallelized inetd, for example. It > maintained processes on all nodes, and kept a list of pids which it > updated as nodes came and went. > > The whole /var/run/service_name.pid mechanism I would propose is only > used for non-cluster aware serives which are restricted to running on > the root node, but may be restarted on node failure. It is assumed that > to restart the service we might have to remove the .pid file and then on > (re)start the service would create the file again with the new pid 
> > > Aneesh Kumar K.V wrote: > > > Hi, > > > > I guess we need to have node specific /var/run directory also. > > > > Otherwise on debian some sevices may not come up on node2. They check > > /var/run/service_name.pid file to see whether the service is already > > running or not. > > > > That make it for debian /etc/init.d/rcS add these lines before doing > > the for loop show below > > > > # > > # Cluster specific remounts. > > # > > # > > mount --bind /etc/network-`/usr/sbin/clusternode_num` /etc/network > > mount --bind /run-`/usr/sbin/clusternode_num` /var/run > > > > # > > # Call all parts in order. > > # > > for i in /etc/rcS.d/S??* > > > > > > -aneesh > -- > David B. Zafman | Hewlett-Packard Company > Linux Kernel Developer | Open SSI Clustering Project > mailto:dav...@hp... | http://www.hp.com > "Thus spake the master programmer: When you have learned to snatch > the error code from the trap frame, it will be time for you to leave." |
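The pidfile convention discussed in this thread can be summarized with the small shell sketch below. It only illustrates the check described in the message above; the service name and pidfile path are placeholders, not taken from any real init script in the project.

    # Sketch: the /var/run/<service>.pid check described above.
    PIDFILE=/var/run/myservice.pid    # "myservice" is a hypothetical name
    if [ -f "$PIDFILE" ]; then
        pid=`cat "$PIDFILE"`
        # kill -0 sends no signal; it only tests whether the process exists
        if kill -0 "$pid" 2>/dev/null; then
            echo "myservice already running (pid $pid)" >&2
            exit 0
        fi
    fi
    echo $$ > "$PIDFILE"    # (re)create the pidfile with the new pid

With a per-node bind mount of /var/run (as in the quoted rcS fragment), each node sees its own pidfile and the same script can run unmodified on every node.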
From: Greg F. <fre...@No...> - 2002-06-10 20:05:06
|
David, If you guys have any political clout, you might want to support Robin Holt's proposal to Red Hat for rc.sysinit restructuring. I just quoted her/his summarization e-mail in a response to Aneesh. The basic idea is to restructure rc.sysinit in a way similar to how rc*.d are done. If it were in place, it would be relatively easy to patch it to support an overall cluster sysinit.d that only gets invoked on the first booting node and a separate sysinit.nodeup.d that contains scripts that get invoked for each node that comes up. Greg Freemyer Internet Engineer Deployment and Integration Specialist Compaq ASE - Tru64 Compaq Master ASE - SAN Architect The Norcross Group www.NorcrossGroup.com >> Below is something I wrote up last week, but was waiting for Bruce to >> comment on it before sending it out. Now that I see what you've done >> with /etc/init.d/clusterinit, I thought I'd send this out. I will >> examine what you've done today. >> Last week I did something similiar. I wasn't as concerned with the >> dependent node networking, but I wanted to replace rc.sysinit for >> dependent nodes only. I copied the redhat rc.sysinit to >> rc.sysinit.nodeup and removed all the things which the dependent nodes >> should not be duplicating. I also removed the execution of rc for run >> level 3 from rc.nodeup. Keep in mind that only the first booting node >> runs rc.sysinit just like base linux. Since only dependent nodes run >> rc.nodeup, only the dependent nodes run rc.sysinit.nodeup. >> --------- >> You've brought up an important architectural issue. Once there is a >> single root it requires clusterization to have duplicate services >> running. One way to clusterize things is adding context dependent links >> (i.e. /var/run as you proposed for the *.pid files). >> The current set-up of having rc.nodeup call rc.sysinit then running >> complete rc 3 runlevel processing was fine when we had a non-shared >> root. Now with CFS and GFS we really need to NOT do this. Looking at >> rc.sysinit on a redhat install, I see that it does all sorts of stuff >> which should NOT be done again on a joining node in the shared-root case. >> In a cluster there would generally be two kinds of services. The first >> kind is a single instance of the service (single process or set of >> processes on one node) running with keepalive to restart it on node >> failures. The second kind is the service that is cluster aware, so that >> processes could exist on multiple nodes, but they cooperate with each >> other. In non-stop clusters we parallelized inetd, for example. It >> maintained processes on all nodes, and kept a list of pids which it >> updated as nodes came and went. >> The whole /var/run/service_name.pid mechanism I would propose is only >> used for non-cluster aware serives which are restricted to running on >> the root node, but may be restarted on node failure. It is assumed that >> to restart the service we might have to remove the .pid file and then on >> (re)start the service would create the file again with the new pid. >> Aneesh Kumar K.V wrote: >> > Hi, >> > >> > I guess we need to have node specific /var/run directory also. >> > >> > Otherwise on debian some sevices may not come up on node2. They check >> > /var/run/service_name.pid file to see whether the service is already >> > running or not. >> > >> > That make it for debian /etc/init.d/rcS add these lines before doing >> > the for loop show below >> > >> > # >> > # Cluster specific remounts. >> > # >> > # >> > mount --bind /etc/network-`/usr/sbin/clusternode_num` /etc/network >> > mount --bind /run-`/usr/sbin/clusternode_num` /var/run >> > >> > # >> > # Call all parts in order. >> > # >> > for i in /etc/rcS.d/S??* >> > >> > >> > -aneesh >> -- >> David B. Zafman | Hewlett-Packard Company >> Linux Kernel Developer | Open SSI Clustering Project >> mailto:dav...@hp... | http://www.hp.com >> "Thus spake the master programmer: When you have learned to snatch >> the error code from the trap frame, it will be time for you to leave." |