From: Roger Tsang <roger.tsang@gm...> - 2006-08-28 23:13:08
About your CFS performance problem: I don't run into the same problem on my 2-node cluster, and my cluster is not half as powerful as yours - just SATA 7200 rpm disks and uniprocessor (UP) machines. When copying whole directories of about 1 GB each into another directory on the same filesystem (hard mount), I get the timings below. I know it's a kind of crude test, but it clearly doesn't slow down to 2-3 MB/s on my cluster. File I/O on a hard mount is slower than on a soft mount because hard mounts guarantee the data has been written, to support filesystem failover.

Copy operation on just one node:

    real 0m37.734s
    user 0m0.093s
    sys  0m3.515s

Copy operation while another copy runs on the 2nd node at the same time:

    real 0m57.153s
    user 0m0.092s
    sys  0m3.448s

It doesn't slow down to 2-3 MB/s. I also have QoS (HTB+SFQ) on the ICS network interfaces, putting things like ICMP and UDP ICS-related traffic at the highest priority. Maybe that helps.

Roger

On 8/28/06, fxdg@... <fxdg@...> wrote:
> [Pete's full message, quoted in the original; it appears as the next post below.]
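The crude test Roger describes is just a timed recursive copy. A minimal sketch, assuming a CFS filesystem mounted at /mnt/cfs and ~1 GB source directories (the paths are illustrative, not from the thread):

    # Time a whole-directory copy within the same CFS filesystem.
    time cp -a /mnt/cfs/src /mnt/cfs/dst

    # For the concurrent case, start the same kind of copy on the
    # 2nd node at the same time, then compare the "real" figures.
    time cp -a /mnt/cfs/src2 /mnt/cfs/dst2

Comparing the two "real" times gives a rough measure of how much concurrent writers interfere with each other through CFS.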
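Roger doesn't show his QoS configuration. A minimal HTB+SFQ sketch for an interconnect interface might look like the following; the interface name (eth1), the rates, and the match rules are assumptions, not his actual setup:

    # Root HTB qdisc on the interconnect NIC; unclassified traffic
    # falls into class 1:20.
    tc qdisc add dev eth1 root handle 1: htb default 20
    tc class add dev eth1 parent 1:  classid 1:1  htb rate 1000mbit
    tc class add dev eth1 parent 1:1 classid 1:10 htb rate 200mbit ceil 1000mbit prio 0
    tc class add dev eth1 parent 1:1 classid 1:20 htb rate 800mbit ceil 1000mbit prio 1

    # SFQ under each class for per-flow fairness.
    tc qdisc add dev eth1 parent 1:10 handle 10: sfq perturb 10
    tc qdisc add dev eth1 parent 1:20 handle 20: sfq perturb 10

    # Put ICMP (IP protocol 1) and UDP (protocol 17) into the
    # high-priority class.
    tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match ip protocol 1  0xff flowid 1:10
    tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match ip protocol 17 0xff flowid 1:10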
From: <fxdg@ma...> - 2006-08-28 08:11:17
Hello guys,

(First of all, sorry for my bad English; I will try to do the best I can.)

Four weeks ago I installed openSSI 1.9.2 on Fedora Core 3, following all the instructions in the various README.* files. Initially I had a lot of difficulties because the documentation is not updated for FC3 and SSI 1.9.2 (especially regarding DRBD), but in the end I got a 4-node SSI cluster installed, and I have some problems I would like to submit to you folks.

The cluster has 4 nodes:

node1 (init): 4-way 3.16 GHz Xeon with 1 MB L2 cache and 8 GB RAM
node2 (init): 2x2-core 2.80 GHz, 2 MB L2 cache and 12 GB RAM
node3: 2-way 3.00 GHz, 1 MB L2 cache and 8 GB RAM
node4: 2-way 3.00 GHz, 1 MB L2 cache and 8 GB RAM

Each node has 2 NICs, one connected to the "public" network and the other connected to the "interconnect" network segment. HA-CVIP is configured on the 2 primary init nodes, so the whole cluster is seen from the public network with only 1 IP address, and the connection load balancing is working well. Each network segment is full duplex 1 Gb/s, and the interconnect network is of course a dedicated segment connected to a private switch (a 3Com 1 Gb/s switch).

A 4.0 TB SAN will be connected to the init nodes in order to have root and home failover (at the moment the cluster is configured with root failover but without the SAN attached, so if a failover occurs the whole cluster will go down).

I've noticed some strange behaviour and I'm wondering if some of you folks can help me:

1) The I/O of the entire cluster is quite slow; if more than 1 user does some massive I/O, read or write (for example, a cvs checkout of a 3 GB module), the performance of the entire cluster is affected. I've done lots of tests, and the results suggest that I/O through CFS is quite slow. For example, if I try an scp copy from the public network, my scp is load-balanced by CVIP to one of the 4 nodes and I get a throughput of 40-50 MB/s; if another user starts a concurrent scp, the network transfer goes down to 2-3 MB/s. To exclude a network problem, I logged into the cluster and tried a cp from one directory to another; the transfer rate is about 25-30 MB/s, but if I add another cp (or whatever I/O) during the copy, the throughput goes down to 2-3 MB/s and the entire cluster is completely stuck. The disks on the init nodes are Ultra320, 15000 rpm disks, so I expect better performance.

2) If I use "clusternode_shutdown -t0 -h -N2 now" (for example; the behaviour is the same on all the nodes), the node's kernel panics. No problem with "clusternode_shutdown -t0 -r -Nxxx".

3) The process load balancing doesn't seem to balance the load equally across all the nodes; on my cluster, node1 (the init node) is always much more loaded than the other nodes.

4) Randomly, one of the 2 normal (non-init) nodes panics while joining the cluster.

5) Java and cvs pserver processes are not migrating at all (in dmesg I see a message like "the process has exited" or something like that); CVS migrates but does not work anymore (a socket migration problem?).

6) Why is the kernel not compiled with big memory support? Are there technical reasons?

Thanks to anyone who can help me.

Regards,
Pete
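One way to put numbers on the slowdown described in point 1 is to time a single large write and then the same write with a second writer running. A minimal sketch; the paths and sizes are illustrative only:

    # Single 1 GB sequential write, with a sync so cached data is
    # actually flushed before the timer stops.
    time sh -c 'dd if=/dev/zero of=/home/test1.bin bs=1M count=1024 && sync'

    # Concurrent case: a second writer in the background, then the
    # same timed write again.
    dd if=/dev/zero of=/home/test2.bin bs=1M count=1024 &
    time sh -c 'dd if=/dev/zero of=/home/test1.bin bs=1M count=1024 && sync'
    wait

Running the background writer from a shell on a different node (or on the same node) separates whether the collapse comes from CFS coordination between nodes or from local disk contention.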