From: v.pennington at man.ac.uk (V. Pennington) - 2003-08-05 08:18:18

Hi,

Does anyone know of any documentation on uninstalling GPFS from a cluster? Or can anyone give any advice? GPFS was installed before there were xCAT scripts available, so it should be a standard installation.

Thanks!
Victoria

---
Dr Victoria Pennington
Manchester Computing, Kilburn Building, University of Manchester, Oxford Road, Manchester M13 9PL
tel. 0161 275 6830, email: v.p...@ma...
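For a standard (non-xCAT) installation, removal is done with GPFS's own administration commands. The outline below uses the standard GPFS tool names, but the exact sequence and the device name are assumptions; check the GPFS Administration Guide for the installed release before running anything.

```shell
# CAUTION: illustrative outline only -- mmdelfs destroys the file system.
mmumount all -a          # unmount the GPFS file system on every node
mmdelfs gpfs0            # delete the file system (assumed device name: gpfs0)
mmshutdown -a            # stop the GPFS daemons cluster-wide
# then remove the GPFS packages from each node, e.g.:
rpm -qa | grep -i gpfs   # list the installed GPFS RPMs before rpm -e
```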
From: garrick at usc.e. (garrick) - 2003-08-04 22:58:34

On Mon, Aug 04, 2003 at 05:54:35PM -0400, Wolfram Tempel alleged:
> Hi all,
> Egan Ford and others have kindly responded to my question a few weeks
> ago regarding dynamic ssh access configuration on the compute nodes
> based on PBS.
> I have established that ACCESS was indeed enabled in noderes.tab during
> installation of x-cat.
> Since I want users to prevent from accessing the nodes unless one of
> their jobs is running on a node, I added a line
>
> echo "-:ALL EXCEPT root:ALL">>/etc/security/access.conf
>
> to the compute nodes (another xcat subscriber's suggestion). This way, I

The access parameter in noderes.tab is passed to the kickstart script. It should have added that line to access.conf, created an access.conf.BOOT, and added a line to rc.local that overwrites access.conf with access.conf.BOOT at boot on each node.

> hoped to avoid having to install x-cat from scratch (still a scary
> thought to me).

Installing xCAT from scratch can take some time, but reinstalling nodes should be mundane. You should definitely feel very comfortable with reinstalling the OS on compute nodes.

> It initially prevented general users from accessing the compute nodes
> through ssh until a few days ago I noticed that access.conf on the nodes
> has been replaced by an empty file (as soon as a job runs on the compute
> node??).
> I checked /var/spool/pbs/mom_priv for prologue/epilogue scripts and
> found none. I next consider to write these scripts for the appropriate
> access.conf updates.

Look again at prologue/epilogue; you'll notice that they run setupnode/takedownnode on each node. Those are the scripts that add and remove the username to access.conf before and after each job.

> First I would like to make sure I understand what the ACCESS setting in
> noderes.tab does. Does it affect pbs compile time options? Or is it
> supposed to set up prologue/epilogue scripts?

It has nothing to do with PBS compile options; it is merely used in kickstart to add the pam hooks to use access.conf and a few other configurations when a node is installed. You should definitely look through the kickstart, prologue/epilogue, and setupnode/takedownnode scripts to see what they do. It's a lot of information to absorb at once, but the overall process isn't nearly as mysterious as it seems :)
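The add/remove behavior described above can be sketched as two small shell functions. This is only an illustration of the idea, not xCAT's actual setupnode/takedownnode code; the function names and the exact pam_access rule format are assumptions, and the file path is a parameter so the logic can be tried on a scratch file.

```shell
#!/bin/sh
# Before a job: admit the job owner via pam_access.
# After a job: restore the deny-all-but-root baseline.

grant_access() {            # grant_access USER FILE
    user=$1; file=$2
    # allow root and this user, deny everyone else
    echo "-:ALL EXCEPT root ${user}:ALL" > "$file"
}

revoke_access() {           # revoke_access FILE
    # back to the baseline: only root may log in
    echo "-:ALL EXCEPT root:ALL" > "$1"
}
```

A real prologue would invoke something like `grant_access "$2" /etc/security/access.conf` (OpenPBS passes the job owner as the second prologue argument), and the epilogue would do the reverse.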
From: tempel at sgx3.bmb.uga.e. (W. Tempel) - 2003-08-04 15:54:35

Hi all,

Egan Ford and others have kindly responded to my question a few weeks ago regarding dynamic ssh access configuration on the compute nodes based on PBS. I have established that ACCESS was indeed enabled in noderes.tab during installation of xCAT. Since I want to prevent users from accessing the nodes unless one of their jobs is running on a node, I added a line

echo "-:ALL EXCEPT root:ALL">>/etc/security/access.conf

to the compute nodes (another xcat subscriber's suggestion). This way, I hoped to avoid having to install xCAT from scratch (still a scary thought to me). It initially prevented general users from accessing the compute nodes through ssh, until a few days ago I noticed that access.conf on the nodes has been replaced by an empty file (as soon as a job runs on the compute node??). I checked /var/spool/pbs/mom_priv for prologue/epilogue scripts and found none. I am now considering writing these scripts to make the appropriate access.conf updates.

First, I would like to make sure I understand what the ACCESS setting in noderes.tab does. Does it affect PBS compile-time options? Or is it supposed to set up prologue/epilogue scripts?

Many thanks.

Wolfram Tempel
Univ of Georgia
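For reference, the install-time arrangement discussed in this thread (the ACCESS option adding a deny rule, snapshotting it, and restoring it at every boot) amounts to something like the fragment below. The paths follow the thread's description; the exact kickstart lines are an assumption, not quoted from xCAT.

```shell
# Sketch of what the ACCESS option roughly arranges at node install time:
echo "-:ALL EXCEPT root:ALL" >> /etc/security/access.conf
# snapshot the baseline...
cp /etc/security/access.conf /etc/security/access.conf.BOOT
# ...and restore it on every boot, wiping any per-job entries
echo "cp /etc/security/access.conf.BOOT /etc/security/access.conf" >> /etc/rc.d/rc.local
```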
From: ron_chen_123 at yahoo.c. (R. Chen) - 2003-08-03 22:57:52

May be old news: SGEp4 released, and DRMAA (supported by SGE, PBS, Condor, and LoadLeveler) partly supported:
http://gridengine.sunsource.net/project/gridengine/news.html

-Ron
From: urbanski at us.ibm.c. (J. Urbanski) - 2003-07-31 11:14:32

It was a customer requirement. This is a research system and they wanted two different architectures to develop their codes on.

Jay Urbanski
Linux Cluster Architect, WW Technical Sales
(817)741-3500 TL 450-8796, (214)-33-3483 Fax, (817)821-6397 Mobile
urb...@us...
"Customers don't like control points; they like open standards." - John Callies

Thomas Davis <ta...@lb...> wrote on 07/31/2003 11:59 AM, Subject: Re: [xcat-user] Re: Worlds most power linux cluster:
> [Mike Galicki's original post trimmed; it appears in full below.]
>
> And can anyone explain why the mix of IA64 with Opteron64?
>
> thomas
From: tadavis at lbl.g. (T. Davis) - 2003-07-31 10:59:02

Mike S Galicki wrote:
> Anyone know if the new Japanese Linux cluster will be using xCAT? Don't
> think CSM supports 64bit operating systems. BTW: #6 on top500, IBM's
> fastest Linux cluster to date was built and tested with xCAT. LLNL has
> its own tool YACI, which eventually replaced xCAT. It can be found here:
> http://www.llnl.gov/linux/yaci/yaci.html

And can anyone explain why the mix of IA64 with Opteron64?

thomas
From: urbanski at us.ibm.c. (J. Urbanski) - 2003-07-31 09:45:07

No, it is not. The RFP required a single management solution across both Opteron and IA64, and CSM does not support IA64. CSM was not bid.

Jay Urbanski
Linux Cluster Architect, WW Technical Sales
(817)741-3500 TL 450-8796, (214)-33-3483 Fax, (817)821-6397 Mobile
urb...@us...
"Customers don't like control points; they like open standards." - John Callies

Laura Merritt/Poughkeepsie/IBM@IBMUS wrote on 07/31/2003 06:47 AM, Subject: Re: [xcat-user] Re: Worlds most power linux cluster:
> This is a CSM bid, CSM will be supporting 64bit Opteron with the 1350
> later this year.
>
> [Mike Galicki's original post trimmed; it appears in full below.]
From: ljm at us.ibm.c. (L. Merritt) - 2003-07-31 05:47:16

This is a CSM bid, CSM will be supporting 64bit Opteron with the 1350 later this year.

Laura J. Merritt
Mgr. Dept 85CA - Clusters System Services and Security
tieline 293-7972, outside 433-7972

Mike S Galicki wrote on 07/31/2003 12:49 AM, Subject: [xcat-user] Re: Worlds most power linux cluster:
> [original post trimmed; it appears in full below.]
From: urbanski at us.ibm.c. (J. Urbanski) - 2003-07-30 23:11:50

Yes, it will run xCAT.

Jay Urbanski
Linux Cluster Architect, WW Technical Sales
(817)741-3500 TL 450-8796, (214)-33-3483 Fax, (817)821-6397 Mobile
urb...@us...
"Customers don't like control points; they like open standards." - John Callies

Mike S Galicki wrote on 07/30/2003 11:49 PM, Subject: [xcat-user] Re: Worlds most power linux cluster:
> [original post trimmed; it appears in full below.]
From: mgalicki at us.ibm.c. (M. S Galicki) - 2003-07-30 22:49:30

Anyone know if the new Japanese Linux cluster will be using xCAT? Don't think CSM supports 64bit operating systems. BTW: #6 on top500, IBM's fastest Linux cluster to date was built and tested with xCAT. LLNL has its own tool YACI, which eventually replaced xCAT. It can be found here:
http://www.llnl.gov/linux/yaci/yaci.html

Mike Galicki
Technical Consultant
IGS Americas Linux Services Team
San Francisco, CA
Cellular Phone 415-613-4289 (preferred)
415-545-4122 T/L 473-4122
Internet ID: mga...@us...
From: egan at sense.n. (E. Ford) - 2003-07-30 00:17:37

Older versions of SSH supported the none cipher. OpenSSH does not. It is still in the code (or was) but commented out; their reason for doing that was to protect us from ourselves. SSH is used in xCAT because it supports non-privileged ports, increasing scalability beyond 60000 processes, not for security. I researched the none cipher issue a while ago because I wanted it to be the default. I never got around to patching the none cipher back in. You're stuck with blowfish unless you can get an older or non-OpenSSH SSH.

> -----Original Message-----
> From: Mark Payba [mailto:mar...@mh...]
> Sent: Tuesday, July 29, 2003 7:47 PM
> To: xca...@li...; Michael Berning
> Subject: Re: [xcat-user] tcp/ip over myrinet
>
> All the benchmarks look like they use client/servers to Tx/Rx raw
> packets in measuring bandwidth. Since their testing is done with ssh
> and more in line with how they intend to implement their application
> (ssh or rsh, but we are forced to use ssh), I suspect that the ssh
> overhead is the cause of the severe drop in performance. Any ideas on
> how to optimize ssh performance besides setting cipher to blowfish?
>
> [earlier quoted benchmark output trimmed; the full numbers appear in
> Mark Payba's messages below.]
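With stock OpenSSH of that era, the practical knobs were the cipher choice and compression. A quick way to compare them with the thread's own test harness (host names are from the thread; `-c` and `-C` are standard OpenSSH flags, though which cipher names are accepted depends on protocol version and release):

```shell
# Compare ciphers over the GM interface; blowfish was the cheapest
# commonly available choice, and -C adds compression (which only helps
# if the payload is compressible):
pipesend 500000 1024 | ssh -c blowfish myr032 piperecv
pipesend 500000 1024 | ssh -c blowfish -C myr032 piperecv

# Or make it the per-host default in ~/.ssh/config:
#   Host myr*
#     Ciphers blowfish-cbc
#     Compression yes
```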
From: egan at sense.n. (E. Ford) - 2003-07-30 00:04:58

A few tips:

1. Run gm_debug to verify that your PCI bus can even do 250MB/s (2 Gb/s). Even if gm_debug reports 250MB/s, your PCI bus may not stream at that rate. E.g. my still-beta P3 866MHz x330 reports 228MB/s read and 251MB/s write, but I know from experience that it can only stream 150MB/s (1.2Gb/s) using GM; with IP it will be less. 996Mb/s for a B card in an old x330 is probably normal. Using B cards, and 2 dual 866MHz machines with GM 1.6.4 and RH 7.3 with the 2.4.20-18.7smp kernel and no IP tuning, I get 832 Mb/s, and my CPU utilization was at 45% for both processors on each node. Using D cards in my IA64 nodes with PCI-X slots I have no problem getting 1975-2000 Mb/s.

2. GM 2.0.x on B and C cards does not perform well. Use GM 1.6.4 unless you have a mix of B/C and D cards. GM 2.0.x on D cards rocks.

3. Read the GM 1.6.4 README.linux file for tips on increasing IP/GM performance. Older versions of xCAT did this as part of install. Newer versions do not.

4. IP/GM uses more CPU. Use netbench with the CPU flags to check utilization. Use the Myrinet IP performance page for the netbench command to run.

-----Original Message-----
From: Mark Payba [mailto:mar...@mh...]
Sent: Tuesday, July 29, 2003 8:36 PM
To: xca...@li...
Subject: Re: [xcat-user] tcp/ip over myrinet

[Mark Payba's iperf results trimmed; the message appears in full below.]
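The README.linux tips referred to in point 3 are along the lines of generic 2.4-kernel socket-buffer tuning; the exact values below are illustrative, not quoted from the GM README:

```shell
# Raise socket buffer limits so TCP windows can keep a 2 Gb/s link full
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"
```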
From: scorp at netli.c. (S. Kritov) - 2003-07-29 22:55:55

Disable encryption, turn on compression. It does help.

Slava

-----Original Message-----
From: Mark Payba [mailto:mar...@mh...]
Sent: Tuesday, July 29, 2003 6:47 PM
To: xca...@li...; Michael Berning
Subject: Re: [xcat-user] tcp/ip over myrinet

[Mark Payba's message trimmed; it appears in full below.]
From: mark.payba at mhpcc.hpc.m. (M. Payba) - 2003-07-29 20:35:53

There must be something wrong with the way we have it set up; it looks like you're getting double our throughput.

server side:

[root@hn033 iperf-1.7.0]# ./iperf -s -B myr033
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 192.168.0.133
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  6] local 192.168.0.133 port 5001 connected with 192.168.0.101 port 45030
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-60.0 sec  6.96 GBytes   996 Mbits/sec

client side:

[root@hn001 iperf-1.7.0]# ./iperf -c myr033 -t 60
------------------------------------------------------------
Client connecting to myr033, TCP port 5001
TCP window size: 27.2 KByte (default)
------------------------------------------------------------
[  5] local 192.168.0.101 port 45030 connected with 192.168.0.133 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-60.0 sec  6.96 GBytes   996 Mbits/sec

If you don't mind my asking for comparison purposes, what type/speed of processors are you running, gm and kernel version, and output of ifconfig on myri0?

Mahalo,
Mark

Matthew Bohnsack wrote:
> * Mark Payba <mar...@mh...> [Jul 26, 2003 at 09:07:03AM CDT]:
> > We have a customer doing some throughput testing on our cluster with
> > myrinets tcp emulation. He's getting numbers just slightly better than
> > on the 100Mb ethernet. I tried changing MTU sizes and such but can't
> > seem to get better than ~100Mb/sec. hn### is our ethernet hostname and
> > myr### is our GM TCP/IP hostname.
>
> Using iperf (http://dast.nlanr.net/Projects/Iperf/), I get...
>
> 100 Mb/s Ethernet:
> [root@node066 root]# iperf -c node065 -t 60
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-60.0 sec   673 MBytes  89.8 Mbits/sec
>
> 2 Gb/s Myrinet:
> [root@node066 root]# iperf -c node065-myri0 -t 60
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-60.0 sec  11.6 GBytes  1.6 Gbits/sec
>
> -Matt
From: mark.payba at mhpcc.hpc.m. (M. Payba) - 2003-07-29 19:46:58
|
All the benchmarks look like they use client/servers to Tx/Rx raw packets in measuring bandwidth. Since their testing is done with ssh and more in line with how they intend to implement their application (ssh or rsh but we are forced to use ssh), I suspect that the ssh overhead is the cause of the severe drop in performance. Any ideas on how to optomize ssh performance besides setting cipher to blowfish? Egan Ford wrote: >It may be the benchmark. I get excellent performance. Use the netbench >benchmark on Myrinet performance page to verify that you do not have a >software/hardware problem first. I have used netbench, iperf, and netpipe >with GigE and Myrinet. All are good benchmarks that are well accepted. > > > >>-----Original Message----- >>From: Mark Payba [mailto:mar...@mh...] >>Sent: Friday, July 25, 2003 6:32 PM >>To: xcat-user >>Subject: [xcat-user] tcp/ip over myrinet >> >> >>We have a customer doing some throughput testing on our cluster with >>myrinets tcp emulation. He's getting numbers just slighty >>better than >>on the 100Mb ethernet. I tried changing MTU sizes and such but can't >>seem to get better than ~100Mb/sec. hn### is our ethernet >>hostname and >>myr### is our GM TCP/IP hostname. >> >>Board number 0: >> lanai_clockval = 0x082082a0 >> lanai_cpu_version = 0x0900 (LANai9.0) >> lanai_board_id = 00:60:dd:7f:27:87 >> lanai_sram_size = 0x00200000 (2048K bytes) >> max_lanai_speed = 134 MHz >> product_code = 111 >> serial_number = 110628 >> (should be labeled: "M3F-PCI64B-2-110628") >>LANai time is 0x11fea4dd229 ticks, or about 9213 minutes since reset. 
>>This is node 1 (hn001) node_type=0 >>Board has room for 8 ports, 3000 nodes/routes, 32768 cache entries >> Port token cnt: send=29, recv=248 >>Port: Status PID >> 0: BUSY 20579 (this process [gm_board_info]) >> 3: BUSY -1 >>Route table for this node follows: >>The mapper 48-bit ID was: 00:60:dd:7f:27:87 >>gmID MAC Address gmName Route >>---- ----------------- -------------------------------- >>--------------------- >> 1 00:60:dd:7f:27:87 hn001 80 >>(this node) >>(mapper) >> 2 00:60:dd:7f:27:07 hn002 81 >> >> >>[root@hn001 root]# ifconfig myri0 >>myri0 Link encap:Ethernet HWaddr 00:60:DD:7F:27:87 >> inet addr:192.168.0.101 Bcast:192.168.1.255 >>Mask:255.255.254.0 >> UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1 >> RX packets:273407 errors:0 dropped:0 overruns:0 frame:0 >> TX packets:776999 errors:0 dropped:0 overruns:0 carrier:0 >> collisions:0 txqueuelen:100 >> RX bytes:24415518 (23.2 Mb) TX bytes:3159347428 (3012.9 Mb) >> Interrupt:18 >> >>their test over 100Mb ethernet >>hn001[/u/giebink]%pipesend 500000 1024 | ssh hn032 piperecv >># pipesend : 512000000 bytes transferred in 44.43205297 >>seconds: rate >>= 92.18570213 Mbps >># piperecv : 512000000 bytes transferred in 44.12887299 >>seconds: rate >>= 92.81904845 Mbps >> >>their test over TCP/IP GM >>hn001[/u/giebink]%pipesend 500000 1024 | ssh myr032 piperecv >>4630: Warning: Permanently added 'myr032' (RSA) to the list >>of known hosts. 
>># pipesend : 512000000 bytes transferred in 39.42322505 >>seconds: rate >>= 103.89814621 Mbps >># piperecv : 512000000 bytes transferred in 38.99614394 >>seconds: rate >>= 105.03602630 Mbps >> >>their test on their cluster with GigE >> >>poi01% pipesend 500000 1024 | ssh poi02 piperecv >> >> >>>>kaiser@poi02's password: >>>># pipesend : 512000000 bytes transferred in 26.62851000 >>>> >>>> >>seconds: rate = >> >> >>>>153.82009734 Mbps >>>># piperecv : 512000000 bytes transferred in 23.04195297 >>>> >>>> >>seconds: rate = >> >> >>>>177.76270986 Mbps >>>> >>>> >>with rsh instead of ssh >> >> >> >>>>poi01% pipesend 500000 1024 | rsh poi03 piperecv >>>># pipesend : 512000000 bytes transferred in 6.57280993 >>>> >>>> >>seconds: rate = >> >> >>>>623.17335215 Mbps >>>># piperecv : 512000000 bytes transferred in 6.49636197 >>>> >>>> >>seconds: rate = >> >> >>>>630.50673875 Mbps >>>> >>>> >>Using rsh instead of ssh is faster in the last 2 cases, but >>ssh over GigE >>is still faster than ssh over GM tcp/ip >> >> >> > > > > |
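The rates printed by pipesend/piperecv are plain decimal megabits per second, i.e. bytes × 8 / seconds / 1,000,000. A quick shell check against the first 100Mb-ethernet figure quoted above reproduces the reported number:

```shell
# Recompute a pipesend-style rate from bytes transferred and elapsed time.
# Values are taken from the 100Mb ethernet run quoted above.
bytes=512000000
secs=44.43205297
rate=$(awk -v b="$bytes" -v s="$secs" 'BEGIN { printf "%.8f", b * 8 / s / 1000000 }')
echo "rate = $rate Mbps"
```

Running the same arithmetic over the other runs confirms the pattern in the thread: ssh over GM TCP/IP barely beats 100Mb ethernet, while rsh over GigE moves the same 512MB roughly six times faster, pointing at the ssh transport rather than the interconnect as the bottleneck.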
From: egan at sense.n. (E. Ford) - 2003-07-28 15:35:16
|
Nice if you are running NetWare/Windows. I guess with Linux you are on your own. > -----Original Message----- > From: Slava Kritov [mailto:sc...@ne...] > Sent: Monday, July 28, 2003 1:59 PM > To: xca...@li... > Subject: RE: [xcat-user] 802.1Q > > > I would like to post this link from Intel's site on adapter's > compatibility: > http://www.intel.com/network/connectivity/resources/technologies/advanced_fe atures/server_adapters.htm > > > -----Original Message----- > From: Slava Kritov > Sent: Thursday, July 24, 2003 5:52 PM > To: xca...@li... > Subject: RE: [xcat-user] 802.1Q > > > Egan, > > Catalysts do not support bonding with different sub-channels going to > different switches. Even between themselves. So you will need to use > etherchannel/bonding to a single switch. I agree with Peter's note on > downstream limited to the speed of one interface. That is > done to avoid > out of order packet delivery. > > -----Original Message----- > From: Egan Ford [mailto:eg...@se...] > Sent: Thursday, July 24, 2003 2:21 PM > To: xca...@li... > Subject: RE: [xcat-user] 802.1Q > > > Actually I have done bonding and I am trying to avoid that. I do not > like using two switches. I need to trunk with a single switch. > > > -----Original Message----- > > From: Peter McLachlan [mailto:pmclachl@CA.IBM.COM] > > Sent: Thursday, July 24, 2003 2:49 PM > > To: xca...@li... > > Subject: RE: [xcat-user] 802.1Q > > > > > > Hi Egan, > > > > What you're looking for is the Linux bonding project. > > http://sourceforge.net/projects/bonding/ > > > > The channel bonding code is included with Red Hat kernels at least > > since 7.3 but only newer versions of the code contain the 802.3ad > > dynamic trunking support. If you look in the kernel > > documentation/networking there is some well written > documentation for > > it in bonding.txt and the > > userspace tool is there too ifenslave.c. > > > > You might need to patch the kernel if you need the 802.3ad > support or > > jumbo frames. 
I only did this on RH7.3 I don't know whether newer > > kernels under RH8 or 9 contain bonding code that supports these > > features. I was > > able to patch the bonding source into the 7.3 2.4.20 Red Hat > > kernel with > > only a little bit of hacking. YMMV with newer kernels. > > > > One of the main hackers on the bonding code is an IBM'er, > you'll see > > his address in the bonding.txt doc. He was quite helpful in > > getting me access > > to beta code I needed for features I needed for my engagement. > > > > Regards, > > > > > > Peter McLachlan > > I/T Specialist - Redhat Certified Engineer (RHCE) > > Linux Solutions > > IBM Global Services > > Office: (905) 316-8646 > > e-mail: pmc...@ca... > > > > > > > > > > "Egan Ford" <eg...@se...> > > 07/24/2003 04:36 PM > > Please respond to xcat-user > > > > To: <xca...@li...> > > cc: > > Subject: RE: [xcat-user] 802.1Q > > > > > > > > Yes, sorry for the confusion. 802.3ad is what I need, not 802.1Q. > > > > Can you send me more details on the code needed for this? > > > > Thanks. > > > > > -----Original Message----- > > > From: Peter McLachlan [mailto:pmc...@ca...] > > > Sent: Thursday, July 24, 2003 1:10 PM > > > To: xca...@li... > > > Subject: Re: [xcat-user] 802.1Q > > > > > > > > > Hi Egan, from follow on messages I assume you are talking about > > > etherchannel/"link aggregation"/802.3ad. I have > successfully setup > > > link aggregation (with jumbo MTU as well) between x345's running > > > RH Linux 7.3 > > > with a customized kernel and Foundry FastIron switches. > > > > > > Aggregate bandwidth was close to the theoretical limit. > However - on > > > > the foundry switches the link selection algorithm is a hash of > > > src IP address > > > so you will never get better than a gigabit link downwards > > > between any two > > > points, although the roundrobin scheme on the x345 side would > > > allow it to > > > transmit up at nearly full rate. (Confirmed this with a > > udp stream.) 
> > > > > > The NIC's used were the onboard e1000's. > > > > > > Regards, > > > > > > Peter McLachlan > > > I/T Specialist - Redhat Certified Engineer (RHCE) > > > Linux Solutions > > > IBM Global Services > > > Office: (905) 316-8646 > > > e-mail: pmc...@ca... > > > > > > > > > > > > > > > "Egan Ford" <eg...@se...> > > > 07/23/2003 05:09 PM > > > Please respond to xcat-user > > > > > > To: <xca...@li...> > > > cc: > > > Subject: [xcat-user] 802.1Q > > > > > > > > > > > > Anyone use this to trunk multiple network connections > > > together with Linux? > > > > > > Thanks. > > > > > > > > > > > |
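For readers landing on this thread later: the setup Peter describes boils down to loading the bonding driver in 802.3ad mode and enslaving the NICs. A sketch follows; interface names are examples, and option spellings varied across 2.4-era bonding versions, so the commands are shown as comments rather than run here (check the bonding.txt shipped with your kernel).

```shell
# Sketch of an 802.3ad (LACP) trunk to a single switch, following the
# bonding.txt approach referenced above. Interface names are examples;
# commands are shown as comments, not run here.
#
#   modprobe bonding mode=4 miimon=100     # mode=4 selects 802.3ad
#   ifconfig bond0 192.168.0.1 netmask 255.255.255.0 up
#   ifenslave bond0 eth0 eth1              # enslave both GigE ports
#
# The matching switch ports must be configured as a single LACP
# aggregation group. Note Peter's caveat: per-flow hashing on the switch
# side caps any one src/dst pair at a single link's bandwidth downstream.
```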
From: dlapine at ncsa.uiuc.e. (D. Lapine) - 2003-07-28 15:08:23
|
Interesting. I've had the same experience with the jumbo frames, and I wrote a separate init script to bring it up. Hmmm, a reboot of all 258 nodes leaves about 53 hanging at my script. Not the most robust thing in the world. On Mon, 28 Jul 2003, Michael Packard wrote: > I asked Kevin @ Beaverton why the MTU= option didn't work for syskonnect in the /etc/sysconfig script; here was his response: > > Mike, > > This is indeed a quirk of the SysKonnect driver. They have chosen > to not support configuring the MTU unless the interface is up and > running (there is a specific check in their MTU change code for the > boardlevel being at SK_INIT_RUN). They have documented the ifconfig > approach as the appropriate method in their sk98lin.txt file. > > Calling ifconfig in a custom startup script is probably the easiest > approach. You could also modify the /sbin/ifup script to break the "ip > link" line into two different calls (one to bring the interface up, and > one to change the mtu), but this would be a little more complex. I will > go ahead and file an enhancement request with SuSE to do just that for > future releases (to make the ifup script a little more robust for > different driver behavior/requirements). > > Let me know if you have any further questions about this. > > Thanks, > -Kevin > -- --- Daniel LaPine, System Engineer National Center for Supercomputing Applications (NCSA) email: dl...@nc... phone: 217-244-9294 |
From: scorp at netli.c. (S. Kritov) - 2003-07-28 13:59:23
|
I would like to post this link from Intel's site on adapter's compatibility: http://www.intel.com/network/connectivity/resources/technologies/advance d_features/server_adapters.htm -----Original Message----- From: Slava Kritov Sent: Thursday, July 24, 2003 5:52 PM To: xca...@li... Subject: RE: [xcat-user] 802.1Q Egan, Catalysts do not support bonding with different sub-channels going to different switches. Even between themselves. So you will need to use etherchannel/bonding to a single switch. I agree with Peter's note on downstream limited to the speed of one interface. That is done to avoid out of order packet delivery. -----Original Message----- From: Egan Ford [mailto:eg...@se...] Sent: Thursday, July 24, 2003 2:21 PM To: xca...@li... Subject: RE: [xcat-user] 802.1Q Actually I have done bonding and I am trying to avoid that. I do not like using two switches. I need to trunk with a single switch. > -----Original Message----- > From: Peter McLachlan [mailto:pmclachl@CA.IBM.COM] > Sent: Thursday, July 24, 2003 2:49 PM > To: xca...@li... > Subject: RE: [xcat-user] 802.1Q > > > Hi Egan, > > What you're looking for is the Linux bonding project. > http://sourceforge.net/projects/bonding/ > > The channel bonding code is included with Red Hat kernels at least > since 7.3 but only newer versions of the code contain the 802.3ad > dynamic trunking support. If you look in the kernel > documentation/networking there is some well written documentation for > it in bonding.txt and the > userspace tool is there too ifenslave.c. > > You might need to patch the kernel if you need the 802.3ad support or > jumbo frames. I only did this on RH7.3 I don't know whether newer > kernels under RH8 or 9 contain bonding code that supports these > features. I was > able to patch the bonding source into the 7.3 2.4.20 Red Hat > kernel with > only a little bit of hacking. YMMV with newer kernels. 
> > One of the main hackers on the bonding code is an IBM'er, you'll see > his address in the bonding.txt doc. He was quite helpful in > getting me access > to beta code I needed for features I needed for my engagement. > > Regards, > > > Peter McLachlan > I/T Specialist - Redhat Certified Engineer (RHCE) > Linux Solutions > IBM Global Services > Office: (905) 316-8646 > e-mail: pmc...@ca... > > > > > "Egan Ford" <eg...@se...> > 07/24/2003 04:36 PM > Please respond to xcat-user > > To: <xca...@li...> > cc: > Subject: RE: [xcat-user] 802.1Q > > > > Yes, sorry for the confusion. 802.3ad is what I need, not 802.1Q. > > Can you send me more details on the code needed for this? > > Thanks. > > > -----Original Message----- > > From: Peter McLachlan [mailto:pmc...@ca...] > > Sent: Thursday, July 24, 2003 1:10 PM > > To: xca...@li... > > Subject: Re: [xcat-user] 802.1Q > > > > > > Hi Egan, from follow on messages I assume you are talking about > > etherchannel/"link aggregation"/802.3ad. I have successfully setup > > link aggregation (with jumbo MTU as well) between x345's running > > RH Linux 7.3 > > with a customized kernel and Foundry FastIron switches. > > > > Aggregate bandwidth was close to the theoretical limit. However - on > > the foundry switches the link selection algorithm is a hash of > > src IP address > > so you will never get better than a gigabit link downwards > > between any two > > points, although the roundrobin scheme on the x345 side would > > allow it to > > transmit up at nearly full rate. (Confirmed this with a > udp stream.) > > > > The NIC's used were the onboard e1000's. > > > > Regards, > > > > Peter McLachlan > > I/T Specialist - Redhat Certified Engineer (RHCE) > > Linux Solutions > > IBM Global Services > > Office: (905) 316-8646 > > e-mail: pmc...@ca... 
> > > > > > > > > > "Egan Ford" <eg...@se...> > > 07/23/2003 05:09 PM > > Please respond to xcat-user > > > > To: <xca...@li...> > > cc: > > Subject: [xcat-user] 802.1Q > > > > > > > > Anyone use this to trunk multiple network connections > > together with Linux? > > > > Thanks. > > > > > > |
From: mpackard at sdsc.e. (M. Packard) - 2003-07-28 11:50:49
|
I asked Kevin @ Beaverton why the MTU= option didn't work for syskonnect in the /etc/sysconfig script; here was his response: Mike, This is indeed a quirk of the SysKonnect driver. They have chosen to not support configuring the MTU unless the interface is up and running (there is a specific check in their MTU change code for the boardlevel being at SK_INIT_RUN). They have documented the ifconfig approach as the appropriate method in their sk98lin.txt file. Calling ifconfig in a custom startup script is probably the easiest approach. You could also modify the /sbin/ifup script to break the "ip link" line into two different calls (one to bring the interface up, and one to change the mtu), but this would be a little more complex. I will go ahead and file an enhancement request with SuSE to do just that for future releases (to make the ifup script a little more robust for different driver behavior/requirements). Let me know if you have any further questions about this. Thanks, -Kevin |
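A minimal init-script sketch of the workaround Kevin recommends: run ifconfig after the network service brings the interface up but before NFS filesystems are mounted. The interface name and MTU below are examples, not taken from this cluster.

```shell
#!/bin/sh
# Init-script sketch of the workaround described above: the sk98lin driver
# only accepts an MTU change once the interface is up, so set it here,
# after the network service starts but before NFS filesystems are mounted.
# The interface name and MTU values are examples.
IFACE=eth1

case "$1" in
  start)
    ifconfig "$IFACE" mtu 9000
    ;;
  stop)
    ifconfig "$IFACE" mtu 1500
    ;;
esac
```

Hooked in via chkconfig/insserv, the script needs to be ordered between the network and nfs services; as Dan notes above, a hang at this point strands nodes, so it should tolerate the interface being absent.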
From: egan at sense.n. (E. Ford) - 2003-07-27 08:20:01
|
> Any reason not to make a bigger ramdisk? The one I have uses > 48 megs compressed > to 15MB > > the tiger4's need the mpt fusion drivers to make them work. There was a bug in elilo that prevented large ramdisks from loading. The bug has been fixed and xCAT includes the latest version. On my list of to-dos is a completely new netboot image for ia64, similar to the one used for x86/x86_64. I have to do this anyway for MAC address collection on the new SysKonnects. |
From: dlapine at ncsa.uiuc.e. (D. Lapine) - 2003-07-26 18:13:40
|
On Sat, 26 Jul 2003, Egan Ford wrote: > Use pcons when eth0/eth1 is down, e.g. boot using your new shell setup, then > pcons will be able to run parallel commands to recover. The version of > pcons you have just needed to see a " # ". Make that PS1 on your shell > image. Any reason not to make a bigger ramdisk? The one I have uses 48 megs compressed to 15MB the tiger4's need the mpt fusion drivers to make them work. > > The shell image has sed on it, use that to add a "1" to the append line in > elilo.conf so that you can boot to a prompt to disable nfs. I took the scsi > .o files of the shell image to save space, but left vfat for the /boot on. > I do not have reiser on it. Use pcons to mount /boot and edit it. > > I have a patch to enable 32K nfsd if you want it for SuSE SLES8 ia64. > Default is 8K. The client already supports 32K. have to see. polyserve is slowing down the nfs performance, I think. > > I suppose I should write a pconsrcp using rz/sz or uuencode/cat. > > > -----Original Message----- > > From: Dan Lapine [mailto:dl...@nc...] > > Sent: Saturday, July 26, 2003 4:26 PM > > To: xca...@li... > > Subject: RE: [xcat-user] support for install over gige/jumbo frames? > > > > > > On Sat, 26 Jul 2003, Egan Ford wrote: > > > > > psh noderange init 1 will put all the machines in single user. > > assuming that the nodes are in a state where psh works :) > > > > If, for instance, the nodes are in a state where the reboot > > leaves them > > hanging on the attempt to mount nfs from servers which are using > > Autonegotiation, jumbo frames, version 6.x syskonnect > > drivers, and the nodes > > themselves are using Autoneg off, standard frames, v4 > > syskonnect drivers. > > This would happen in the days following a change to the fstab > > mount options > > to hard mounts. The nodes would then automatically go into > > "doorstop" mode. > > > > Lessons learned: > > 1) "psh all chkconfig nfs off". Really. 
This saves your whole > > weekend if messing > > with a Force 10 switch. > > 2) Have access to said Force 10 switch so as to be able to > > revert if necessary > > 3) Have Egan's new, super, "nodeset <nodes> init1" option :) > > 4) Have Egan's/Dan's updated "nodeset <nodes> shell" such > > that it not only goes into > > rescue mode, but that rescue mode let's you access the > > files on the disk. > > Maybe even with vi. :) > > 5) Don't make the change to jumbo frames on SuSE SLES 8 on > > friday afternoon if your > > time means anything > > > > 6) to enable really, really fast nfs service on SuSE and a Force 10 > > a) go to jumbo frames > > b) user 6.14+ version of driver > > c) set "Autonegotiate on" on both the force 10 and the > > syskonnects > > d) mount the filesystems with rsize=16384,wsize=16384 > > e) rebuild the nfsd.o module using 16384 as MAX > > "To change the NFS server's blocksize, you will need to modify > > /usr/src/linux/include/linux/nfsd/const.h to set > > NFSSVC_MAXBLKSIZE > > to 16384, and then rebuild the nfsd module." > > f) stay on local disk > > > > > > > > You can then use the pcons command to perform parallel > > operations to all the > > > nodes. pcons works like psh but uses the serial interface. > > The serial > > > interface should be available from init 1. > > > > > > All 1.2.0 versions of xCAT with pcons only works if you have no root > > > password. I have a newer version that works with root > > passwords to be > > > released with 1.2.0-pre1a. If you need them I can send you > > the updated > > > files. > > I'd need the new version, but not now, am almost finished. > > > > > > > -----Original Message----- > > > > From: Dan Lapine [mailto:dl...@nc...] > > > > Sent: Saturday, July 26, 2003 3:16 PM > > > > To: xca...@li... > > > > Cc: eg...@se... > > > > Subject: Re: [xcat-user] support for install over > > gige/jumbo frames? > > > > > > > > > > > > Well, it'll get tested on the dtf... 
:) > > > > > > > > We'll need jumbos for the fiber gige rather than the > > > > onboard. a call to ifconfig (ifconfig eth1 mtu 9000) > > > > works, but only if the card is already up. > > > > > > > > Actually MTU=9000 fails for the syskonnect cards if you > > > > enable it in the /etc/sysconfig scripts. It seems that > > > > the driver only wants to change mtu's if the driver is loaded. > > > > > > > > I've had to write a /etc/init.d script to enable the jumbos > > > > after the card is up but before the nfs trys to mount. > > > > > > > > Say, while we're at it, any way in xcat to the node to reboot > > > > in single user mode through xCAT? Spending my Saturday afternoon > > > > at work doing this manually for 250 + nodes > > > > > > > > On Sat, 26 Jul 2003, Anas Nashif wrote: > > > > > > > > > > > > > > > > > > > Egan Ford wrote: > > > > > > Many I am sure, but you will have to ask the author > > of autoyast. > > > > > > > > > > > If you have SP2, you can boot using the kernel and initrd > > > > from there.. > > > > > Or you can keep old kernel and just update the sk98 driver. > > > > > > > > > > To enable mtu=9000 you need somehow to take eth0 down and > > > > bring it up > > > > > again with mtu=9000 using a script in a very early stage.. > > > > > This can be done using a pre script in the Update Media > > > > > (http://www.suse.de/~nashif/autoinstall/y2update/html/). > > > > Basically write > > > > > an update.pre to do that with dummy update drivers and in the > > > > > update.post you can still add MTU=9000 to > > > > > /etc/sysconfig/network/ifcfg-eth0!! > > > > > > > > > > This needs some testing though :-) > > > > > > > > > > > > > > > Anas > > > > > > > > > > > > > > > > >>-----Original Message----- > > > > > >>From: Dan Lapine [mailto:dl...@nc...] > > > > > >>Sent: Friday, July 25, 2003 10:19 PM > > > > > >>To: xca...@li... > > > > > >>Subject: RE: [xcat-user] support for install over > > > > gige/jumbo frames? 
> > > > > >> > > > > > >> > > > > > >>any reason not to replace autoyast kernel and then update the > > > > > >>install to include a > > > > > >>"ifconfig eth1 mtu 9000" > > > > > >> > > > > > >>somewhere in the install sequence? > > > > > >> > > > > > >>On Thu, 24 Jul 2003, Egan Ford wrote: > > > > > >> > > > > > >> > > > > > >>>The author of autoyast said to my request for 9000 MTU, > > > > > >> > > > > > >>"unfortunately no". > > > > > >> > > > > > >>>Beaverton has already made a few attempts to get it working > > > > > >> > > > > > >>with no luck. > > > > > >> > > > > > >>>My suggestions are: > > > > > >>> > > > > > >>>A. Use the management network that is at 1500. Since the > > > > > >> > > > > > >>idea is to > > > > > >> > > > > > >>>install infrequently it shouldn't be a problem. For large > > > > > >> > > > > > >>rollouts set the > > > > > >> > > > > > >>>management nodes eth1 to 1500 then change back after the > > > > > >> > > > > > >>install. Your > > > > > >> > > > > > >>>stage nodes will need to stay at 1500 until all nodes have > > > > > >> > > > > > >>been installed. > > > > > >> > > > > > >>>B. Have a dedicated NFS server for installs only at 1500. > > > > > >> > > > > > >>Your stage nodes > > > > > >> > > > > > >>>will need to stay at 1500 until all nodes have been > > > > > >> > > > > > >>installed. Then you can > > > > > >> > > > > > >>>change all eth1 to 9000. > > > > > >>> > > > > > >>>C. Use the forthcoming system imager multicast installer > > > > > >> > > > > > >>on the management > > > > > >> > > > > > >>>network. > > > > > >>> > > > > > >>>I'd do A then switch to C. Hopefully C will be ready > > > > for phase 2. > > > > > >>> > > > > > >>> > > > > > >>>>-----Original Message----- > > > > > >>>>From: Dan Lapine [mailto:dl...@nc...] > > > > > >>>>Sent: Thursday, July 24, 2003 3:31 PM > > > > > >>>>To: xca...@li... > > > > > >>>>Subject: [xcat-user] support for install over > > gige/jumbo frames? 
> > > > > >>>> > > > > > >>>> > > > > > >>>>Looking to move a teragrid system to jumbo frames on > > > > > >>>>the syskonnect driver. > > > > > >>>> > > > > > >>>>Any way to enable jumbo frames during the SuSE > > SLES8 install? > > > > > >>>>I know that we'll need to update the install kernel. > > > > > >>>> > > > > > >>>>-- > > > > > >>>>--- > > > > > >>>>Daniel LaPine, System Engineer > > > > > >>>>National Center for Supercomputing Applications (NCSA) > > > > > >>>>email: dl...@nc... > > > > > >>>>phone: 217-244-9294 > > > > > >>>> > > > > > >>> > > > > > >>-- > > > > > >>--- > > > > > >>Daniel LaPine, System Engineer > > > > > >>National Center for Supercomputing Applications (NCSA) > > > > > >>email: dl...@nc... > > > > > >>phone: 217-244-9294 > > > > > >> > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > --- > > > > Daniel LaPine, System Engineer > > > > National Center for Supercomputing Applications (NCSA) > > > > email: dl...@nc... > > > > phone: 217-244-9294 > > > > > > > > > > > -- > > --- > > Daniel LaPine, System Engineer > > National Center for Supercomputing Applications (NCSA) > > email: dl...@nc... > > phone: 217-244-9294 > > > > > -- --- Daniel LaPine, System Engineer National Center for Supercomputing Applications (NCSA) email: dl...@nc... phone: 217-244-9294 |
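The 16K NFS recipe in item 6 of the lessons learned above amounts to one server-side kernel edit plus client-side mount options. The paths and fstab line below are illustrative for a 2.4-era SuSE kernel, not verbatim from this cluster; everything is shown as comments, not run here.

```shell
# Summary of the 16K NFS tuning from the "lessons learned" above, for a
# 2.4-era SuSE kernel. Paths and the fstab line are illustrative.
#
# Server side -- raise the nfsd block size and rebuild the module:
#   edit /usr/src/linux/include/linux/nfsd/const.h:
#     #define NFSSVC_MAXBLKSIZE (16*1024)   /* default was 8K */
#   then rebuild and reload nfsd.o:
#     make -C /usr/src/linux modules SUBDIRS=fs/nfsd
#
# Client side -- request 16K reads/writes in /etc/fstab:
#   server:/export  /mnt/export  nfs  rsize=16384,wsize=16384,hard,intr  0 0
```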
From: egan at sense.n. (E. Ford) - 2003-07-26 17:16:03
|
Use pcons when eth0/eth1 is down, e.g. boot using your new shell setup, then pcons will be able to run parallel commands to recover. The version of pcons you have just needed to see a " # ". Make that PS1 on your shell image. The shell image has sed on it, use that to add a "1" to the append line in elilo.conf so that you can boot to a prompt to disable nfs. I took the scsi .o files of the shell image to save space, but left vfat for the /boot on. I do not have reiser on it. Use pcons to mount /boot and edit it. I have a patch to enable 32K nfsd if you want it for SuSE SLES8 ia64. Default is 8K. The client already supports 32K. I suppose I should write a pconsrcp using rz/sz or uuencode/cat. > -----Original Message----- > From: Dan Lapine [mailto:dl...@nc...] > Sent: Saturday, July 26, 2003 4:26 PM > To: xca...@li... > Subject: RE: [xcat-user] support for install over gige/jumbo frames? > > > On Sat, 26 Jul 2003, Egan Ford wrote: > > > psh noderange init 1 will put all the machines in single user. > assuming that the nodes are in a state where psh works :) > > If, for instance, the nodes are in a state where the reboot > leaves them > hanging on the attempt to mount nfs from servers which are using > Autonegotiation, jumbo frames, version 6.x syskonnect > drivers, and the nodes > themselves are using Autoneg off, standard frames, v4 > syskonnect drivers. > This would happen in the days following a change to the fstab > mount options > to hard mounts. The nodes would then automatically go into > "doorstop" mode. > > Lessons learned: > 1) "psh all chkconfig nfs off". Really. This saves your whole > weekend if messing > with a Force 10 switch. > 2) Have access to said Force 10 switch so as to be able to > revert if necessary > 3) Have Egan's new, super, "nodeset <nodes> init1" option :) > 4) Have Egan's/Dan's updated "nodeset <nodes> shell" such > that it not only goes into > rescue mode, but that rescue mode let's you access the > files on the disk. 
> Maybe even with vi. :) > 5) Don't make the change to jumbo frames on SuSE SLES 8 on > friday afternoon if your > time means anything > > 6) to enable really, really fast nfs service on SuSE and a Force 10 > a) go to jumbo frames > b) user 6.14+ version of driver > c) set "Autonegotiate on" on both the force 10 and the > syskonnects > d) mount the filesystems with rsize=16384,wsize=16384 > e) rebuild the nfsd.o module using 16384 as MAX > "To change the NFS server's blocksize, you will need to modify > /usr/src/linux/include/linux/nfsd/const.h to set > NFSSVC_MAXBLKSIZE > to 16384, and then rebuild the nfsd module." > f) stay on local disk > > > > > You can then use the pcons command to perform parallel > operations to all the > > nodes. pcons works like psh but uses the serial interface. > The serial > > interface should be available from init 1. > > > > All 1.2.0 versions of xCAT with pcons only works if you have no root > > password. I have a newer version that works with root > passwords to be > > released with 1.2.0-pre1a. If you need them I can send you > the updated > > files. > I'd need the new version, but not now, am almost finished. > > > > > -----Original Message----- > > > From: Dan Lapine [mailto:dl...@nc...] > > > Sent: Saturday, July 26, 2003 3:16 PM > > > To: xca...@li... > > > Cc: eg...@se... > > > Subject: Re: [xcat-user] support for install over > gige/jumbo frames? > > > > > > > > > Well, it'll get tested on the dtf... :) > > > > > > We'll need jumbos for the fiber gige rather than the > > > onboard. a call to ifconfig (ifconfig eth1 mtu 9000) > > > works, but only if the card is already up. > > > > > > Actually MTU=9000 fails for the syskonnect cards if you > > > enable it in the /etc/sysconfig scripts. It seems that > > > the driver only wants to change mtu's if the driver is loaded. > > > > > > I've had to write a /etc/init.d script to enable the jumbos > > > after the card is up but before the nfs trys to mount. 
> > > > > > Say, while we're at it, any way in xcat to the node to reboot > > > in single user mode through xCAT? Spending my Saturday afternoon > > > at work doing this manually for 250 + nodes > > > > > > On Sat, 26 Jul 2003, Anas Nashif wrote: > > > > > > > > > > > > > > > Egan Ford wrote: > > > > > Many I am sure, but you will have to ask the author > of autoyast. > > > > > > > > > If you have SP2, you can boot using the kernel and initrd > > > from there.. > > > > Or you can keep old kernel and just update the sk98 driver. > > > > > > > > To enable mtu=9000 you need somehow to take eth0 down and > > > bring it up > > > > again with mtu=9000 using a script in a very early stage.. > > > > This can be done using a pre script in the Update Media > > > > (http://www.suse.de/~nashif/autoinstall/y2update/html/). > > > Basically write > > > > an update.pre to do that with dummy update drivers and in the > > > > update.post you can still add MTU=9000 to > > > > /etc/sysconfig/network/ifcfg-eth0!! > > > > > > > > This needs some testing though :-) > > > > > > > > > > > > Anas > > > > > > > > > > > > > >>-----Original Message----- > > > > >>From: Dan Lapine [mailto:dl...@nc...] > > > > >>Sent: Friday, July 25, 2003 10:19 PM > > > > >>To: xca...@li... > > > > >>Subject: RE: [xcat-user] support for install over > > > gige/jumbo frames? > > > > >> > > > > >> > > > > >>any reason not to replace autoyast kernel and then update the > > > > >>install to include a > > > > >>"ifconfig eth1 mtu 9000" > > > > >> > > > > >>somewhere in the install sequence? > > > > >> > > > > >>On Thu, 24 Jul 2003, Egan Ford wrote: > > > > >> > > > > >> > > > > >>>The author of autoyast said to my request for 9000 MTU, > > > > >> > > > > >>"unfortunately no". > > > > >> > > > > >>>Beaverton has already made a few attempts to get it working > > > > >> > > > > >>with no luck. > > > > >> > > > > >>>My suggestions are: > > > > >>> > > > > >>>A. Use the management network that is at 1500. 
Since the > > > > >> > > > > >>idea is to > > > > >> > > > > >>>install infrequently it shouldn't be a problem. For large > > > > >> > > > > >>rollouts set the > > > > >> > > > > >>>management nodes eth1 to 1500 then change back after the > > > > >> > > > > >>install. Your > > > > >> > > > > >>>stage nodes will need to stay at 1500 until all nodes have > > > > >> > > > > >>been installed. > > > > >> > > > > >>>B. Have a dedicated NFS server for installs only at 1500. > > > > >> > > > > >>Your stage nodes > > > > >> > > > > >>>will need to stay at 1500 until all nodes have been > > > > >> > > > > >>installed. Then you can > > > > >> > > > > >>>change all eth1 to 9000. > > > > >>> > > > > >>>C. Use the forthcoming system imager multicast installer > > > > >> > > > > >>on the management > > > > >> > > > > >>>network. > > > > >>> > > > > >>>I'd do A then switch to C. Hopefully C will be ready > > > for phase 2. > > > > >>> > > > > >>> > > > > >>>>-----Original Message----- > > > > >>>>From: Dan Lapine [mailto:dl...@nc...] > > > > >>>>Sent: Thursday, July 24, 2003 3:31 PM > > > > >>>>To: xca...@li... > > > > >>>>Subject: [xcat-user] support for install over > gige/jumbo frames? > > > > >>>> > > > > >>>> > > > > >>>>Looking to move a teragrid system to jumbo frames on > > > > >>>>the syskonnect driver. > > > > >>>> > > > > >>>>Any way to enable jumbo frames during the SuSE > SLES8 install? > > > > >>>>I know that we'll need to update the install kernel. > > > > >>>> > > > > >>>>-- > > > > >>>>--- > > > > >>>>Daniel LaPine, System Engineer > > > > >>>>National Center for Supercomputing Applications (NCSA) > > > > >>>>email: dl...@nc... > > > > >>>>phone: 217-244-9294 > > > > >>>> > > > > >>> > > > > >>-- > > > > >>--- > > > > >>Daniel LaPine, System Engineer > > > > >>National Center for Supercomputing Applications (NCSA) > > > > >>email: dl...@nc... 
> > > > >>phone: 217-244-9294 > > > > >> > > > > > > > > > > > > > > > > > > > > -- > > > --- > > > Daniel LaPine, System Engineer > > > National Center for Supercomputing Applications (NCSA) > > > email: dl...@nc... > > > phone: 217-244-9294 > > > > > > > -- > --- > Daniel LaPine, System Engineer > National Center for Supercomputing Applications (NCSA) > email: dl...@nc... > phone: 217-244-9294 > > |
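The sed edit Egan describes (adding a "1" to the append line in elilo.conf so the node boots single-user) might look like the following; the append line shown is a made-up example, since the real kernel arguments differ per cluster.

```shell
# Add runlevel "1" to the kernel arguments in elilo.conf so the node boots
# to single user. The append line below is a made-up example; the real one
# differs per cluster.
conf='append="root=/dev/sda2 console=ttyS0,9600"'
single=$(printf '%s\n' "$conf" | sed 's/^append="\(.*\)"$/append="\1 1"/')
echo "$single"
```

Run over the real file from the shell image (which carries sed but no editors), the same substitution rewrites elilo.conf in place before the next netboot.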
From: dlapine at ncsa.uiuc.e. (D. Lapine) - 2003-07-26 16:25:54
|
On Sat, 26 Jul 2003, Egan Ford wrote: > psh noderange init 1 will put all the machines in single user. assuming that the nodes are in a state where psh works :) For instance, the nodes might be in a state where the reboot leaves them hanging on the attempt to mount nfs from servers which are using autonegotiation, jumbo frames, and version 6.x syskonnect drivers, while the nodes themselves are using autoneg off, standard frames, and v4 syskonnect drivers. This would happen in the days following a change of the fstab mount options to hard mounts. The nodes would then automatically go into "doorstop" mode. Lessons learned: 1) "psh all chkconfig nfs off". Really. This saves your whole weekend if messing with a Force 10 switch. 2) Have access to said Force 10 switch so as to be able to revert if necessary 3) Have Egan's new, super, "nodeset <nodes> init1" option :) 4) Have Egan's/Dan's updated "nodeset <nodes> shell" such that it not only goes into rescue mode, but that rescue mode lets you access the files on the disk. Maybe even with vi. :) 5) Don't make the change to jumbo frames on SuSE SLES 8 on Friday afternoon if your time means anything 6) To enable really, really fast nfs service on SuSE and a Force 10: a) go to jumbo frames b) use the 6.14+ version of the driver c) set "Autonegotiate on" on both the Force 10 and the syskonnects d) mount the filesystems with rsize=16384,wsize=16384 e) rebuild the nfsd.o module using 16384 as MAX "To change the NFS server's blocksize, you will need to modify /usr/src/linux/include/linux/nfsd/const.h to set NFSSVC_MAXBLKSIZE to 16384, and then rebuild the nfsd module." f) stay on local disk > > You can then use the pcons command to perform parallel operations to all the > nodes. pcons works like psh but uses the serial interface. The serial > interface should be available from init 1. > > All 1.2.0 versions of xCAT with pcons only works if you have no root > password. 
> I have a newer version that works with root passwords to be released
> with 1.2.0-pre1a. If you need them I can send you the updated files.

I'd need the new version, but not now, am almost finished.

> > -----Original Message-----
> > From: Dan Lapine [mailto:dl...@nc...]
> > Sent: Saturday, July 26, 2003 3:16 PM
> > To: xca...@li...
> > Cc: eg...@se...
> > Subject: Re: [xcat-user] support for install over gige/jumbo frames?
> >
> > Well, it'll get tested on the dtf... :)
> >
> > We'll need jumbos for the fiber gige rather than the onboard. A
> > call to ifconfig (ifconfig eth1 mtu 9000) works, but only if the
> > card is already up.
> >
> > Actually MTU=9000 fails for the syskonnect cards if you enable it
> > in the /etc/sysconfig scripts. It seems that the driver only wants
> > to change MTUs if the driver is loaded.
> >
> > I've had to write an /etc/init.d script to enable the jumbos after
> > the card is up but before NFS tries to mount.
> >
> > Say, while we're at it, any way in xCAT to reboot a node into
> > single-user mode? Spending my Saturday afternoon at work doing
> > this manually for 250+ nodes.
> >
> > On Sat, 26 Jul 2003, Anas Nashif wrote:
> >
> > > Egan Ford wrote:
> > > > Many I am sure, but you will have to ask the author of autoyast.
> > >
> > > If you have SP2, you can boot using the kernel and initrd from
> > > there. Or you can keep the old kernel and just update the sk98
> > > driver.
> > >
> > > To enable mtu=9000 you need somehow to take eth0 down and bring
> > > it up again with mtu=9000 using a script at a very early stage.
> > > This can be done using a pre script in the Update Media
> > > (http://www.suse.de/~nashif/autoinstall/y2update/html/).
> > > Basically write an update.pre to do that with dummy update
> > > drivers, and in the update.post you can still add MTU=9000 to
> > > /etc/sysconfig/network/ifcfg-eth0!!
> > > This needs some testing though :-)
> > >
> > > Anas
> > >
> > > > -----Original Message-----
> > > > From: Dan Lapine [mailto:dl...@nc...]
> > > > Sent: Friday, July 25, 2003 10:19 PM
> > > > To: xca...@li...
> > > > Subject: RE: [xcat-user] support for install over gige/jumbo frames?
> > > >
> > > > Any reason not to replace the autoyast kernel and then update
> > > > the install to include a
> > > >
> > > > "ifconfig eth1 mtu 9000"
> > > >
> > > > somewhere in the install sequence?
> > > >
> > > > On Thu, 24 Jul 2003, Egan Ford wrote:
> > > >
> > > > > The author of autoyast said to my request for 9000 MTU,
> > > > > "unfortunately no". Beaverton has already made a few attempts
> > > > > to get it working with no luck.
> > > > >
> > > > > My suggestions are:
> > > > >
> > > > > A. Use the management network that is at 1500. Since the idea
> > > > > is to install infrequently it shouldn't be a problem. For
> > > > > large rollouts set the management node's eth1 to 1500 then
> > > > > change back after the install. Your stage nodes will need to
> > > > > stay at 1500 until all nodes have been installed.
> > > > >
> > > > > B. Have a dedicated NFS server for installs only at 1500.
> > > > > Your stage nodes will need to stay at 1500 until all nodes
> > > > > have been installed. Then you can change all eth1 to 9000.
> > > > >
> > > > > C. Use the forthcoming system imager multicast installer on
> > > > > the management network.
> > > > >
> > > > > I'd do A then switch to C. Hopefully C will be ready for
> > > > > phase 2.
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Dan Lapine [mailto:dl...@nc...]
> > > > > > Sent: Thursday, July 24, 2003 3:31 PM
> > > > > > To: xca...@li...
> > > > > > Subject: [xcat-user] support for install over gige/jumbo frames?
> > > > > >
> > > > > > Looking to move a teragrid system to jumbo frames on the
> > > > > > syskonnect driver.
> > > > > >
> > > > > > Any way to enable jumbo frames during the SuSE SLES8
> > > > > > install? I know that we'll need to update the install
> > > > > > kernel.
> > > > > >
> > > > > > --
> > > > > > ---
> > > > > > Daniel LaPine, System Engineer
> > > > > > National Center for Supercomputing Applications (NCSA)
> > > > > > email: dl...@nc...
> > > > > > phone: 217-244-9294

--
---
Daniel LaPine, System Engineer
National Center for Supercomputing Applications (NCSA)
email: dl...@nc...
phone: 217-244-9294
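[Editor's note] Dan's workaround earlier in the thread (the sk98lin driver only accepts an MTU change once the card is up, so the MTU must be raised after the network comes up but before NFS mounts) could look roughly like the script below. This is a sketch under assumptions: the script name, interface name, and the runlevel wiring are illustrative, not from the thread.

```shell
#!/bin/sh
# /etc/init.d/jumbo-mtu -- hypothetical sketch of the workaround Dan
# describes: raise the MTU after the driver has the card up, but
# before the NFS mounts run. eth1 and the name are assumptions.
case "$1" in
  start)
    # the driver refuses an MTU change unless the interface is up
    ifconfig eth1 up
    ifconfig eth1 mtu 9000
    ;;
  stop)
    # fall back to standard frames
    ifconfig eth1 mtu 1500
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
# Link it so it runs after the network script but before NFS mounts,
# e.g. with chkconfig --add or insserv on SuSE; the exact ordering
# is site-specific and not specified in the thread.
```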
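[Editor's note] Anas's update.pre/update.post suggestion for the autoyast Update Media could be sketched as below. Hedged heavily: the exact hook semantics are described only at the URL he gives, and the file contents here are illustrative assumptions, not tested fragments from the thread.

```shell
# update.pre -- hypothetical sketch: runs at an early stage of the
# install, cycling eth0 so it comes back up with a 9000-byte MTU
ifconfig eth0 down
ifconfig eth0 mtu 9000 up

# update.post -- hypothetical sketch: persist the setting on the
# installed system so eth0 keeps MTU 9000 after first boot
echo 'MTU=9000' >> /etc/sysconfig/network/ifcfg-eth0
```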