Is there a way to enable jumbo frame support (9000 MTU) when using Clonezilla Server Edition? Here is what I am doing: I have a cart for each of my sites with a gigabit Ethernet switch, and all my machines are gigabit Ethernet. When we clone them, we take them off the production network, wire them up to the cart, and send out the image. We are trying to get as much performance out of this as possible, so we need to know whether some tweaks on our end can make the following work:
Server: 9000 MTU on the NIC
Clients: set the NIC to 9000 MTU through the PXE boot. The switches are Cisco, so we can set them to support jumbo frames. I am continuing to do research, but a bit of help would go a long way. Thanks!
-Will Farmer
So you mean to tune the MTU of the Ethernet card?
If so, you can run something like:
ifconfig eth0 mtu 9000
to change that.
Steven.
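To check that the change actually took effect:

    ip link show eth0

To make it persist across reboots on a Debian/Ubuntu system with the classic ifupdown setup, an mtu line can be added to the interface stanza in /etc/network/interfaces (a minimal sketch; the address and netmask below are placeholders for whatever the server already uses):

    iface eth0 inet static
        # address/netmask are placeholders; keep your existing values
        address 192.168.100.254
        netmask 255.255.255.0
        mtu 9000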
Actually, I think I have the process down. I'll do a bit of a write-up on what I had to do to make it work, and whether it works any better, once I finish testing it in my test environment. I'm currently doing it on Ubuntu 12.04.
So far, I can set it on the server using that command, and in dhcpd.conf I can set up option 26 (option interface-mtu 9000) so that the end machines will be 9000 end to end. I'll be testing it tomorrow for thoroughness. I might even throw in a machine running Wireshark to monitor.
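For reference, the relevant dhcpd.conf stanza looks roughly like this (a sketch; the subnet and range values are placeholders, not the actual DRBL-generated ones):

    subnet 192.168.100.0 netmask 255.255.255.0 {
        range 192.168.100.1 192.168.100.100;
        option interface-mtu 9000;   # DHCP option 26
    }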
My end goal, truly, is to take a process that was taking people 2 weeks with lots of overtime down to a week with none. Winroll is in my sights for the next piece of the project.
-Will
Thanks. Please keep us posted.
Steven.
Okay, so here is the update:
I have the MTU on the server Ethernet set to 4078, because that shows as the maximum supported by the standard e1000e driver. I have set /etc/dhcp/dhcpd.conf with option interface-mtu 4078, and still the clients are not getting the 4078. I have also set the switch-level MTU to a global 4078.
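Incidentally, an easy way to find a given driver's ceiling is to just try a value and step down until it sticks:

    ip link set dev eth0 mtu 9000   # fails with an error if the driver cannot go that high
    ip link show eth0               # confirm what was actually applied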
When the DHCP ACK packet comes through, it doesn't show the MTU option, but it shows all of the other options. I am running out of ideas here. Any idea why the udhcpc client used in the Clonezilla software doesn't see it or request it? I'll post some configs for evaluation in a few minutes. Thanks!
-Will
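A likely explanation, for anyone else who hits this: a DHCP server generally only returns options that the client named in its parameter request list, and the busybox udhcpc client asks for only a small default set, which does not include option 26. If the client-side udhcpc invocation can be edited, busybox udhcpc accepts -O to request extra options, e.g.:

    udhcpc -i eth0 -O mtu

Whether the received value then gets applied to the interface depends on the udhcpc script in use, so this is only a starting point.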
Okay, I am out of ideas. How can I hardcode an MTU into the client so that it will always use a larger MTU size? I am literally pulling my hair out trying to figure this out. I've given it everything I have, but it seems the Clonezilla client either ignores the setting or never sees it. I've been reading manual after manual, and I think I might be even more lost now than I was before. Help?
-Will
Since you mentioned you are running /etc/rc.local, you can add your command to that file. Remember to put it before the line "exit 0".
If you are using the SSI mode of DRBL, remember to regenerate the template tarball by running "/opt/drbl/sbin/dcs", choosing "all", then "more", then "gen-template-files".
Steven.
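A minimal /etc/rc.local along those lines might look like this (a sketch, assuming eth0 is the interface facing the clients):

    #!/bin/sh -e
    # raise the MTU on the deployment NIC at boot, before clients start pulling images
    ifconfig eth0 mtu 4078
    exit 0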
So I could put ifconfig eth0 mtu 4078 into the rc.local script? I just tried it and it didn't seem to actually apply the setting either. I even restarted the Clonezilla server to make sure it would pick up all the new settings…
Make sure your Linux distribution actually runs /etc/rc.local. Sometimes it's a different file.
So you are running Ubuntu 12.04? Which mode of DRBL did you use? Full DRBL or DRBL SSI?
If you are using the SSI mode of DRBL, remember to regenerate the template tarball by running "/opt/drbl/sbin/dcs", choosing "all", then "more", then "gen-template-files".
Steven.
I have actually switched to standard Debian 6.0.4. I am just using SSI for the DRBL/Clonezilla box mode.
So how's the situation now?
Is everything OK?
Steven.
I'd like to know how to set this for Clonezilla live or PXE boot. I also need to clone with maximum performance, and I'd like a way to set the MTU to 9000. Maybe a syslinux live-config command on boot, or an ocs_prerun?
If you know how to tune that with a command, then yes, ocs_prerun could be used for that.
Steven.
So do I simply add ocs_prerun="ifconfig eth0 mtu 9000" and it will work?
Or does it need a busybox/sudo prefix added?
Just tested it with ocs_prerun="sudo ifconfig eth0 mtu 9014" (to match the NIC frame size), confirmed with a packet capture tool, and it's working perfectly. However, I didn't find any improvement in speed compared to the normal MTU with only one client computer connected.
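For a PXE setup, the same parameter goes on the kernel append line of the pxelinux entry, roughly like this (a sketch; the kernel/initrd paths and the other boot parameters should stay whatever the existing Clonezilla live entry uses):

    LABEL clonezilla
      KERNEL vmlinuz
      APPEND initrd=initrd.img boot=live ocs_prerun="ifconfig eth0 mtu 9014"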
One of the things I noticed was that quite a few of the NICs we have on the desktops didn't actually support true jumbo packets. Most of them topped out in the realm of 4000 or so MTU. But my question is: how large is the image you are using, and with what compression, etc.?
The image is ~5 GB, compressed with pigz.
There's no problem if there's 4K support; that already counts as jumbo, and the endpoints will negotiate the MTU accordingly.
In my case the whole chain supports 9K jumbo frames.
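For anyone unfamiliar with it, pigz is simply a parallel gzip, so standalone it can be used like this (a sketch, not the exact command Clonezilla runs internally):

    # stream a partition image through all CPU cores into a gzip-compatible file
    dd if=/dev/sda1 bs=4M | pigz > sda1.img.gz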
My primary reason was that our images run upwards of ~30-40 GB per PC, so *ANY* gain I can get is essential. I have been off this for a while, but will be returning to it on the 2nd of January.
Is that with bzip2?
My images might be small, but I'm rolling out a huge number of them, so each second won makes a huge difference.
My current setup for maximum performance is as follows:
Image server: Win2008R2 with a 5.5 GB RAM drive holding the image and boot file
Broadcom dual-port GbE NIC in an LACP team, with jumbo frames
PCs boot from PXE completely unattended to restore the image to the HDD and reboot
Using the latest stable x64 build, 2.0.1-15-amd64 (I've yet to see if there's a difference with the Ubuntu-based image)
In the test environment (without teamed NICs, using an onboard Realtek GbE with a straight cable to the PC):
From the moment I press "boot from NIC" in the boot selection menu to the reboot, I clock exactly 6 minutes, with the main partition restoring at ~8 GB/min. (After it has transferred the entire image, though, partimage stays processing something at 100% of one CPU, as shown in top, which suggests it's not multithreaded, all the while with ZERO HDD activity and zero network activity. That part eats a good one to two minutes; I'll open a thread about it later.)
NIC usage is ~50 to 60% (~500-600 Mbit/s).
In your case a RAM drive would require a large server. Your best bet would be a big NAS, like nas4free with ZFS, with as much RAM as you can give it for the ZFS cache and as many NICs as you can fit, serving over NFS, which has less overhead.
Or go with Clonezilla server and use multicast (though results can be iffy depending on switches and drivers); I'd rather do every station manually on its own.
Yes, unfortunately, that is with the highest compression; we can get it down to about 15 GB. These images are academic in nature and have to carry the complete semester course software requirements for 4 different campuses of computers, so they get rather large, admittedly.
As for hardware, I have some decent Core i7 Dell 980 PCs that have been upgraded in terms of memory (4 GB RAM right now; I could add more if I felt it would be a benefit).
The hard drive is a 128 GB SSD on SATA/600.
The onboard NIC does support jumbo frames at the 9000 MTU level.
I am running Debian 6.0.5.
I have a Dell PowerConnect 2224 switch, all gigabit, set to a 9000 MTU, with room for up to 23 other devices.
I have it loaded on a cart (lovingly called Oodako, after the Godzilla octopus). (Truly, I'm not sure HOW it got painted pearl lime green with blue flames…)
I also have 24 fifty-foot cables on reels stored in the cart for quick deployment to the room. We take the room off the production network, image the machines, and bring them back up. If I can ever figure out how to get Winroll to work properly with our network, I will be one step closer to making this as streamlined as possible.
I'm testing with a PowerConnect 2248. The problem is that the 2000 series doesn't support LACP, only static aggregation, which is poor (and it depends on whether your onboard NIC supports static, pre-LACP aggregation).
You should look at aggregating 2 to 4 links (or, even better, go with 10 GbE, but that requires at least a PowerConnect 55xx series switch and a 10 GbE NIC).
The SSD is nice for high speeds, but it doesn't beat RAM as a cache. Even a good, fast SSD will only reach 400-500 MB/s at high queue depth, and at low queue depth (like cloning very few PCs) you're not going to saturate it (then again, the network will probably be your bottleneck).
Try putting 16 GB of RAM in a PC with whatever OS you like (I'd rather use nas4free or OpenSolaris), give ZFS a big cache, and see if it makes a difference.
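On a FreeBSD-based NAS like nas4free, the ZFS read cache (the ARC) grows into free RAM on its own; if you want to set the ceiling explicitly, it's a loader tunable (a sketch; the value is just an example for a 16 GB box):

    # /boot/loader.conf
    vfs.zfs.arc_max="14G"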