I am experimenting with AoE as an alternative to iSCSI, and I am getting very good results from AoE with Jumbo frames enabled.
I’m using vblade v14, and get block output at 200-250 MB/s and block input at 250-300 MB/s with an MTU of 9000, which is significantly faster than our results with iSCSI.
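For reference, jumbo frames were enabled on each link roughly like this (the interface name is just an example):

```shell
# Raise the MTU to 9000 on a dedicated gigabit link.
# Both endpoints (and any switch in between) must support
# jumbo frames for this to take effect end to end.
ip link set dev eth1 mtu 9000

# Verify the new MTU
ip link show dev eth1
```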
I am, however, getting very poor results from AoE for read operations (6-10 MB/s) when jumbo frames are not enabled.
It feels like the software is ‘waiting’ for something, or some sort of configuration parameter is not in place.
I was wondering if anyone has encountered this problem already, as I’m sure I should be getting >100 MB/s for these reads.
I have four software RAID-5 md devices on my storage machine. Each one is a group of 4 physical 500GB disks.
I have run 4 vblade daemons, each one representing a different md device.
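In case it matters, each export is started along these lines (shelf/slot numbers, interface names and device paths here are illustrative; vbladed is the daemonizing wrapper shipped with vblade):

```shell
# One vblade daemon per md device, each bound to its own NIC.
# Syntax: vbladed <shelf> <slot> <ethN> <device>
vbladed 0 0 eth0 /dev/md0
vbladed 0 1 eth1 /dev/md1
vbladed 0 2 eth2 /dev/md2
vbladed 0 3 eth3 /dev/md3
```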
I have dedicated a separate gigabit Ethernet link for each vblade export (there are 4 gigabit Ethernet ports on the storage machine).
Netperf shows me that I’m getting 950+Mb/s over each channel.
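That figure comes from a plain TCP stream test, roughly as below (netserver runs on the storage machine; the address is an example):

```shell
# Default TCP_STREAM test over one of the dedicated links,
# run from the client against the storage machine's address
# on that link.
netperf -H 10.0.1.1
```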
On the server side, using the aoe driver included in the Linux 2.6.21 kernel, I have bound each AoE device to the Ethernet link facing the corresponding MAC address on the storage machine.
Therefore the server gets 4 etherd devices, each one on its own physical gigabit Ethernet connection. There is no channel mixup.
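For completeness, the per-link binding can be expressed with the aoe-interfaces tool from aoetools (the interface names are examples):

```shell
# Restrict the aoe driver to the four dedicated links
aoe-interfaces eth0 eth1 eth2 eth3

# Rescan for targets and list what was found
aoe-discover
aoe-stat        # shows each eX.Y target and the interface it uses
ls /dev/etherd/ # block devices appear as /dev/etherd/eX.Y
```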
Results using MTU9000 frames are very good.
iSCSI results at MTU 1500 are also very good, but AoE reads at MTU 1500 are terrible. The writes at MTU 1500, however, are good, which is strange.
Any information much appreciated,