From: Adar D. <ad...@vm...> - 2007-10-26 02:36:09
> I finally finished packaging the open-vm-tools for Debian,

Great! Is your package targeted at Lenny? Or Sarge?

> but I do have some questions regarding a) VMware ESX (in conjunction with
> open-vm-tools) and b) the general rule of thumb regarding packaging.
>
> 1) Is it really necessary to load the vmxnet driver inside a VMware ESX
> guest?

To answer this, let me give you some background on the variety of virtual
networking devices that VMware uses.

Today we support three virtual NICs: morphable vlance, vmxnet, and e1000.
I believe the newest UI creation wizards (in the latest WS/Fusion releases)
will use e1000 for a 64-bit VM and morphable vlance otherwise, though
through the .vmx file you can always change a particular NIC to be any one
of the three types.

The vmxnet and e1000 virtual NICs are fairly straightforward: the former is
a VMware paravirtualized network device (with special vendor/device IDs),
while the latter should behave and look like a real e1000 card. The
morphable vlance is special, though. It appears to be an AMD vlance device
that can be driven by the pcnet32 module found in the Linux kernel.
However, the vmxnet module also claims to drive an AMD vlance, and when the
vmxnet module is loaded against a morphable vlance device, it will "morph"
the vlance device into the vmxnet virtual NIC I mentioned earlier, which
should result in higher-performance networking.

From a packaging perspective, you can't know ahead of time what virtual
hardware will be available in the guest. Ideally you could just make the
vmxnet module available to the kernel and let udev or hotplug sort it out.
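As an aside, the per-NIC type mentioned above is pinned in the VM's .vmx
file; a minimal sketch from memory (option names should be verified against
your product's documentation, and the default-when-omitted behavior is my
assumption):

```
ethernet0.present = "TRUE"
# "vmxnet" or "e1000"; leaving virtualDev unset should give you the
# default device (morphable vlance, shown as "flexible" in the UI).
ethernet0.virtualDev = "vmxnet"
```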
The problem is that because of the morphable vlance, udev should always
prefer the vmxnet module to the pcnet32 module; otherwise the full
performance of a morphable-vlance-as-vmxnet won't be realized, because udev
will load pcnet32 to drive the morphable vlance device (I think udev goes
through modules in alphabetical order). We've accomplished this in the past
by adding a modprobe.d/modprobe.conf/modules.conf directive that forces
"insmod pcnet32" or "modprobe pcnet32" calls to first try to load the
vmxnet module and, failing that, the pcnet32 module. I think this is
covered in our packaging guidelines.

If you want your package to work both on real hardware and in a VMware
guest, you should make sure that loading pcnet32 means loading both vmxnet
and pcnet32 (in that order). If the vmxnet module finds a real AMD vlance
device (which apparently exposes fewer IO ports than our morphable vlance),
it won't grab it, and the subsequent loading of pcnet32 should succeed.

> 2) Is there a rule of thumb on how to blacklist other ethernet drivers /
> alias other drivers to vmxnet?
>
> I tried putting in an alias in /etc/modprobe.d (for e1000, since the
> guest loads the "visible" ethernet device driver), which results in
> e1000 still loading.
>
> If I blacklist the e1000 driver, no network is operational.
>
> Or is there no need at all to do this and it's just fine w/ e1000 and
> it's safe to move the VM onto a different configured ESX host?
>
> I'm asking, since I do see "Adapter type: e1000" on one ESX host and
> "Adapter type: flexible" on a different one.

Right, "flexible" means "morphable vlance". If e1000 is exposed to the
guest, vmxnet doesn't need to be loaded at all. But since your package
doesn't know whether the guest will get e1000 or morphable vlance (or a
real vmxnet backend, which the vmxnet module can drive), it should work for
all cases.
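The modprobe.d directive described earlier (try vmxnet first, fall back to
pcnet32) can be sketched roughly like this; the file name is illustrative
and the exact flags should be checked against our packaging guidelines:

```
# /etc/modprobe.d/open-vm-tools.conf  (hypothetical file name)
# Intercept any "modprobe pcnet32" request: try vmxnet first, and only if
# it declines the device (e.g. a real AMD vlance) load the real pcnet32.
# --ignore-install on the fallback avoids recursing into this rule.
install pcnet32 /sbin/modprobe -q vmxnet || /sbin/modprobe --ignore-install pcnet32
```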
When the VM is moved to a different ESX host (one that supports the
hardware version of the VM), the new host should expose the same virtual
NICs as before.