From: Zachary A. <za...@vm...> - 2007-09-13 23:41:57
> > 2) The current thinking in the kernel community is that we'll converge
> > on virtio based paravirtual devices. I haven't looked at the VMware
> > transports deeply enough to know yet whether they would fit nicely
> > into the virtio model but my suspicion is that they could. Is there
> > interest in this?

Actually, virtio does not look to be very useful for VMware. We don't
have truly paravirtual devices. We have devices that function almost
exactly, and present exactly, as real hardware devices. As such, we have
none of the problems that virtio intends to solve - device discovery,
hotplug and enumeration are already handled by the platform for USB,
SCSI, IDE, or PCI, depending on how we present the device.

We also have devices that present much more complex abstractions than
the virtio layer is prepared to deal with. If we were to write a virtio
block device backend today, it would take virtio requests, convert them
into SCSI SCBs, and then issue them to our virtual SCSI card. That makes
no sense.

Virtio also complicates things quite a bit for us - now we have devices
which present as virtio devices on virtio kernels, and as PCI devices
elsewhere (perhaps they may even present differently on the same kernel,
configured differently). I don't understand why we should go and rewrite
our device code to use virtio, because I don't see any tangible benefit
for anyone in that endeavor. I see:

1) The likelihood of us having paravirt-only devices that present on
non-hardware busses is near zero in, I would guess, the next 5 years.
It seems unlikely we would ever use a non-hardware bus interface unless
it was adopted as a common cross-platform standard for virtualization.

2) We don't have any new discovery / configuration / enumeration
problems that are specific to virtual devices on new busses.

3) Our devices are more complex than the current virtio models can
support, meaning we lose functionality (CD / DVD control commands
reduced to a generic block device, etc.); and we also have to
reimplement an ATA- or SCSI-like layer underneath virtio, which is
wasteful in terms of code and performance.

4) Presenting our virtual devices differently to different kernels leads
either to VM configuration problems (where the user must choose PCI vs.
virtio, then is stuck with potentially unusable kernels), or exposes the
virtio layer to the problem of separating and cherry-picking devices out
of the PCI, USB, ... spaces to connect through virtio transports. So we
would either complicate the virtio code, or subvert PCI and USB
detection through nefarious and ugly hacks.

5) We're exposed to more code churn (at least initially) in the virtio
layer than the maintenance effort we have maintaining our existing
drivers, presenting greater bug risk to us.

It's not that virtio is a bad thing if you have true, virtual-only
busses and devices; it's just that this layer is totally unnecessary for
us, and it doesn't seem like anyone would benefit from us going down
that route for any of our existing drivers. It might, perhaps, make
sense for VHCI, but that remains to be seen.

Zach
From: Anthony L. <ali...@us...> - 2007-09-14 02:54:51
Zachary Amsden wrote:
> Actually, virtio does not look to be very useful for VMware. We don't
> have truly paravirtual devices. We have devices that function almost
> exactly, and present exactly, as real hardware devices. As such, we
> have none of the problems that virtio intends to solve - device
> discovery, hotplug and enumeration are already handled by the platform
> for USB, SCSI, IDE, or PCI, depending on how we present the device.

Hey Zach,

Actually, I think you're making a bad assumption here. virtio does not
mandate how discovery happens. It's only interested in the transport
protocol of the PV device (not the actual mechanism of transport). For
KVM, the current direction is to actually implement virtio on top of a
PCI device. The current implementation uses PCI for discovery but is
using hypercalls, mainly to avoid the fact that PCI interrupts require
acking. I think this is something we may just end up biting though, such
that virtio devices wouldn't even use hypercalls.

> 1) The likelihood of us having paravirt-only devices that present on
> non-hardware busses is near zero in, I would guess, the next 5 years.
> It seems unlikely we would ever use a non-hardware bus interface
> unless it was adopted as a common cross-platform standard for
> virtualization.

I am not interested in non-hardware busses. I think there's pretty wide
agreement (with the exception of the s390 folks) that PCI is the way to
go.

> 2) We don't have any new discovery / configuration / enumeration
> problems that are specific to virtual devices on new busses.
>
> 3) Our devices are more complex than the current virtio models can
> support, meaning we lose functionality (CD / DVD control commands
> reduced to a generic block device, etc.); and we also have to
> reimplement an ATA- or SCSI-like layer underneath virtio, which is
> wasteful in terms of code and performance.
>
> 4) Presenting our virtual devices differently to different kernels
> leads either to VM configuration problems (where the user must choose
> PCI vs. virtio, then is stuck with potentially unusable kernels), or
> exposes the virtio layer to the problem of separating and
> cherry-picking devices out of the PCI, USB, ... spaces to connect
> through virtio transports. So we would either complicate the virtio
> code, or subvert PCI and USB detection through nefarious and ugly
> hacks.
>
> 5) We're exposed to more code churn (at least initially) in the virtio
> layer than the maintenance effort we have maintaining our existing
> drivers, presenting greater bug risk to us.
>
> It's not that virtio is a bad thing if you have true, virtual-only
> busses and devices; it's just that this layer is totally unnecessary
> for us, and it doesn't seem like anyone would benefit from us going
> down that route for any of our existing drivers. It might, perhaps,
> make sense for VHCI, but that remains to be seen.

I don't think you should just drop your current drivers. However, I
suspect that it would be more fruitful to focus on getting the virtio
drivers up to your standards instead of attempting to get your existing
ones upstream (if that is at all a goal). I think a lot of your concerns
are valid for any VMM, so participating in this area would be generally
helpful.

Regards,

Anthony Liguori

> Zach
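The distinction Anthony draws - the bus (PCI) handles discovery, while virtio only standardizes the transport the driver talks to - can be sketched in plain C. This is an illustrative model under invented names (`transport_ops`, `pv_device`, `driver_kick`), not the actual virtio API:

```c
#include <assert.h>

/* Transport half: the part virtio standardizes. How the driver kicks
 * the host is pluggable - hypercall, MMIO doorbell, whatever. */
struct transport_ops {
    const char *name;
    int (*notify)(int queue);   /* returns 0 on success */
};

static int notify_hypercall(int queue) {
    (void)queue;                /* stand-in for a hypercall-based kick */
    return 0;
}

static int notify_pci_mmio(int queue) {
    (void)queue;                /* stand-in for an MMIO write to a PCI BAR */
    return 0;
}

static struct transport_ops hypercall_transport = { "hypercall", notify_hypercall };
static struct transport_ops mmio_transport      = { "pci-mmio",  notify_pci_mmio  };

/* Discovery half: the bus (PCI here) finds the device and hands the
 * driver an already-bound transport. The driver never needs to know
 * which bus enumerated it. */
struct pv_device {
    const char *bus;            /* "pci", "usb", ... */
    struct transport_ops *ops;
};

static int driver_kick(struct pv_device *dev) {
    return dev->ops->notify(0); /* kick queue 0 */
}
```

Swapping `hypercall_transport` for `mmio_transport` changes nothing in the driver, which is the point: dropping hypercalls later (as Anthony suggests KVM may) would be a transport-side change only.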
From: Zachary A. <za...@vm...> - 2007-09-14 09:46:28
On Thu, 2007-09-13 at 21:54 -0500, Anthony Liguori wrote:
> Hey Zach,
>
> Actually, I think you're making a bad assumption here. virtio does not
> mandate how discovery happens. It's only interested in the transport
> protocol of the PV device (not the actual mechanism of transport). For
> KVM, the current direction is to actually implement virtio on top of a
> PCI device. The current implementation uses PCI for discovery but is
> using hypercalls, mainly to avoid the fact that PCI interrupts require
> acking. I think this is something we may just end up biting though,
> such that virtio devices wouldn't even use hypercalls.

Okay, so how does virtio help at all? I guess I am missing the primary
motivation for why it is a good thing.

> > 3) Our devices are more complex than the current virtio models can
> > support, meaning we lose functionality (CD / DVD control commands
> > reduced to a generic block device, etc.); and we also have to
> > reimplement an ATA- or SCSI-like layer underneath virtio, which is
> > wasteful in terms of code and performance.

Wouldn't we still need to add code to emulate these layers beneath
virtio if that is in fact how our virtual devices interact? This is the
primary reason I see virtio as being unproductive for VMware.

> I don't think you should just drop your current drivers. However, I
> suspect that it would be more fruitful to focus on getting the virtio
> drivers up to your standards instead of attempting to get your
> existing ones upstream (if that is at all a goal). I think a lot of
> your concerns are valid for any VMM, so participating in this area
> would be generally helpful.

Looking at the problem from a high level, with the assumption of keeping
our current driver models, this seems to say that the best thing to do
is adapt virtio into a layer that can meet the goals of all SCSI and
IDE / CD / DVD / USB devices.

But the problem is that Linux already has great layers of code to do
that. Why are we re-inventing the wheel and encapsulating all this
beneath another layer of interface when the real interfaces we want
already exist?

If we want a cleaner driver model that separates architecture and
communication transports from the driver implementation, shouldn't we be
doing that at a higher level - one that actually has nothing at all to
do with virtualization?

Zach
From: Anthony L. <an...@co...> - 2007-09-14 19:49:20
Zachary Amsden wrote:
> On Thu, 2007-09-13 at 21:54 -0500, Anthony Liguori wrote:
>> Hey Zach,
>>
>> Actually, I think you're making a bad assumption here. virtio does
>> not mandate how discovery happens. It's only interested in the
>> transport protocol of the PV device (not the actual mechanism of
>> transport). For KVM, the current direction is to actually implement
>> virtio on top of a PCI device. The current implementation uses PCI
>> for discovery but is using hypercalls, mainly to avoid the fact that
>> PCI interrupts require acking. I think this is something we may just
>> end up biting though, such that virtio devices wouldn't even use
>> hypercalls.
>
> Okay, so how does virtio help at all? I guess I am missing the primary
> motivation for why it is a good thing.

There are two questions, I think. The first is whether you should invest
in switching your current drivers over to virtio. This would be useful
if you were looking to get something upstream, as I don't think your
drivers are dramatically different from what the virtio ones would look
like.

The second question is whether you're interested in virtio for things
that you currently don't support. Since y'all were talking about
switching the drivers away from the current backdoor stuff, it seemed
like that would be a good time to perhaps shim in virtio. Keep in mind,
there doesn't have to be only one virtio network driver. The advantage
of virtio is that if you guys port your driver over to virtio, other
hypervisors can potentially (easily) make use of it.

>>> 3) Our devices are more complex than the current virtio models can
>>> support, meaning we lose functionality (CD / DVD control commands
>>> reduced to a generic block device, etc.); and we also have to
>>> reimplement an ATA- or SCSI-like layer underneath virtio, which is
>>> wasteful in terms of code and performance.
>
> Wouldn't we still need to add code to emulate these layers beneath
> virtio if that is in fact how our virtual devices interact? This is
> the primary reason I see virtio as being unproductive for VMware.

Rusty's current virtio block device provides SCSI pass-through, so
things like CD/DVD control commands work. It's more sophisticated than
the Xen driver in that regard. But I thought you guys don't use a
paravirt block driver? I'm much less convinced that a paravirt block
device offers a lot of advantages over SCSI emulation (but it exists b/c
lguest doesn't want to do any emulation).

>> I don't think you should just drop your current drivers. However, I
>> suspect that it would be more fruitful to focus on getting the virtio
>> drivers up to your standards instead of attempting to get your
>> existing ones upstream (if that is at all a goal). I think a lot of
>> your concerns are valid for any VMM, so participating in this area
>> would be generally helpful.
>
> Looking at the problem from a high level, with the assumption of
> keeping our current driver models, this seems to say that the best
> thing to do is adapt virtio into a layer that can meet the goals of
> all SCSI and IDE / CD / DVD / USB devices.
>
> But the problem is that Linux already has great layers of code to do
> that. Why are we re-inventing the wheel and encapsulating all this
> beneath another layer of interface when the real interfaces we want
> already exist?
>
> If we want a cleaner driver model that separates architecture and
> communication transports from the driver implementation, shouldn't we
> be doing that at a higher level - one that actually has nothing at all
> to do with virtualization?

I really don't see it that way. virtio as an abstraction makes sense
from two perspectives.

If you start with the assumption that most paravirt device drivers of
the same type (and roughly the same function) are going to have a bunch
of roughly equivalent code, then virtio makes sense as a way for those
various drivers to share that common code.

If you start with the assumption that most VMMs end up using roughly the
same sort of transport for all drivers, and that all those transports
have roughly the same properties, then virtio is a very nice way to
abstract that hypervisor interface so people can just target it and
largely avoid having to implement the same driver for N different VMMs.

But you guys have to do whatever makes sense for you. virtio is
certainly not a solution for everything.

Regards,

Anthony Liguori

> Zach
>
> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Microsoft
> Defy all challenges. Microsoft(R) Visual Studio 2005.
> http://clk.atdmt.com/MRT/go/vse0120000070mrt/direct/01/
> _______________________________________________
> open-vm-tools-devel mailing list
> ope...@li...
> https://lists.sourceforge.net/lists/listinfo/open-vm-tools-devel
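The "roughly equivalent code" both of Anthony's perspectives rely on is essentially a shared buffer ring between guest and host. A minimal sketch in plain C - loosely in the spirit of a virtqueue, but far simpler than the real vring layout (no separate available/used rings, no memory barriers), with all names invented:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define RING_SIZE 8u   /* kept a power of two; indices wrap via modulo */

/* One guest-to-host buffer ring that any number of device drivers
 * (net, block, ...) could share instead of each reimplementing it. */
struct ring {
    void    *buf[RING_SIZE];
    uint32_t len[RING_SIZE];
    uint32_t head;   /* next slot the guest fills   */
    uint32_t tail;   /* next slot the host consumes */
};

/* Guest side: publish a buffer. Returns -1 if the ring is full. */
static int ring_add(struct ring *r, void *data, uint32_t len) {
    if (r->head - r->tail == RING_SIZE)
        return -1;
    r->buf[r->head % RING_SIZE] = data;
    r->len[r->head % RING_SIZE] = len;
    r->head++;       /* real code needs a write barrier before this */
    return 0;
}

/* Host side: consume the oldest buffer, or NULL if the ring is empty. */
static void *ring_get(struct ring *r, uint32_t *len) {
    if (r->tail == r->head)
        return NULL;
    *len = r->len[r->tail % RING_SIZE];
    return r->buf[r->tail++ % RING_SIZE];
}
```

Every paravirt driver pair ends up writing something like this; factoring it out once is the "share that common code" argument.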
From: Zachary A. <za...@vm...> - 2007-09-14 20:57:48
On Fri, 2007-09-14 at 14:49 -0500, Anthony Liguori wrote:
> Zachary Amsden wrote:
> > On Thu, 2007-09-13 at 21:54 -0500, Anthony Liguori wrote:
> >> Hey Zach,
> >>
> >> Actually, I think you're making a bad assumption here. virtio does
> >> not mandate how discovery happens. It's only interested in the
> >> transport protocol of the PV device (not the actual mechanism of
> >> transport). For KVM, the current direction is to actually implement
> >> virtio on top of a PCI device. The current implementation uses PCI
> >> for discovery but is using hypercalls, mainly to avoid the fact
> >> that PCI interrupts require acking. I think this is something we
> >> may just end up biting though, such that virtio devices wouldn't
> >> even use hypercalls.
> >
> > Okay, so how does virtio help at all? I guess I am missing the
> > primary motivation for why it is a good thing.
>
> There are two questions, I think. The first is whether you should
> invest in switching your current drivers over to virtio. This would be
> useful if you were looking to get something upstream, as I don't think
> your drivers are dramatically different from what the virtio ones
> would look like.

I don't know what the upstream plans are. It will require some phasing
to get tools in sync with our product releases and into vendors' hands.

> The second question is whether you're interested in virtio for things
> that you currently don't support. Since y'all were talking about
> switching the drivers away from the current backdoor stuff, it seemed
> like that would be a good time to perhaps shim in virtio. Keep in
> mind, there doesn't have to be only one virtio network driver. The
> advantage of virtio is that if you guys port your driver over to
> virtio, other hypervisors can potentially (easily) make use of it.

Okay, that is worth noting - it would be nice to sponsor the design -
but I'm not sure we want any extra churn or destabilization of our
drivers. Bugs happen, and driver bugs happen a lot. Paravirt bugs happen
a lot more as well - the complexity of the system has been increased. It
seems like virtio unnecessarily exposes us to risk, when our standalone
network driver module has a very much lower change rate. We've gone on
and evolved our network driver much farther (it started long before
virtio), and maybe the next version of it might fit in with virtio, but
I think the vmxnet driver as-is will be pretty much frozen in stone in
any relevant time frame. It's not worth rewriting the driver and
increasing the maintenance burden of what will eventually become legacy.

> Rusty's current virtio block device provides SCSI pass-through, so
> things like CD/DVD control commands work. It's more sophisticated than
> the Xen driver in that regard. But I thought you guys don't use a
> paravirt block driver? I'm much less convinced that a paravirt block
> device offers a lot of advantages over SCSI emulation (but it exists
> b/c lguest doesn't want to do any emulation).

We have an LSI Logic and a BusLogic SCSI controller, and a PIIX4 IDE
controller. Very little point trying to make those work under virtio.

> > If we want a cleaner driver model that separates architecture and
> > communication transports from the driver implementation, shouldn't
> > we be doing that at a higher level - one that actually has nothing
> > at all to do with virtualization?
>
> I really don't see it that way. virtio as an abstraction makes sense
> from two perspectives. If you start with the assumption that most
> paravirt device drivers of the same type (and roughly the same
> function) are going to have a bunch of roughly equivalent code, then
> virtio makes sense as a way for those various drivers to share that
> common code.

My point is, there is nothing inherently paravirt about that. I consider
it a device driver interface prettying, not a paravirt-specific thing,
so I think calling it virtio is somewhat misguided. Lots of device
drivers of the same type and roughly the same function could share a
library of roughly equivalent code. That is where I see this eventually
going, and it looks like a gargantuan task of rewriting a bunch of
drivers to have cool, clean, modular interfaces with a shared device
library. It's a valuable cleanup, and it might eventually reduce the
overall number of bugs in the greater driver code base. But it doesn't
actually accomplish anything spectacular for virtualization.

> If you start with the assumption that most VMMs end up using roughly
> the same sort of transport for all drivers, and that all those
> transports have roughly the same properties, then virtio is a very
> nice way to abstract that hypervisor interface so people can just
> target it and largely avoid having to implement the same driver for N
> different VMMs.

The same applies to hardware busses or architectures. It's basically
modularizing a couple of driver classes to be cleaner and have
transport-independent interfaces. It's like writing a nice hardware-free
network driver that has a bunch of plugins to do the actual work, so the
high level can focus on proper algorithms, hot-plug, discovery and all
that. Exactly how a well thought out native driver should be designed.
So if virtio gets it right, don't you see actual hardware drivers using
virtio starting to arrive in the future? I would argue that is a good
thing; but I would also argue it's a huge time-sink and it will probably
not converge anytime soon. That makes it hard to justify converting our
legacy drivers to virtio.

> But you guys have to do whatever makes sense for you. virtio is
> certainly not a solution for everything.

It's much harder to take existing drivers and mold them to virtio than
the other way around. I expect we will keep virtio in mind as we work on
new drivers, though. I need to brush up on the PCI aspects of detection
to make sure it would work for us - we need to selectively bridge
individual native PCI devices to virtio drivers. If that can work, there
is nothing to stop us from moving new devices to virtio.

Such a thing might actually be costly, however. Virtio isn't in older
kernels, and some new devices might be important to add to older
kernels. Until the broad Linux base has virtio support, we might need to
maintain drivers which work with both virtio and non-virtio kernels.
Obviously the right answer is, "Use a newer kernel, knucklehead!" But
that's a marketing cost-basis decision that I have nothing to do with.
Fortunately, as time progresses and older distros get phased out, the
decision becomes easier.

Thanks for the feedback,

Zach
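The "drivers which work with both virtio and non-virtio kernels" scenario is usually handled with a thin compatibility shim selected at build time. A hypothetical sketch - the `HAVE_VIRTIO` macro and both functions are invented for illustration, not taken from any real driver:

```c
#include <assert.h>
#include <string.h>

/* Hypothetically defined by the build system when the target kernel
 * has virtio support; left undefined when building for older kernels. */
/* #define HAVE_VIRTIO */

#ifdef HAVE_VIRTIO
/* Newer kernel: attach through the virtio transport. */
static const char *transport_name(void) { return "virtio"; }
#else
/* Older kernel: fall back to the device's native PCI presentation,
 * so one driver source tree serves both kinds of kernels. */
static const char *transport_name(void) { return "native-pci"; }
#endif
```

The cost Zach describes is exactly this: every such `#ifdef` doubles the configurations to test, until old kernels age out and the fallback path can be dropped.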
From: Anthony L. <an...@co...> - 2007-09-17 21:05:06
Zachary Amsden wrote:
>> The second question is whether you're interested in virtio for things
>> that you currently don't support. Since y'all were talking about
>> switching the drivers away from the current backdoor stuff, it seemed
>> like that would be a good time to perhaps shim in virtio. Keep in
>> mind, there doesn't have to be only one virtio network driver. The
>> advantage of virtio is that if you guys port your driver over to
>> virtio, other hypervisors can potentially (easily) make use of it.
>
> Okay, that is worth noting - it would be nice to sponsor the design -
> but I'm not sure we want any extra churn or destabilization of our
> drivers. Bugs happen, and driver bugs happen a lot. Paravirt bugs
> happen a lot more as well - the complexity of the system has been
> increased. It seems like virtio unnecessarily exposes us to risk, when
> our standalone network driver module has a very much lower change
> rate. We've gone on and evolved our network driver much farther (it
> started long before virtio), and maybe the next version of it might
> fit in with virtio, but I think the vmxnet driver as-is will be pretty
> much frozen in stone in any relevant time frame. It's not worth
> rewriting the driver and increasing the maintenance burden of what
> will eventually become legacy.

Yeah, I don't expect that it's worth it to rewrite the vmxnet driver if
you're already happy with it.

>>> If we want a cleaner driver model that separates architecture and
>>> communication transports from the driver implementation, shouldn't
>>> we be doing that at a higher level - one that actually has nothing
>>> at all to do with virtualization?
>>
>> I really don't see it that way. virtio as an abstraction makes sense
>> from two perspectives. If you start with the assumption that most
>> paravirt device drivers of the same type (and roughly the same
>> function) are going to have a bunch of roughly equivalent code, then
>> virtio makes sense as a way for those various drivers to share that
>> common code.
>
> My point is, there is nothing inherently paravirt about that.

Yes, but I think it's rare to have such flexibility in the
communications protocol of devices, which is why something like virtio
hasn't been done before.

>> But you guys have to do whatever makes sense for you. virtio is
>> certainly not a solution for everything.
>
> It's much harder to take existing drivers and mold them to virtio than
> the other way around. I expect we will keep virtio in mind as we work
> on new drivers, though. I need to brush up on the PCI aspects of
> detection to make sure it would work for us - we need to selectively
> bridge individual native PCI devices to virtio drivers. If that can
> work, there is nothing to stop us from moving new devices to virtio.

Yes, I think this is important too.

> Such a thing might actually be costly, however. Virtio isn't in older
> kernels, and some new devices might be important to add to older
> kernels. Until the broad Linux base has virtio support, we might need
> to maintain drivers which work with both virtio and non-virtio
> kernels.

Or virtio has to be easily portable to older kernels. I think the rest
of us are going to have the same requirements here.

To boil this all down, my questions all came down to: if you're going to
put significant effort into getting your drivers upstream, then I don't
think it's significantly more effort to also look at integrating with
the virtio effort. If you're not going to put effort into getting vmxnet
upstream, then it wouldn't be worth it at all.

Regards,

Anthony Liguori

> Obviously the right answer is, "Use a newer kernel, knucklehead!" But
> that's a marketing cost-basis decision that I have nothing to do with.
> Fortunately, as time progresses and older distros get phased out, the
> decision becomes easier.
>
> Thanks for the feedback,
>
> Zach