From: Zachary A. <za...@vm...> - 2007-09-14 20:57:48
On Fri, 2007-09-14 at 14:49 -0500, Anthony Liguori wrote:
> Zachary Amsden wrote:
> > On Thu, 2007-09-13 at 21:54 -0500, Anthony Liguori wrote:
> >> Hey Zach,
> >>
> >> Actually, I think you're making a bad assumption here. virtio does not mandate how discovery happens. It's only interested in the transport protocol of the PV device (not the actual mechanism of transport). For KVM, the current direction is to actually implement virtio on top of a PCI device. The current implementation uses PCI for discovery but is using hypercalls mainly to avoid the fact that PCI interrupts require acking. I think this is something we may just end up biting though such that virtio devices wouldn't even use hypercalls.
> >
> > Okay, so how does virtio help at all? I guess I am missing the primary motivation for why it is a good thing.
>
> There are two questions I think. The first is whether you should invest in switching your current drivers over to virtio. This would be useful if you were looking to get something upstream as I don't think your drivers are dramatically different than what the virtio ones would look like.

I don't know what the upstream plans are. It will require some phasing to get tools in sync with our product releases and into vendors' hands.

> The second question is whether you're interested in virtio for things that you currently don't support. Since y'all were talking about switching the drivers away from the current backdoor stuff, it seemed like that would be a good time to perhaps shim in virtio. Keep in mind, there doesn't have to be only one virtio network driver. The advantage of virtio is if you guys port your driver over to virtio, other hypervisors can potentially (easily) make use of it.

Okay, that is worth noting; it would be nice to sponsor the design, but I'm not sure we want any extra churn or destabilization of our drivers. Bugs happen, and driver bugs happen a lot. Paravirt bugs happen a lot more as well - the complexity of the system has been increased. It seems like virtio unnecessarily exposes us to risk, when our standalone network driver module has a very much lower change rate.

We've gone on and evolved our network driver much farther (it started long before virtio), and maybe the next version of it might fit in with virtio, but I think the vmxnet driver as is will be pretty much frozen in stone in any relevant time frame. It's not worth rewriting the driver and increasing the maintenance burden of what will eventually become legacy.

> Rusty's current virtio block device provides SCSI pass-through so things like CD/DVD control commands work. It's more sophisticated than the Xen driver in that regard. But I thought you guys don't use a paravirt block driver? I'm much less convinced that a paravirt block device offers a lot of advantages over SCSI emulation (but it exists b/c lguest doesn't want to do any emulation).

We have an LSI Logic and a BusLogic SCSI controller. And a PIIX4 IDE controller. Very little point trying to make those work under virtio.

> > If we want a cleaner driver model that separates architecture and communication transports from the driver implementation, shouldn't we be doing that at a higher level - one that actually has nothing at all to do with virtualization?
>
> I really don't see it that way. virtio as an abstraction makes sense from two perspectives. If you start with the assumption that most paravirt device drivers of the same type (and roughly the same function) are going to have a bunch of roughly equivalent code, then virtio makes sense as a way for those various drivers to share that common code.

My point is, there is nothing inherently paravirt about that. I consider it a general device driver interface cleanup, not a paravirt-specific thing, so I think calling it virtio is somewhat misguided. Lots of device drivers of the same type and roughly the same function could share a library of roughly equivalent code. That is where I see this eventually going, and it looks like a gargantuan task of rewriting a bunch of drivers to have clean, modular interfaces with a shared device library. It's a valuable cleanup, and it might eventually reduce the overall number of bugs in the greater driver code base. But it doesn't actually accomplish anything spectacular for virtualization.

> If you start with the assumption that most VMMs end up using roughly the same sort of transport for all drivers, and that all those transports have roughly the same properties, then virtio is a very nice way to abstract that hypervisor interface so people can just target it and largely avoid having to implement the same driver for N different VMMs.

The same applies to hardware busses or architectures. It's basically modularizing a couple of driver classes to be cleaner and have transport-independent interfaces. It's like writing a nice hardware-free network driver that has a bunch of plugins to do the actual work, so the high level can focus on proper algorithms, hot-plug, discovery and all that - exactly how a well-thought-out native driver should be designed. So if virtio gets it right, don't you see actual hardware drivers using virtio starting to arrive in the future? I would argue that is a good thing; but I would also argue it's a huge time sink and will probably not converge anytime soon. That makes it hard to justify converting our legacy drivers to virtio.

> But you guys have to do whatever makes sense for you. virtio is certainly not a solution for everything.

It's much harder to take existing drivers and mold them to virtio than the other way around. I expect we will keep virtio in mind as we work on new drivers, though. I need to brush up on the PCI aspects of detection to make sure it would work for us - we need to selectively bridge individual native PCI devices to virtio drivers. If that can work, there is nothing to stop us from moving new devices to virtio.

Such a thing might actually be costly, however. Virtio isn't in older kernels, and some new devices might be important to add to older kernels. Until the broad Linux base has virtio support, we might need to maintain drivers which work with both virtio and non-virtio kernels. Obviously the right answer is, "Use a newer kernel, knucklehead!" But that's a marketing cost-basis decision that I have nothing to do with. Fortunately, as time progresses and older distros get phased out, the decision becomes easier.

Thanks for the feedback,

Zach
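The transport-independence argument above can be illustrated with a toy sketch. Everything here is hypothetical naming, not virtio's actual API: the point is just that driver logic written against an abstract transport runs unchanged over any hypervisor-specific (or hardware-specific) backend.

```python
from abc import ABC, abstractmethod


class Transport(ABC):
    """Abstract transport: the only interface the 'driver' sees."""
    @abstractmethod
    def send(self, buf: bytes) -> None: ...

    @abstractmethod
    def recv(self) -> bytes: ...


class LoopbackTransport(Transport):
    """Stand-in for a backend (ring buffer, hypercall channel, PCI BAR, ...)."""
    def __init__(self):
        self._queue = []

    def send(self, buf: bytes) -> None:
        self._queue.append(buf)

    def recv(self) -> bytes:
        return self._queue.pop(0)


class ToyNetDriver:
    """'Driver' logic written once, against the abstract transport only."""
    def __init__(self, transport: Transport):
        self.transport = transport

    def transmit(self, frame: bytes) -> None:
        self.transport.send(frame)

    def poll(self) -> bytes:
        return self.transport.recv()


driver = ToyNetDriver(LoopbackTransport())
driver.transmit(b"hello")
print(driver.poll())  # b'hello'
```

Porting such a driver to a new VMM (or, as the message speculates, to real hardware) means writing one new `Transport` subclass, not a new driver.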
From: Anthony L. <an...@co...> - 2007-09-14 19:49:20
Zachary Amsden wrote:
> On Thu, 2007-09-13 at 21:54 -0500, Anthony Liguori wrote:
>> Hey Zach,
>>
>> Actually, I think you're making a bad assumption here. virtio does not mandate how discovery happens. It's only interested in the transport protocol of the PV device (not the actual mechanism of transport). For KVM, the current direction is to actually implement virtio on top of a PCI device. The current implementation uses PCI for discovery but is using hypercalls mainly to avoid the fact that PCI interrupts require acking. I think this is something we may just end up biting though such that virtio devices wouldn't even use hypercalls.
>
> Okay, so how does virtio help at all? I guess I am missing the primary motivation for why it is a good thing.

There are two questions I think. The first is whether you should invest in switching your current drivers over to virtio. This would be useful if you were looking to get something upstream as I don't think your drivers are dramatically different than what the virtio ones would look like.

The second question is whether you're interested in virtio for things that you currently don't support. Since y'all were talking about switching the drivers away from the current backdoor stuff, it seemed like that would be a good time to perhaps shim in virtio. Keep in mind, there doesn't have to be only one virtio network driver. The advantage of virtio is if you guys port your driver over to virtio, other hypervisors can potentially (easily) make use of it.

>>> 3) Our devices are more complex than the current virtio models can support, meaning we lose functionality (CD / DVD control commands reduced to generic block device, etc.); and we also have to reimplement an ATA or SCSI-like layer underneath virtio, which is wasteful in terms of code and performance.
>
> Wouldn't we still need to add code to emulate these layers beneath virtio if that is in fact how our virtual devices interact? This is the primary reason I see virtio as being unproductive for VMware.

Rusty's current virtio block device provides SCSI pass-through so things like CD/DVD control commands work. It's more sophisticated than the Xen driver in that regard. But I thought you guys don't use a paravirt block driver? I'm much less convinced that a paravirt block device offers a lot of advantages over SCSI emulation (but it exists b/c lguest doesn't want to do any emulation).

>> I don't think you should just drop your current drivers. However, I suspect that it would be more fruitful to focus on getting the virtio drivers up to your standards instead of attempting to get your existing drivers upstream (if that is at all a goal). I think a lot of your concerns are valid for any VMM so participating in this area would be generally helpful.
>
> Looking at the problem from a high level, with the assumption of keeping our current driver models, this seems to state that the best thing to do is adapt virtio into a layer that can meet the goals of all SCSI and IDE / CD / DVD / USB devices.
>
> But the problem is that Linux already has great layers of code to do that. Why are we re-inventing the wheel and encapsulating all this beneath another layer of interface when the real interfaces we want already exist?
>
> If we want a cleaner driver model that separates architecture and communication transports from the driver implementation, shouldn't we be doing that at a higher level - one that actually has nothing at all to do with virtualization?

I really don't see it that way. virtio as an abstraction makes sense from two perspectives.

If you start with the assumption that most paravirt device drivers of the same type (and roughly the same function) are going to have a bunch of roughly equivalent code, then virtio makes sense as a way for those various drivers to share that common code.

If you start with the assumption that most VMMs end up using roughly the same sort of transport for all drivers, and that all those transports have roughly the same properties, then virtio is a very nice way to abstract that hypervisor interface so people can just target it and largely avoid having to implement the same driver for N different VMMs.

But you guys have to do whatever makes sense for you. virtio is certainly not a solution for everything.

Regards,

Anthony Liguori

> Zach
>
> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Microsoft
> Defy all challenges. Microsoft(R) Visual Studio 2005.
> http://clk.atdmt.com/MRT/go/vse0120000070mrt/direct/01/
> _______________________________________________
> open-vm-tools-devel mailing list
> ope...@li...
> https://lists.sourceforge.net/lists/listinfo/open-vm-tools-devel
From: Zachary A. <za...@vm...> - 2007-09-14 09:46:28
On Thu, 2007-09-13 at 21:54 -0500, Anthony Liguori wrote:
> Hey Zach,
>
> Actually, I think you're making a bad assumption here. virtio does not mandate how discovery happens. It's only interested in the transport protocol of the PV device (not the actual mechanism of transport). For KVM, the current direction is to actually implement virtio on top of a PCI device. The current implementation uses PCI for discovery but is using hypercalls mainly to avoid the fact that PCI interrupts require acking. I think this is something we may just end up biting though such that virtio devices wouldn't even use hypercalls.

Okay, so how does virtio help at all? I guess I am missing the primary motivation for why it is a good thing.

> > 3) Our devices are more complex than the current virtio models can support, meaning we lose functionality (CD / DVD control commands reduced to generic block device, etc.); and we also have to reimplement an ATA or SCSI-like layer underneath virtio, which is wasteful in terms of code and performance.

Wouldn't we still need to add code to emulate these layers beneath virtio if that is in fact how our virtual devices interact? This is the primary reason I see virtio as being unproductive for VMware.

> I don't think you should just drop your current drivers. However, I suspect that it would be more fruitful to focus on getting the virtio drivers up to your standards instead of attempting to get your existing drivers upstream (if that is at all a goal). I think a lot of your concerns are valid for any VMM so participating in this area would be generally helpful.

Looking at the problem from a high level, with the assumption of keeping our current driver models, this seems to state that the best thing to do is adapt virtio into a layer that can meet the goals of all SCSI and IDE / CD / DVD / USB devices.

But the problem is that Linux already has great layers of code to do that. Why are we re-inventing the wheel and encapsulating all this beneath another layer of interface when the real interfaces we want already exist?

If we want a cleaner driver model that separates architecture and communication transports from the driver implementation, shouldn't we be doing that at a higher level - one that actually has nothing at all to do with virtualization?

Zach
From: Anthony L. <ali...@us...> - 2007-09-14 02:54:51
Zachary Amsden wrote:
> Actually, virtio does not look to be very useful for VMware. We don't have truly paravirtual devices. We have devices that function almost exactly and present exactly as real hardware devices. As such, we have none of the problems that virtio intends to solve - device discovery, hotplug and enumeration are already handled by the platform for USB, SCSI, IDE, or PCI, depending on how we present the device.

Hey Zach,

Actually, I think you're making a bad assumption here. virtio does not mandate how discovery happens. It's only interested in the transport protocol of the PV device (not the actual mechanism of transport). For KVM, the current direction is to actually implement virtio on top of a PCI device. The current implementation uses PCI for discovery but is using hypercalls mainly to avoid the fact that PCI interrupts require acking. I think this is something we may just end up biting though such that virtio devices wouldn't even use hypercalls.

> 1) The likelihood of us having paravirt-only devices that present on non-hardware busses is near zero in, I would guess, the next 5 years. It seems unlikely we would ever use a non-hardware bus interface unless it was adopted as a common cross-platform standard for virtualization.

I am not interested in non-hardware busses. I think there's pretty wide agreement (with the exception of the s390 folks) that PCI is the way to go.

> 2) We don't have any new discovery / configuration / enumeration problems that are specific to virtual devices on new busses.
>
> 3) Our devices are more complex than the current virtio models can support, meaning we lose functionality (CD / DVD control commands reduced to generic block device, etc.); and we also have to reimplement an ATA or SCSI-like layer underneath virtio, which is wasteful in terms of code and performance.
>
> 4) Presenting our virtual devices differently to different kernels leads to either VM configuration problems (where the user must choose PCI vs virtio, then is stuck with potentially unusable kernels), or exposes the virtio layer to the problem of separating and cherry-picking devices out of PCI, USB, ... spaces to connect through virtio transports. So we would either complicate the virtio code, or subvert PCI and USB detection through nefarious and ugly hacks.
>
> 5) We're exposed to more code churn (at least initially) in the virtio layer than the maintenance effort we have maintaining our existing drivers, presenting greater bug risk to us.
>
> It's not that virtio is a bad thing if you have true, virtual-only busses and devices, it's just that this layer is totally unnecessary for us and it doesn't seem like anyone would benefit from us going down that route for any of our existing drivers. It might, perhaps, make sense for VHCI, but that remains to be seen.

I don't think you should just drop your current drivers. However, I suspect that it would be more fruitful to focus on getting the virtio drivers up to your standards instead of attempting to get your existing drivers upstream (if that is at all a goal). I think a lot of your concerns are valid for any VMM so participating in this area would be generally helpful.

Regards,

Anthony Liguori

> Zach
From: Zachary A. <za...@vm...> - 2007-09-13 23:41:57
> > 2) The current thinking in the kernel community is that we'll converge on virtio based paravirtual devices. I haven't looked at the VMware transports deeply enough to know yet whether they would fit nicely into the virtio model but my suspicion is that they could. Is there interest in this?

Actually, virtio does not look to be very useful for VMware. We don't have truly paravirtual devices. We have devices that function almost exactly and present exactly as real hardware devices. As such, we have none of the problems that virtio intends to solve - device discovery, hotplug and enumeration are already handled by the platform for USB, SCSI, IDE, or PCI, depending on how we present the device.

We also have devices that present much more complex abstractions than the virtio layer is prepared to deal with. If we were to write a virtio block device backend today, it would take virtio requests, convert them into SCSI SCBs, and then issue them to our virtual SCSI card. That makes no sense.

Virtio also complicates things quite a bit for us - now we have devices which present as virtio devices on virtio kernels, and as PCI devices elsewhere (perhaps they may present differently even on the same kernel, configured differently). I don't understand why we should go and rewrite our device code to use virtio because I don't see any tangible benefit for anyone in that endeavor. I see:

1) The likelihood of us having paravirt-only devices that present on non-hardware busses is near zero in, I would guess, the next 5 years. It seems unlikely we would ever use a non-hardware bus interface unless it was adopted as a common cross-platform standard for virtualization.

2) We don't have any new discovery / configuration / enumeration problems that are specific to virtual devices on new busses.

3) Our devices are more complex than the current virtio models can support, meaning we lose functionality (CD / DVD control commands reduced to generic block device, etc.); and we also have to reimplement an ATA or SCSI-like layer underneath virtio, which is wasteful in terms of code and performance.

4) Presenting our virtual devices differently to different kernels leads to either VM configuration problems (where the user must choose PCI vs virtio, then is stuck with potentially unusable kernels), or exposes the virtio layer to the problem of separating and cherry-picking devices out of PCI, USB, ... spaces to connect through virtio transports. So we would either complicate the virtio code, or subvert PCI and USB detection through nefarious and ugly hacks.

5) We're exposed to more code churn (at least initially) in the virtio layer than the maintenance effort we have maintaining our existing drivers, presenting greater bug risk to us.

It's not that virtio is a bad thing if you have true, virtual-only busses and devices, it's just that this layer is totally unnecessary for us and it doesn't seem like anyone would benefit from us going down that route for any of our existing drivers. It might, perhaps, make sense for VHCI, but that remains to be seen.

Zach
From: Anthony L. <an...@co...> - 2007-09-13 22:31:23
Ragavan S wrote:
> Hi Anthony,
>
> I wanted to get some clarification from you about something. In a previous message you said:
>
> "What would be ideal is if VMware could provide a VM-Tools ISO for Windows that contained the various PV drivers with a non-restrictive EULA. Namely, that it could be redistributed by anyone and could be used for anything. Providing source code for Windows drivers is not easy as far as I've been told so I think it's understandable if it was binary-only."
>
> What are the potential usage scenarios you are trying to address here? Are you looking to bundle these Windows binaries as is so that once you port the linux/unix drivers over to your platform, you can then bundle them all together and provide virtual machine tools for linux, unix AND windows guests for your platform? Are you planning on writing some wrappers around these binaries? Or do you have some other use cases in mind?

I want to add support to QEMU for open-vm-tools. Actually, I've already done that for the most part. I've got patches that are working and we already support vmmouse and the VMware VGA interface. We can currently use the open source version of these tools/drivers on Linux quite happily.

It would be really nice if we could use the Windows drivers too. We can just compile open-vm-tools for Windows (I haven't tried, but it looks possible). VMware has not released the source for the VGA and vmmouse drivers, though. I understand that the Windows DDK makes it difficult to release source for Windows drivers.

So what I would like to see is for VMware to make available the current binary Windows drivers for things like VMware VGA and vmmouse under a license that would allow redistribution and use in something other than VMware (namely, QEMU/KVM). They already work under QEMU; it's just that the EULA prevents them from being used legally.

The practical effect of this would be that instead of having to develop our own paravirtual graphics driver, we (QEMU/KVM) could just standardize on the VMware VGA driver. Without Windows guest support, that's not really a viable option.

> The more detail you can provide the better. It will be useful to our legal person as well (especially if the EULA needs to be modified).

I hope that's clear; please let me know if you need any more info.

> Thanks,
> Ragavan
From: Ragavan S <rag...@gm...> - 2007-09-13 22:09:01
Hi Anthony,

I wanted to get some clarification from you about something. In a previous message you said:

"What would be ideal is if VMware could provide a VM-Tools ISO for Windows that contained the various PV drivers with a non-restrictive EULA. Namely, that it could be redistributed by anyone and could be used for anything. Providing source code for Windows drivers is not easy as far as I've been told so I think it's understandable if it was binary-only."

What are the potential usage scenarios you are trying to address here? Are you looking to bundle these Windows binaries as is so that once you port the linux/unix drivers over to your platform, you can then bundle them all together and provide virtual machine tools for linux, unix AND windows guests for your platform? Are you planning on writing some wrappers around these binaries? Or do you have some other use cases in mind?

The more detail you can provide the better. It will be useful to our legal person as well (especially if the EULA needs to be modified).

Thanks,
Ragavan
From: Anthony L. <an...@co...> - 2007-09-12 03:03:33
Petr Vandrovec wrote:
> Adar Dembo wrote:
>> I'm playing around with guestd in QEMU and I've noticed that iopl/ioperm aren't used by anything before doing PIO operations.
>>
>> I figure this works in VMware b/c you guys are intercepting the backdoor io port regardless of CPL/IOPL. While this is useful for OSes like Windows that have no way to change iopl, it would be nice on Posix platforms if you did actually use iopl appropriately.
>>
>> It's a whole lot easier to just take a vmexit for PIO than it is to intercept #gp and try to decode whether it was caused by a ring 3 PIO instruction.
>
> Hello Anthony,
> there are two reasons why we allow the backdoor port to be accessed from CPL3:
>
> (1) We want to be able to access it even from non-suid applications - for example, the copy/paste daemon runs under a normal user account, and so it cannot do iopl(3).
>
> (2) When not running in hardware-assisted mode, binary translation (or simulation) has to be used for CPL3-level code, which causes a huge performance impact.
>
> If it is a problem for qemu then perhaps creating a kernel module to provide access to the backdoor is the simplest way to address the problem, and it will be compatible with all VMware products as well.

Well, long term, I'd like to move to a virtio based socket. It's hard to say how that would intersect with something like VMCI because I don't know anything about it :-)

I think in the interim, a simple root daemon that allows backdoor operations via a domain socket or something would suffice. Then non-root users can still issue backdoor operations but the daemon can still use iopl().

Regards,

Anthony Liguori

> The only restriction is that some of our backdoors are accessible to CPL0 code only already, so the driver just cannot blindly issue a backdoor call with registers it received from userspace, but that should not be a complicated thing to address.
>
> For future products we want to use VMCI, which comes with a regular kernel driver, and a userspace library which can be accessed without IOPL elevation or any tricks in the emulation (well, except that it still does not use regular I/O instructions). Unfortunately the latest released product (WS6) does not provide any interesting service over VMCI, and VMCI even did not make it into this opensource release (you can take a look at WS6, but that one is *not* GPLed).
>
> You can add 'monitor_control.restrict_backdoor = "TRUE"' to the VM's configuration file when using VMware, and then you should observe exactly the same behavior you see with QEMU - guestd and everybody else crashing.
>
> Best regards,
> Petr Vandrovec
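The interim proposal above - a privileged daemon proxying backdoor operations over a domain socket - could be sketched roughly as below. The wire format, the allow-list, and `handle_request` are all invented for illustration; a real daemon would hold `iopl(3)` privileges and issue the actual port I/O instead of the stub shown here. The magic number 0x564D5868 and port 0x5658 are the well-known backdoor constants.

```python
import socket
import struct

# Hypothetical wire format: six 32-bit register values (eax, ebx, ecx, edx, esi, edi).
REG_FMT = "<6I"
ALLOWED_COMMANDS = {1, 10}  # daemon-side allow-list of backdoor command numbers


def handle_request(payload: bytes) -> bytes:
    """Daemon side: validate the register set a client sent, then (in a
    real daemon) perform the privileged port I/O and return the results."""
    regs = list(struct.unpack(REG_FMT, payload))
    command = regs[2] & 0xFFFF  # the command traditionally travels in (e)cx
    if command not in ALLOWED_COMMANDS:
        raise PermissionError("backdoor command not allowed")
    # Stub: a real daemon would do iopl(3) + the actual IN/OUT here.
    return struct.pack(REG_FMT, *regs)


# Simulate the client/daemon exchange with a socketpair instead of a
# listening Unix domain socket.
client, daemon = socket.socketpair()
client.sendall(struct.pack(REG_FMT, 0x564D5868, 0, 10, 0x5658, 0, 0))
reply = handle_request(daemon.recv(24))
print(len(reply))  # 24
```

The point of the design is that only the daemon needs privileges, while unprivileged guest applications keep working, and the allow-list addresses Petr's note that CPL0-only backdoor commands must not be forwarded blindly.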
From: Petr V. <pe...@vm...> - 2007-09-12 02:52:44
Adar Dembo wrote:
> I'm playing around with guestd in QEMU and I've noticed that iopl/ioperm aren't used by anything before doing PIO operations.
>
> I figure this works in VMware b/c you guys are intercepting the backdoor io port regardless of CPL/IOPL. While this is useful for OSes like Windows that have no way to change iopl, it would be nice on Posix platforms if you did actually use iopl appropriately.
>
> It's a whole lot easier to just take a vmexit for PIO than it is to intercept #gp and try to decode whether it was caused by a ring 3 PIO instruction.

Hello Anthony,
there are two reasons why we allow the backdoor port to be accessed from CPL3:

(1) We want to be able to access it even from non-suid applications - for example, the copy/paste daemon runs under a normal user account, and so it cannot do iopl(3).

(2) When not running in hardware-assisted mode, binary translation (or simulation) has to be used for CPL3-level code, which causes a huge performance impact.

If it is a problem for qemu then perhaps creating a kernel module to provide access to the backdoor is the simplest way to address the problem, and it will be compatible with all VMware products as well. The only restriction is that some of our backdoors are accessible to CPL0 code only already, so the driver just cannot blindly issue a backdoor call with registers it received from userspace, but that should not be a complicated thing to address.

For future products we want to use VMCI, which comes with a regular kernel driver, and a userspace library which can be accessed without IOPL elevation or any tricks in the emulation (well, except that it still does not use regular I/O instructions). Unfortunately the latest released product (WS6) does not provide any interesting service over VMCI, and VMCI even did not make it into this opensource release (you can take a look at WS6, but that one is *not* GPLed).

You can add 'monitor_control.restrict_backdoor = "TRUE"' to the VM's configuration file when using VMware, and then you should observe exactly the same behavior you see with QEMU - guestd and everybody else crashing.

Best regards,
Petr Vandrovec
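For readers unfamiliar with the backdoor being discussed: the classic call uses a well-known register convention - the magic number 0x564D5868 ('VMXh') in EAX, the command number in (E)CX, and an IN instruction on port 0x5658. This sketch only models that register layout; actually issuing the port I/O requires running inside a VMware guest (and, per the discussion above, either CPL0 or the CPL3 exception VMware grants this port).

```python
# Well-known VMware backdoor constants (documented in open-vm-tools).
BACKDOOR_MAGIC = 0x564D5868   # ASCII 'VMXh'
BACKDOOR_PORT = 0x5658        # ASCII 'VX'
CMD_GETVERSION = 10           # the "get version" backdoor command


def build_backdoor_regs(command: int, param: int = 0) -> dict:
    """Register state a guest would load before executing IN on the port."""
    return {
        "eax": BACKDOOR_MAGIC,  # magic value identifying a backdoor call
        "ebx": param,           # command-specific parameter
        "ecx": command,         # which backdoor service to invoke
        "edx": BACKDOOR_PORT,   # the magic I/O port number
    }


regs = build_backdoor_regs(CMD_GETVERSION)
print(hex(regs["eax"]))  # 0x564d5868
```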
From: Anthony L. <an...@co...> - 2007-09-12 02:51:50
|
Adar Dembo wrote: >> If my understanding of vmblock is correct, I strongly doubt >> it would be >> well received on LKML which is why I ask about alternatives :-) >> > > Our DnD/FCP implementations are gtk-based and rely on a physical file copy > from one execution environment (host or guest) and into another, usually into > a staging directory. Because you can copy a file of arbitrary size, the > operation can take a long time. Unfortunately for us, gtk will abort DnD > operations if too much time elapses (hard-coded via DROP_ABORT_TIME, I > think). > > This means we need to complete the DnD operation quickly, then somehow block > the target application so that it doesn't try and access the file that was > dropped until the transfer is done. I don't think advisory locks would work > here, because there's no guarantee that the target application will obey > them. Indeed, I doubt any applications will check for an advisory lock prior > to accessing a dropped file, though I have no evidence to support this claim. > Ah, this makes a lot more sense! Thanks for the explanation. > So, one way of solving this problem is via a filesystem like vmblock. Another > way might be by giving the target application an HTTP-based URI, then run an > HTTP server in vmware-user and hope that the target app won't mind when we > block it on a socket read while we complete the file transfer. We've > discussed this approach a bit, but we're worried that not all apps will > understand HTTP-based URIs. > > I think we could accomplish the same goal using a stackable filesystem or > using FUSE, but we wanted to fix this for older 2.4 kernels too, and support > for FUSE and stackable filesystems isn't very good in such kernels. > > Note that vmblock is also used by the Linux Workstation UI on the host for > the same purpose. > Very interesting problem, I'll look and see if I can't find a clever solution. 
Regards,

Anthony Liguori

> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Microsoft
> Defy all challenges. Microsoft(R) Visual Studio 2005.
> http://clk.atdmt.com/MRT/go/vse0120000070mrt/direct/01/
> _______________________________________________
> open-vm-tools-devel mailing list
> ope...@li...
> https://lists.sourceforge.net/lists/listinfo/open-vm-tools-devel
|
From: Anthony L. <an...@co...> - 2007-09-12 02:48:28
|
Adar Dembo wrote:
> Most of the people behind this project are busy with VMworld this week,
> but I'll try to answer your questions as best I can.

No worries, I'm not surprised ya'll are busy :-)

>> 2) The current thinking in the kernel community is that we'll converge
>> on virtio based paravirtual devices. I haven't looked at the VMware
>> transports deeply enough to know yet whether they would fit nicely into
>> the virtio model but my suspicion is that they could. Is there interest
>> in this?
>
> I think there's definitely interest in using virtio, though I'm not
> familiar enough with virtio (or with some of our virtual devices) to
> really judge whether it'll be easy or not. Here's a brief description of
> the VMware transports I am familiar with:
>
> The majority of the Tools currently rely on a single transport mechanism
> which is (somewhat inappropriately) called the 'Tools backdoor'. It uses
> 'out' (4 bytes) or 'rep out' (up to a page, I think) to send data to the
> VMM and is callable at any CPL.

It's inl/ins actually.

> We've built 'GuestMsg' on top of the backdoor, which can exchange
> arbitrarily-long data with the VMM, and then 'GuestRpc' on top of
> GuestMsg, which provides a very primitive string-based RPC mechanism.
> The backdoor itself isn't interrupt driven, so apps must poll for receipt
> of messages (GuestRpc provides some facilities to do that).
>
> The backdoor-based transport is quite old and inflexible, so we built
> VMCI (Virtual Machine Communication Interface) as a replacement. I'm not
> very familiar with it, but I do know that it's a PCI device that exports
> hypercalls to the host, guestcalls back to the guest, and higher level
> APIs (such as shared memory and datagrams). We haven't yet transitioned
> any of the Tools to run on VMCI, because the VMCI APIs (and the design,
> to an extent) are still under development, which is, incidentally, also
> why it was marked 'experimental' in Workstation 6 and why the guest
> driver isn't yet open sourced.

The backdoor stuff is pretty gnarly since it relies on something that isn't available on other architectures (PIO in userspace). One thing I'm interested in is how portable all this stuff is to non-x86 architectures. A PCI device with something exposed to userspace (preferably as a socket) would definitely be best.

> As I said above, all of the Tools components use the backdoor or one of
> its higher layers. The exception is vmxnet, which is also implemented as
> a paravirtualized PCI device but, once again, I don't know enough about
> it to be of much use.
>
> I'll try and get you some more concrete information about integrating
> with virtio (or at least put you in touch with the appropriate
> developers).
>
>> 3) The last time I checked, the EULA around the VMTools for other
>> platforms (namely, Windows), prevented the use of the drivers in other
>> VMMs. That limits the utility of implementing these interfaces in other
>> VMMs. Are there any plans to adjust the EULA for other platforms?
>
> I don't think this should be an issue, because we manage the EULAs on a
> per-product basis. So the EULA that applies to, say, Workstation 6,
> should not apply to this release (or the drivers therein) at all.
>
> I'll double check with our legal team and get back to you on this.

What would be ideal is if VMware could provide a VM-Tools ISO for Windows that contained the various PV drivers with a non-restrictive EULA. Namely, that it could be redistributed by anyone and could be used for anything. Providing source code for Windows drivers is not easy as far as I've been told so I think it's understandable if it was binary-only.
Regards,

Anthony Liguori |
From: Adar D. <ad...@vm...> - 2007-09-12 02:25:21
|
> I'm going through the vm-tools code now and I think I'm starting to get
> my head around them. The one bit that still puzzles me is why the
> "vmblock" driver is needed. I see that it is only used for DnD and
> copy/paste of a file.
>
> As far as I can tell, vmblock implements a mandatory file locking that
> is guaranteed to work on any filesystem. I can understand why one might
> want to do this in general but I can't seem to figure out a reason why
> DnD or copy/paste is special here versus any other application.
>
> Am I correct in assuming that this could be changed to doing posix
> advisory locking and for filesystems that don't support it, just live
> with the potential race conditions?
>
> If my understanding of vmblock is correct, I strongly doubt it would be
> well received on LKML which is why I ask about alternatives :-)

Our DnD/FCP implementations are gtk-based and rely on a physical file copy from one execution environment (host or guest) and into another, usually into a staging directory. Because you can copy a file of arbitrary size, the operation can take a long time. Unfortunately for us, gtk will abort DnD operations if too much time elapses (hard-coded via DROP_ABORT_TIME, I think).

This means we need to complete the DnD operation quickly, then somehow block the target application so that it doesn't try and access the file that was dropped until the transfer is done. I don't think advisory locks would work here, because there's no guarantee that the target application will obey them. Indeed, I doubt any applications will check for an advisory lock prior to accessing a dropped file, though I have no evidence to support this claim.

So, one way of solving this problem is via a filesystem like vmblock. Another way might be by giving the target application an HTTP-based URI, then run an HTTP server in vmware-user and hope that the target app won't mind when we block it on a socket read while we complete the file transfer. We've discussed this approach a bit, but we're worried that not all apps will understand HTTP-based URIs.

I think we could accomplish the same goal using a stackable filesystem or using FUSE, but we wanted to fix this for older 2.4 kernels too, and support for FUSE and stackable filesystems isn't very good in such kernels.

Note that vmblock is also used by the Linux Workstation UI on the host for the same purpose. |
From: Adar D. <ad...@vm...> - 2007-09-12 01:48:10
|
Most of the people behind this project are busy with VMworld this week, but I'll try to answer your questions as best I can.

> First, let me start by thanking you all for publishing these. We
> currently implement vmmouse and vmware vga emulation in QEMU and I'd
> love to do more now that more guest drivers are available.

That's awesome, we're definitely looking forward to porting the drivers (and other components, if they'd prove useful) to other hypervisors.

> I have three questions:
>
> 1) A lot of the kernel code appears to not be written for upstream
> inclusion. Are there plans to work on getting the drivers more into
> shape so they could realistically be merged into Linux?

Yes, that's definitely the plan, but as you said, a lot of work needs to be done. The fairly simple but tedious tasks:

1) The VMware coding style is much different than the Linux kernel coding style. Maybe we can automatically convert this.
2) There's a ton of code in the drivers to handle older kernels that isn't necessary upstream.

Then there may be some more difficult design issues (like the one you raised about vmblock in a later e-mail) that need to be looked at.

If you feel like taking this on, we'd definitely appreciate the patches...

> 2) The current thinking in the kernel community is that we'll converge
> on virtio based paravirtual devices. I haven't looked at the VMware
> transports deeply enough to know yet whether they would fit nicely into
> the virtio model but my suspicion is that they could. Is there interest
> in this?

I think there's definitely interest in using virtio, though I'm not familiar enough with virtio (or with some of our virtual devices) to really judge whether it'll be easy or not. Here's a brief description of the VMware transports I am familiar with:

The majority of the Tools currently rely on a single transport mechanism which is (somewhat inappropriately) called the 'Tools backdoor'. It uses 'out' (4 bytes) or 'rep out' (up to a page, I think) to send data to the VMM and is callable at any CPL. We've built 'GuestMsg' on top of the backdoor, which can exchange arbitrarily-long data with the VMM, and then 'GuestRpc' on top of GuestMsg, which provides a very primitive string-based RPC mechanism. The backdoor itself isn't interrupt driven, so apps must poll for receipt of messages (GuestRpc provides some facilities to do that).

The backdoor-based transport is quite old and inflexible, so we built VMCI (Virtual Machine Communication Interface) as a replacement. I'm not very familiar with it, but I do know that it's a PCI device that exports hypercalls to the host, guestcalls back to the guest, and higher level APIs (such as shared memory and datagrams). We haven't yet transitioned any of the Tools to run on VMCI, because the VMCI APIs (and the design, to an extent) are still under development, which is, incidentally, also why it was marked 'experimental' in Workstation 6 and why the guest driver isn't yet open sourced.

As I said above, all of the Tools components use the backdoor or one of its higher layers. The exception is vmxnet, which is also implemented as a paravirtualized PCI device but, once again, I don't know enough about it to be of much use.

I'll try and get you some more concrete information about integrating with virtio (or at least put you in touch with the appropriate developers).

> 3) The last time I checked, the EULA around the VMTools for other
> platforms (namely, Windows), prevented the use of the drivers in other
> VMMs. That limits the utility of implementing these interfaces in other
> VMMs. Are there any plans to adjust the EULA for other platforms?

I don't think this should be an issue, because we manage the EULAs on a per-product basis. So the EULA that applies to, say, Workstation 6, should not apply to this release (or the drivers therein) at all.

I'll double check with our legal team and get back to you on this. |
From: Anthony L. <an...@co...> - 2007-09-11 22:52:34
|
I'm playing around with guestd in QEMU and I've noticed that iopl/ioperm aren't used by anything before doing PIO operations. I figure this works in VMware b/c you guys are intercepting the backdoor I/O port regardless of CPL/IOPL.

While this is useful for OSes like Windows that have no way to change iopl, it would be nice on Posix platforms if you did actually use iopl appropriately. It's a whole lot easier to just take a vmexit for PIO than it is to intercept #gp and try to decode whether it was caused by a ring 3 PIO instruction.

Regards,

Anthony Liguori |
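[Editor's note: for readers unfamiliar with the mechanism under discussion, the userspace PIO sequence looks roughly like the C sketch below. The magic value (0x564D5868, ASCII "VMXh") and port number (0x5658) are the widely documented backdoor constants; the command number and the iopl-based privilege bump are illustrative of what Anthony is suggesting, not VMware's actual tools code.]

```c
#include <stdint.h>
#include <sys/io.h>   /* iopl(); Linux/x86 only */

#define BDOOR_MAGIC 0x564D5868u   /* ASCII "VMXh" */
#define BDOOR_PORT  0x5658u       /* ASCII "VX"   */

uint32_t bdoor_magic(void) { return BDOOR_MAGIC; }
uint32_t bdoor_port(void)  { return BDOOR_PORT; }

/*
 * One backdoor call: load the magic into EAX, the command into ECX and
 * the port into EDX, then execute `in`.  Outside a VMware VM this is
 * just an ordinary port read, which #GPs at ring 3 unless the caller
 * first raised its I/O privilege level -- hence the iopl(3) call that
 * Anthony wants guestd to make on Posix hosts.
 */
int bdoor_call(uint32_t cmd, uint32_t *eax_out)
{
    if (iopl(3) < 0)              /* needs CAP_SYS_RAWIO; fails for plain users */
        return -1;

    uint32_t eax = BDOOR_MAGIC, ebx = ~0u, ecx = cmd, edx = BDOOR_PORT;
    __asm__ __volatile__("inl %%dx, %%eax"
                         : "+a"(eax), "+b"(ebx), "+c"(ecx), "+d"(edx));
    *eax_out = eax;
    return 0;
}
```

With iopl raised, the hypervisor sees a plain vmexit for the port access, which is exactly the "easier than intercepting #gp" path described above.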
From: Anthony L. <an...@co...> - 2007-09-11 20:43:27
|
Hi,

I'm going through the vm-tools code now and I think I'm starting to get my head around them. The one bit that still puzzles me is why the "vmblock" driver is needed. I see that it is only used for DnD and copy/paste of a file.

As far as I can tell, vmblock implements mandatory file locking that is guaranteed to work on any filesystem. I can understand why one might want to do this in general but I can't seem to figure out a reason why DnD or copy/paste is special here versus any other application.

Am I correct in assuming that this could be changed to doing posix advisory locking and for filesystems that don't support it, just live with the potential race conditions?

If my understanding of vmblock is correct, I strongly doubt it would be well received on LKML which is why I ask about alternatives :-)

Regards,

Anthony Liguori |
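[Editor's note: the "advisory" in POSIX advisory locking is the crux of this thread. A lock taken with fcntl(F_SETLK) only constrains processes that explicitly contend for or query the lock; a reader that never calls fcntl is unaffected, which is the race vmblock exists to close. A minimal self-contained sketch, with an invented demo path:]

```c
#include <fcntl.h>
#include <unistd.h>

/*
 * Take an exclusive advisory write lock on `path`, then read it back
 * through a second, lock-unaware descriptor.  The read succeeds anyway:
 * advisory locks only affect processes that ask about them, so they
 * can't protect a half-copied DnD file from an arbitrary application.
 * Returns the number of bytes the "unaware" reader saw, or -1 on error.
 */
ssize_t read_despite_lock(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    if (write(fd, "partial", 7) != 7) {
        close(fd);
        return -1;
    }

    /* Lock the whole file (l_start = 0, l_len = 0). */
    struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
    if (fcntl(fd, F_SETLK, &fl) < 0) {
        close(fd);
        return -1;
    }

    int fd2 = open(path, O_RDONLY);   /* a reader that never checks locks */
    char buf[16];
    ssize_t n = read(fd2, buf, sizeof buf);

    close(fd2);
    close(fd);
    unlink(path);
    return n;   /* 7: the write lock did not block the read */
}
```

(Within a single process the two descriptors would never conflict anyway; the point is that even across processes, a reader that doesn't itself call fcntl sails straight through the lock, as Adar's reply later in the thread confirms.)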
From: Anthony L. <an...@co...> - 2007-09-11 18:51:52
|
First, let me start by thanking you all for publishing these. We currently implement vmmouse and vmware vga emulation in QEMU and I'd love to do more now that more guest drivers are available.

I have three questions:

1) A lot of the kernel code appears to not be written for upstream inclusion. Are there plans to work on getting the drivers more into shape so they could realistically be merged into Linux?

2) The current thinking in the kernel community is that we'll converge on virtio based paravirtual devices. I haven't looked at the VMware transports deeply enough to know yet whether they would fit nicely into the virtio model but my suspicion is that they could. Is there interest in this?

3) The last time I checked, the EULA around the VMTools for other platforms (namely, Windows), prevented the use of the drivers in other VMMs. That limits the utility of implementing these interfaces in other VMMs. Are there any plans to adjust the EULA for other platforms?

Regards,

Anthony Liguori |
From: SourceForge.net <no...@so...> - 2007-09-10 23:44:17
|
Tracker item #1789763, was opened at 2007-09-06 22:53
Message generated for change (Settings changed) made by sopwith
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=989708&aid=1789763&group_id=204462

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: misc
Group: None
>Status: Deleted
Resolution: None
Priority: 5
Private: No
Submitted By: ECL (sopwith)
Assigned to: Nobody/Anonymous (nobody)
Summary: testing

Initial Comment:
This is a test bug to see how to configure e-mail things...

----------------------------------------------------------------------

Comment By: ECL (sopwith)
Date: 2007-09-07 18:08
Message: Logged In: YES user_id=18318 Originator: YES
Ahah, I bet it will work now.

----------------------------------------------------------------------

Comment By: ECL (sopwith)
Date: 2007-09-07 18:05
Message: Logged In: YES user_id=18318 Originator: YES
Testing again, with the correct noreply address.

----------------------------------------------------------------------

Comment By: ECL (sopwith)
Date: 2007-09-07 18:02
Message: Logged In: YES user_id=18318 Originator: YES
And yet another test with noreply not being a member.

----------------------------------------------------------------------

Comment By: ECL (sopwith)
Date: 2007-09-07 18:01
Message: Logged In: YES user_id=18318 Originator: YES
Another test.

----------------------------------------------------------------------

Comment By: ECL (sopwith)
Date: 2007-09-07 17:56
Message: Logged In: YES user_id=18318 Originator: YES
Testing e-mail connections.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=989708&aid=1789763&group_id=204462 |