* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
[not found] <20090803171030.17268.26962.stgit@dev.haskins.net>
@ 2009-08-06 8:19 ` Michael S. Tsirkin
2009-08-06 10:17 ` Michael S. Tsirkin
` (2 more replies)
0 siblings, 3 replies; 27+ messages in thread
From: Michael S. Tsirkin @ 2009-08-06 8:19 UTC (permalink / raw)
To: Gregory Haskins; +Cc: linux-kernel, alacrityvm-devel, netdev, kvm
On Mon, Aug 03, 2009 at 01:17:30PM -0400, Gregory Haskins wrote:
> (Applies to v2.6.31-rc5, proposed for linux-next after review is complete)
These are guest drivers, right? Merging the guest first means relying on
a kernel interface from an out-of-tree driver, which might well change
before it goes in. Would it make more sense to start merging with the
host side of the project?
> This series implements the guest-side drivers for accelerated IO
> when running on top of the AlacrityVM hypervisor, the details of
> which you can find here:
>
> http://developer.novell.com/wiki/index.php/AlacrityVM
Since AlacrityVM is kvm based, Cc kvm@vger.kernel.org.
> This series includes the basic plumbing, as well as the driver for
> accelerated 802.x (ethernet) networking.
The graphs comparing virtio with vbus look interesting.
However, they do not compare apples to apples, do they?
These compare userspace virtio with kernel vbus, where for an
apples-to-apples comparison one would need to compare
kernel virtio with kernel vbus. Right?
> Regards,
> -Greg
>
> ---
>
> Gregory Haskins (7):
> venet: add scatter-gather/GSO support
> net: Add vbus_enet driver
> ioq: add driver-side vbus helpers
> vbus-proxy: add a pci-to-vbus bridge
> vbus: add a "vbus-proxy" bus model for vbus_driver objects
> ioq: Add basic definitions for a shared-memory, lockless queue
> shm-signal: shared-memory signals
>
>
> arch/x86/Kconfig | 2
> drivers/Makefile | 1
> drivers/net/Kconfig | 14 +
> drivers/net/Makefile | 1
> drivers/net/vbus-enet.c | 899 +++++++++++++++++++++++++++++++++++++++++++
> drivers/vbus/Kconfig | 24 +
> drivers/vbus/Makefile | 6
> drivers/vbus/bus-proxy.c | 216 ++++++++++
> drivers/vbus/pci-bridge.c | 824 +++++++++++++++++++++++++++++++++++++++
> include/linux/Kbuild | 4
> include/linux/ioq.h | 415 ++++++++++++++++++++
> include/linux/shm_signal.h | 189 +++++++++
> include/linux/vbus_driver.h | 80 ++++
> include/linux/vbus_pci.h | 127 ++++++
> include/linux/venet.h | 84 ++++
> lib/Kconfig | 21 +
> lib/Makefile | 2
> lib/ioq.c | 294 ++++++++++++++
> lib/shm_signal.c | 192 +++++++++
> 19 files changed, 3395 insertions(+), 0 deletions(-)
> create mode 100644 drivers/net/vbus-enet.c
> create mode 100644 drivers/vbus/Kconfig
> create mode 100644 drivers/vbus/Makefile
> create mode 100644 drivers/vbus/bus-proxy.c
> create mode 100644 drivers/vbus/pci-bridge.c
> create mode 100644 include/linux/ioq.h
> create mode 100644 include/linux/shm_signal.h
> create mode 100644 include/linux/vbus_driver.h
> create mode 100644 include/linux/vbus_pci.h
> create mode 100644 include/linux/venet.h
> create mode 100644 lib/ioq.c
> create mode 100644 lib/shm_signal.c
>
> --
> Signature
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 8:19 ` [PATCH 0/7] AlacrityVM guest drivers Reply-To: Michael S. Tsirkin
@ 2009-08-06 10:17 ` Michael S. Tsirkin
2009-08-06 12:09 ` Gregory Haskins
2009-08-06 12:08 ` Gregory Haskins
2009-08-07 14:19 ` Anthony Liguori
2 siblings, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2009-08-06 10:17 UTC (permalink / raw)
To: Gregory Haskins; +Cc: linux-kernel, alacrityvm-devel, netdev, kvm
On Thu, Aug 06, 2009 at 11:19:56AM +0300, Michael S. Tsirkin wrote:
> On Mon, Aug 03, 2009 at 01:17:30PM -0400, Gregory Haskins wrote:
> > (Applies to v2.6.31-rc5, proposed for linux-next after review is complete)
>
> These are guest drivers, right? Merging the guest first means relying on
> kernel interface from an out of tree driver, which well might change
> before it goes in. Would it make more sense to start merging with the
> host side of the project?
>
> > This series implements the guest-side drivers for accelerated IO
> > when running on top of the AlacrityVM hypervisor, the details of
> > which you can find here:
> >
> > http://developer.novell.com/wiki/index.php/AlacrityVM
>
> Since AlacrityVM is kvm based, Cc kvm@vger.kernel.org.
>
> > This series includes the basic plumbing, as well as the driver for
> > accelerated 802.x (ethernet) networking.
>
> The graphs comparing virtio with vbus look interesting.
> However, they do not compare apples to apples, do they?
> These compare userspace virtio with kernel vbus, where for
> apples to apples comparison one would need to compare
> kernel virtio with kernel vbus. Right?
Or userspace virtio with userspace vbus.
> > Regards,
> > -Greg
> >
> > ---
> >
> > Gregory Haskins (7):
> > venet: add scatter-gather/GSO support
> > net: Add vbus_enet driver
> > ioq: add driver-side vbus helpers
> > vbus-proxy: add a pci-to-vbus bridge
> > vbus: add a "vbus-proxy" bus model for vbus_driver objects
> > ioq: Add basic definitions for a shared-memory, lockless queue
> > shm-signal: shared-memory signals
> >
> >
> > arch/x86/Kconfig | 2
> > drivers/Makefile | 1
> > drivers/net/Kconfig | 14 +
> > drivers/net/Makefile | 1
> > drivers/net/vbus-enet.c | 899 +++++++++++++++++++++++++++++++++++++++++++
> > drivers/vbus/Kconfig | 24 +
> > drivers/vbus/Makefile | 6
> > drivers/vbus/bus-proxy.c | 216 ++++++++++
> > drivers/vbus/pci-bridge.c | 824 +++++++++++++++++++++++++++++++++++++++
> > include/linux/Kbuild | 4
> > include/linux/ioq.h | 415 ++++++++++++++++++++
> > include/linux/shm_signal.h | 189 +++++++++
> > include/linux/vbus_driver.h | 80 ++++
> > include/linux/vbus_pci.h | 127 ++++++
> > include/linux/venet.h | 84 ++++
> > lib/Kconfig | 21 +
> > lib/Makefile | 2
> > lib/ioq.c | 294 ++++++++++++++
> > lib/shm_signal.c | 192 +++++++++
> > 19 files changed, 3395 insertions(+), 0 deletions(-)
> > create mode 100644 drivers/net/vbus-enet.c
> > create mode 100644 drivers/vbus/Kconfig
> > create mode 100644 drivers/vbus/Makefile
> > create mode 100644 drivers/vbus/bus-proxy.c
> > create mode 100644 drivers/vbus/pci-bridge.c
> > create mode 100644 include/linux/ioq.h
> > create mode 100644 include/linux/shm_signal.h
> > create mode 100644 include/linux/vbus_driver.h
> > create mode 100644 include/linux/vbus_pci.h
> > create mode 100644 include/linux/venet.h
> > create mode 100644 lib/ioq.c
> > create mode 100644 lib/shm_signal.c
> >
> > --
> > Signature
> > --
> > To unsubscribe from this list: send the line "unsubscribe netdev" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 8:19 ` [PATCH 0/7] AlacrityVM guest drivers Reply-To: Michael S. Tsirkin
2009-08-06 10:17 ` Michael S. Tsirkin
@ 2009-08-06 12:08 ` Gregory Haskins
2009-08-06 12:24 ` Michael S. Tsirkin
2009-08-06 12:54 ` Avi Kivity
2009-08-07 14:19 ` Anthony Liguori
2 siblings, 2 replies; 27+ messages in thread
From: Gregory Haskins @ 2009-08-06 12:08 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: alacrityvm-devel, kvm, linux-kernel, netdev
Hi Michael,
>>> On 8/6/2009 at 4:19 AM, in message <20090806081955.GA9752@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Mon, Aug 03, 2009 at 01:17:30PM -0400, Gregory Haskins wrote:
>> (Applies to v2.6.31-rc5, proposed for linux-next after review is complete)
>
> These are guest drivers, right?
Yep.
> Merging the guest first means relying on
> kernel interface from an out of tree driver, which well might change
> before it goes in.
ABI compatibility is already addressed/handled, so even if that is true, it's not a problem.
> Would it make more sense to start merging with the host side of the project?
Not necessarily, no. These are drivers for a "device", so it's no different from merging any other driver, really. This is especially true since the hypervisor is also already published and freely available today, so anyone can start using it.
>
>> This series implements the guest-side drivers for accelerated IO
>> when running on top of the AlacrityVM hypervisor, the details of
>> which you can find here:
>>
>> http://developer.novell.com/wiki/index.php/AlacrityVM
>
> Since AlacrityVM is kvm based, Cc kvm@vger.kernel.org.
I *can* do that, but there is nothing in these drivers that is KVM-specific (it's all pure PCI and VBUS). I've already made the general announcement about the project/ml cross-posted to KVM for anyone that might be interested, but I figure I will spare the general KVM list the details unless something specifically pertains to, or affects, KVM. For instance, when I get to pushing the hypervisor side, I still need to work on getting that 'xinterface' patch to you guys. I would certainly be CC'ing kvm@vger when that happens since it modifies the KVM code.
So instead, I would just encourage anyone interested (such as yourself) to join the alacrity list so I don't bother the KVM community unless absolutely necessary.
>
>> This series includes the basic plumbing, as well as the driver for
>> accelerated 802.x (ethernet) networking.
>
> The graphs comparing virtio with vbus look interesting.
> However, they do not compare apples to apples, do they?
Yes, I believe they do. They represent the best that KVM has to offer (to my knowledge) vs the best that alacrityvm has to offer.
> These compare userspace virtio with kernel vbus,
vbus is a device model (akin to QEMU's device model). Technically, it was a comparison of userspace virtio-net (via QEMU) to kernel venet (via vbus),
which I again stress is the state of the art for both to my knowledge.
As I have explained before in earlier threads on kvm@vger, virtio is not mutually exclusive here. You can run the virtio protocol over the vbus model if someone were so inclined. In fact, I proposed this very idea to you a month or two ago but I believe you decided to go your own way and reinvent some other in-kernel model instead for your own reasons.
>where for apples to apples comparison one would need to compare
> kernel virtio with kernel vbus. Right?
Again, it already *is* apples to apples as far as I am concerned.
At the time I ran those numbers, there was certainly no in-kernel virtio model to play with. And to my knowledge, there isn't one now (I was never CC'd on the patches, and a cursory search of the KVM list isn't revealing one that was posted recently).
To reiterate: kernel virtio-net (using ??) to kernel venet (vbus-based) to kernel virtio-net (vbus, but doesn't exist yet) would be a fun bakeoff. If you have something for the kernel virtio-net, point me at it and I will try to include it in the comparison next time.
Kind Regards,
-Greg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 10:17 ` Michael S. Tsirkin
@ 2009-08-06 12:09 ` Gregory Haskins
0 siblings, 0 replies; 27+ messages in thread
From: Gregory Haskins @ 2009-08-06 12:09 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: alacrityvm-devel, kvm, linux-kernel, netdev
>>> On 8/6/2009 at 6:17 AM, in message <20090806101702.GA10605@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Aug 06, 2009 at 11:19:56AM +0300, Michael S. Tsirkin wrote:
>> On Mon, Aug 03, 2009 at 01:17:30PM -0400, Gregory Haskins wrote:
>> > (Applies to v2.6.31-rc5, proposed for linux-next after review is complete)
>>
>> These are guest drivers, right? Merging the guest first means relying on
>> kernel interface from an out of tree driver, which well might change
>> before it goes in. Would it make more sense to start merging with the
>> host side of the project?
>>
>> > This series implements the guest-side drivers for accelerated IO
>> > when running on top of the AlacrityVM hypervisor, the details of
>> > which you can find here:
>> >
>> > http://developer.novell.com/wiki/index.php/AlacrityVM
>>
>> Since AlacrityVM is kvm based, Cc kvm@vger.kernel.org.
>>
>> > This series includes the basic plumbing, as well as the driver for
>> > accelerated 802.x (ethernet) networking.
>>
>> The graphs comparing virtio with vbus look interesting.
>> However, they do not compare apples to apples, do they?
>> These compare userspace virtio with kernel vbus, where for
>> apples to apples comparison one would need to compare
>> kernel virtio with kernel vbus. Right?
>
> Or userspace virtio with userspace vbus.
Note: That would be pointless.
-Greg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 12:08 ` Gregory Haskins
@ 2009-08-06 12:24 ` Michael S. Tsirkin
2009-08-06 13:00 ` Gregory Haskins
2009-08-06 12:54 ` Avi Kivity
1 sibling, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2009-08-06 12:24 UTC (permalink / raw)
To: Gregory Haskins; +Cc: alacrityvm-devel, kvm, linux-kernel, netdev
On Thu, Aug 06, 2009 at 06:08:27AM -0600, Gregory Haskins wrote:
> Hi Michael,
>
> >>> On 8/6/2009 at 4:19 AM, in message <20090806081955.GA9752@redhat.com>,
> "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Mon, Aug 03, 2009 at 01:17:30PM -0400, Gregory Haskins wrote:
> >> (Applies to v2.6.31-rc5, proposed for linux-next after review is complete)
> >
> > These are guest drivers, right?
>
> Yep.
>
> > Merging the guest first means relying on
> > kernel interface from an out of tree driver, which well might change
> > before it goes in.
>
> ABI compatibility is already addressed/handled, so even if that is true its not a problem.
It is? With versioning? Presumably this:
+ params.devid = vdev->id;
+ params.version = version;
+
+ ret = vbus_pci_hypercall(VBUS_PCI_HC_DEVOPEN,
+ &params, sizeof(params));
+ if (ret < 0)
+ return ret;
Even assuming the host knows how to decode this structure (e.g. some
other host module doesn't use VBUS_PCI_HC_DEVOPEN), checks the version,
and denies older guests, this might keep the guest from crashing, but the
guest still won't work.
> > Would it make more sense to start merging with the host side of the project?
>
> Not necessarily, no. These are drivers for a "device", so its no
> different than merging any other driver really. This is especially
> true since the hypervisor is also already published and freely
> available today, so anyone can start using it.
The difference is clear to me: devices do not get to set kernel/userspace
interfaces. This "device" depends on a specific interface between
kernel and (guest) userspace.
--
MST
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 12:08 ` Gregory Haskins
2009-08-06 12:24 ` Michael S. Tsirkin
@ 2009-08-06 12:54 ` Avi Kivity
2009-08-06 13:03 ` Gregory Haskins
1 sibling, 1 reply; 27+ messages in thread
From: Avi Kivity @ 2009-08-06 12:54 UTC (permalink / raw)
To: Gregory Haskins
Cc: Michael S. Tsirkin, alacrityvm-devel, kvm, linux-kernel, netdev
On 08/06/2009 03:08 PM, Gregory Haskins wrote:
>> Merging the guest first means relying on
>> kernel interface from an out of tree driver, which well might change
>> before it goes in.
>>
>
> ABI compatibility is already addressed/handled, so even if that is true its not a problem.
>
>
Really the correct way to address the ABI is to publish a spec and write
both host and guest drivers to that. Unfortunately we didn't do this
with virtio.
It becomes more important when you have multiple implementations (e.g.
Windows drivers).
>>> This series implements the guest-side drivers for accelerated IO
>>> when running on top of the AlacrityVM hypervisor, the details of
>>> which you can find here:
>>>
>>> http://developer.novell.com/wiki/index.php/AlacrityVM
>>>
>> Since AlacrityVM is kvm based, Cc kvm@vger.kernel.org.
>>
>
> I *can* do that, but there is nothing in these drivers that is KVM specific (its all pure PCI and VBUS). I've already made the general announcement about the project/ml cross posted to KVM for anyone that might be interested, but I figure I will spare the general KVM list the details unless something specifically pertains to, or affects, KVM. For instance, when I get to pushing the hypervisor side, I still need to work on getting that 'xinterface' patch to you guys. I would certainly be CC'ing kvm@vger when that happens since it modifies the KVM code.
>
> So instead, I would just encourage anyone interested (such as yourself) to join the alacrity list so I don't bother the KVM community unless absolutely necessary.
>
It's true that vbus is a separate project (in fact even virtio is
completely separate from kvm). Still I think it would be of interest to
many kvm@ readers.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 12:24 ` Michael S. Tsirkin
@ 2009-08-06 13:00 ` Gregory Haskins
0 siblings, 0 replies; 27+ messages in thread
From: Gregory Haskins @ 2009-08-06 13:00 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: alacrityvm-devel, kvm, linux-kernel, netdev
>>> On 8/6/2009 at 8:24 AM, in message <20090806122449.GC11038@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Aug 06, 2009 at 06:08:27AM -0600, Gregory Haskins wrote:
>> Hi Michael,
>>
>> >>> On 8/6/2009 at 4:19 AM, in message <20090806081955.GA9752@redhat.com>,
>> "Michael S. Tsirkin" <mst@redhat.com> wrote:
>> > On Mon, Aug 03, 2009 at 01:17:30PM -0400, Gregory Haskins wrote:
>> >> (Applies to v2.6.31-rc5, proposed for linux-next after review is complete)
>> >
>> > These are guest drivers, right?
>>
>> Yep.
>>
>> > Merging the guest first means relying on
>> > kernel interface from an out of tree driver, which well might change
>> > before it goes in.
>>
>> ABI compatibility is already addressed/handled, so even if that is true its
> not a problem.
>
> It is? With versioning? Presumably this:
>
> + params.devid = vdev->id;
> + params.version = version;
> +
> + ret = vbus_pci_hypercall(VBUS_PCI_HC_DEVOPEN,
> + &params, sizeof(params));
> + if (ret < 0)
> + return ret;
This is part of it. There are various ABI version components (which, by the way, are only expected to change while the code is experimental/alpha). The other component is capability functions (such as NEGCAP in the venet driver).
>
> Even assuming host even knows how to decode this structure (e.g. some
> other host module doesn't use VBUS_PCI_HC_DEVOPEN),
This argument demonstrates a fundamental lack of understanding of how AlacrityVM works. Please study the code more closely and you will see that your concern is illogical. If it's still not clear, let me know and I will walk you through it.
> checks the version
> and denies older guests, this might help guest not to crash, but guest
> still won't work.
That's ok. As I said above, the version number is just there for gross ABI protection and generally will never be changed once a driver is "official" (if at all). We use things like capability-bit negotiation to allow backwards compat.
For an example, see drivers/net/vbus-enet.c, line 703:
http://git.kernel.org/?p=linux/kernel/git/ghaskins/alacrityvm/linux-2.6.git;a=blob;f=drivers/net/vbus-enet.c;h=7220f43723adc5b0bece1bc37974fae1b034cd9e;hb=b3b2339efbd4e754b1c85f8bc8f85f21a1a1f509#l703
venet exposes a verb "NEGCAP" (negotiate capabilities), which is used to extend the ABI. The version number you quote above (on the device open) is really just a check to make sure the NEGCAP ABI is compatible. The rest of the ABI is negotiated at runtime with capability feature bits.
FWIW, I decided not to build a per-device capability mechanism into the low-level vbus protocol (e.g. there is no VBUS_PCI_HC_NEGCAP) because I felt as though the individual devices could better express their own capability mechanism, rather than try to generalize it. Therefore it is up to each device to define its own mechanism, presumably using a verb from its own private call() namespace (as venet has done).
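To sketch the general shape of that negotiation, here is a toy example (illustrative only; the names, verb number, and struct layout are invented for this sketch and do not match the actual venet ABI):

/* Illustrative sketch only: not the real venet NEGCAP interface. */
#include <stddef.h>
#include <stdint.h>

#define CAP_SG   (1u << 0)   /* e.g. scatter-gather support */
#define CAP_GSO  (1u << 1)   /* e.g. GSO support            */

struct negcap {
        uint32_t gid;    /* capability group being negotiated */
        uint32_t bits;   /* capabilities the guest can handle */
};

/* Stand-in for the device's private call() verb (a vbus devcall). */
extern int devcall(uint32_t func, void *data, size_t len);

static int negotiate_caps(uint32_t wanted, uint32_t *agreed)
{
        struct negcap nc = { .gid = 0, .bits = wanted };
        int ret;

        /* The host clears any bits it does not support and returns the rest. */
        ret = devcall(1 /* hypothetical NEGCAP verb */, &nc, sizeof(nc));
        if (ret < 0)
                return ret;

        *agreed = nc.bits & wanted;
        return 0;
}

A driver would then do something like negotiate_caps(CAP_SG | CAP_GSO, &caps) at probe time and only enable the features present in the result. The version number only gates whether this exchange is understood at all; everything past that is feature bits, so older and newer components can still find a common subset.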
>
>> > Would it make more sense to start merging with the host side of the
> project?
>>
>> Not necessarily, no. These are drivers for a "device", so its no
>> different than merging any other driver really. This is especially
>> true since the hypervisor is also already published and freely
>> available today, so anyone can start using it.
>
> The difference is clear to me: devices do not get to set kernel/userspace
> interfaces. This "device" depends on a specific interface between
> kernel and (guest) userspace.
This doesn't really parse for me, but I think the gist of it is based on an incorrect assumption.
Can you elaborate?
Kind Regards,
-Greg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 12:54 ` Avi Kivity
@ 2009-08-06 13:03 ` Gregory Haskins
2009-08-06 13:44 ` Avi Kivity
0 siblings, 1 reply; 27+ messages in thread
From: Gregory Haskins @ 2009-08-06 13:03 UTC (permalink / raw)
To: Avi Kivity
Cc: alacrityvm-devel, Michael S. Tsirkin, kvm, linux-kernel, netdev
>>> On 8/6/2009 at 8:54 AM, in message <4A7AD29E.50800@redhat.com>, Avi Kivity
<avi@redhat.com> wrote:
> On 08/06/2009 03:08 PM, Gregory Haskins wrote:
>>> Merging the guest first means relying on
>>> kernel interface from an out of tree driver, which well might change
>>> before it goes in.
>>>
>>
>> ABI compatibility is already addressed/handled, so even if that is true its
> not a problem.
>>
>>
>
> Really the correct way to address the ABI is to publish a spec and write
> both host and guest drivers to that. Unfortunately we didn't do this
> with virtio.
>
> It becomes more important when you have multiple implementations (e.g.
> Windows drivers).
>
>>>> This series implements the guest-side drivers for accelerated IO
>>>> when running on top of the AlacrityVM hypervisor, the details of
>>>> which you can find here:
>>>>
>>>> http://developer.novell.com/wiki/index.php/AlacrityVM
>>>>
>>> Since AlacrityVM is kvm based, Cc kvm@vger.kernel.org.
>>>
>>
>> I *can* do that, but there is nothing in these drivers that is KVM specific
> (its all pure PCI and VBUS). I've already made the general announcement
> about the project/ml cross posted to KVM for anyone that might be interested,
> but I figure I will spare the general KVM list the details unless something
> specifically pertains to, or affects, KVM. For instance, when I get to
> pushing the hypervisor side, I still need to work on getting that
> 'xinterface' patch to you guys. I would certainly be CC'ing kvm@vger when
> that happens since it modifies the KVM code.
>>
>> So instead, I would just encourage anyone interested (such as yourself) to
> join the alacrity list so I don't bother the KVM community unless absolutely
> necessary.
>>
>
> It's true that vbus is a separate project (in fact even virtio is
> completely separate from kvm). Still I think it would be of interest to
> many kvm@ readers.
Well, my goal was to not annoy KVM readers. ;) So if you feel as though there is benefit to having all of KVM CC'd and I won't be annoying everyone, I see no problem in cross posting.
Would you like to see all conversations, or just ones related to code (and, of course, KVM relevant items)?
Regards,
-Greg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 13:03 ` Gregory Haskins
@ 2009-08-06 13:44 ` Avi Kivity
2009-08-06 13:45 ` Gregory Haskins
0 siblings, 1 reply; 27+ messages in thread
From: Avi Kivity @ 2009-08-06 13:44 UTC (permalink / raw)
To: Gregory Haskins
Cc: alacrityvm-devel, Michael S. Tsirkin, kvm, linux-kernel, netdev
On 08/06/2009 04:03 PM, Gregory Haskins wrote:
>> It's true that vbus is a separate project (in fact even virtio is
>> completely separate from kvm). Still I think it would be of interest to
>> many kvm@ readers.
>>
>
> Well, my goal was to not annoy KVM readers. ;) So if you feel as though there is benefit to having all of KVM CC'd and I won't be annoying everyone, I see no problem in cross posting.
>
I can only speak for myself: I'm interested in this project (though
still rooting for virtio).
> Would you like to see all conversations, or just ones related to code (and, of course, KVM relevant items)
I guess internal vbus changes won't be too interesting for most readers,
but new releases, benchmarks, and kvm-related stuff will be welcome on
the kvm list.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 13:44 ` Avi Kivity
@ 2009-08-06 13:45 ` Gregory Haskins
2009-08-06 13:57 ` Avi Kivity
2009-08-06 13:59 ` Michael S. Tsirkin
0 siblings, 2 replies; 27+ messages in thread
From: Gregory Haskins @ 2009-08-06 13:45 UTC (permalink / raw)
To: Avi Kivity
Cc: alacrityvm-devel, Michael S. Tsirkin, kvm, linux-kernel, netdev
>>> On 8/6/2009 at 9:44 AM, in message <4A7ADE23.5010208@redhat.com>, Avi Kivity
<avi@redhat.com> wrote:
> On 08/06/2009 04:03 PM, Gregory Haskins wrote:
>>> It's true that vbus is a separate project (in fact even virtio is
>>> completely separate from kvm). Still I think it would be of interest to
>>> many kvm@ readers.
>>>
>>
>> Well, my goal was to not annoy KVM readers. ;) So if you feel as though
> there is benefit to having all of KVM CC'd and I won't be annoying everyone,
> I see no problem in cross posting.
>>
>
> I can only speak for myself, I'm interested in this project
In that case, the best solution is probably to have you (and anyone else interested) sign up, then:
https://lists.sourceforge.net/lists/listinfo/alacrityvm-devel
https://lists.sourceforge.net/lists/listinfo/alacrityvm-users
> (though still rooting for virtio).
Heh...not to belabor the point to death, but virtio is orthogonal (you keep forgetting that ;).
It's really the vbus device-model vs the qemu device-model (and possibly vs the "in-kernel pci emulation" model that I believe Michael is working on).
You can run virtio on any of those three.
>
>> Would you like to see all conversations, or just ones related to code (and,
> of course, KVM relevant items)
>
> I guess internal vbus changes won't be too interesting for most readers,
> but new releases, benchmarks, and kvm-related stuff will be welcome on
> the kvm list.
Ok, I was planning on that anyway.
Regards,
-Greg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 13:45 ` Gregory Haskins
@ 2009-08-06 13:57 ` Avi Kivity
2009-08-06 14:06 ` Gregory Haskins
2009-08-06 13:59 ` Michael S. Tsirkin
1 sibling, 1 reply; 27+ messages in thread
From: Avi Kivity @ 2009-08-06 13:57 UTC (permalink / raw)
To: Gregory Haskins
Cc: alacrityvm-devel, Michael S. Tsirkin, kvm, linux-kernel, netdev
On 08/06/2009 04:45 PM, Gregory Haskins wrote:
>
>> (though still rooting for virtio).
>>
>
> Heh...not to belabor the point to death, but virtio is orthogonal (you keep forgetting that ;).
>
> Its really the vbus device-model vs the qemu device-model (and possibly vs the "in-kernel pci emulation" model that I believe Michael is working on).
>
> You can run virtio on any of those three.
>
It's not orthogonal. virtio is one set of ABI+guest drivers+host
support to get networking on kvm guests. AlacrityVM's vbus-based
drivers are another set of ABI+guest drivers+host support to get
networking on kvm guests. That makes them competitors (two different
ways to do one thing), not orthogonal.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 13:45 ` Gregory Haskins
2009-08-06 13:57 ` Avi Kivity
@ 2009-08-06 13:59 ` Michael S. Tsirkin
2009-08-06 14:07 ` Gregory Haskins
1 sibling, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2009-08-06 13:59 UTC (permalink / raw)
To: Gregory Haskins; +Cc: Avi Kivity, alacrityvm-devel, kvm, linux-kernel, netdev
On Thu, Aug 06, 2009 at 07:45:30AM -0600, Gregory Haskins wrote:
> > (though still rooting for virtio).
>
> Heh...not to belabor the point to death, but virtio is orthogonal (you keep forgetting that ;).
venet and virtio aren't orthogonal, are they?
--
MST
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 13:57 ` Avi Kivity
@ 2009-08-06 14:06 ` Gregory Haskins
2009-08-06 15:40 ` Arnd Bergmann
0 siblings, 1 reply; 27+ messages in thread
From: Gregory Haskins @ 2009-08-06 14:06 UTC (permalink / raw)
To: Avi Kivity
Cc: alacrityvm-devel, Michael S. Tsirkin, kvm, linux-kernel, netdev
>>> On 8/6/2009 at 9:57 AM, in message <4A7AE150.7040009@redhat.com>, Avi Kivity
<avi@redhat.com> wrote:
> On 08/06/2009 04:45 PM, Gregory Haskins wrote:
>>
>>> (though still rooting for virtio).
>>>
>>
>> Heh...not to belabor the point to death, but virtio is orthogonal (you keep
> forgetting that ;).
>>
>> Its really the vbus device-model vs the qemu device-model (and possibly vs the
> "in-kernel pci emulation" model that I believe Michael is working on).
>>
>> You can run virtio on any of those three.
>>
>
> It's not orthogonal. virtio is one set of ABI+guest drivers+host
> support to get networking on kvm guests. AlacrityVM's vbus-based
> drivers are another set of ABI+guest drivers+host support to get
> networking on kvm guests. That makes them competitors (two different
> ways to do one thing), not orthogonal.
That's not accurate, though.
The virtio stack is modular. For instance, with virtio-net, you have
(guest-side)
|--------------------------
| virtio-net
|--------------------------
| virtio-ring
|--------------------------
| virtio-bus
|--------------------------
| virtio-pci
|--------------------------
|
(pci)
|
|--------------------------
| kvm.ko
|--------------------------
| qemu
|--------------------------
| tun-tap
|--------------------------
| netif
|--------------------------
(host-side)
We can exchange out the "virtio-pci" module like this:
(guest-side)
|--------------------------
| virtio-net
|--------------------------
| virtio-ring
|--------------------------
| virtio-bus
|--------------------------
| virtio-vbus
|--------------------------
| vbus-proxy
|--------------------------
| vbus-connector
|--------------------------
|
(vbus)
|
|--------------------------
| kvm.ko
|--------------------------
| vbus-connector
|--------------------------
| vbus
|--------------------------
| virtio-net-tap (vbus model)
|--------------------------
| netif
|--------------------------
(host-side)
So virtio-net runs unmodified. What is "competing" here is "virtio-pci" vs "virtio-vbus". Also, venet vs virtio-net are technically competing. But to say "virtio vs vbus" is inaccurate, IMO.
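To put some code behind the diagram (purely illustrative; the types below are simplified stand-ins, not the real virtio_config_ops or vbus-proxy interfaces): the driver only ever sees an ops table, and either transport can fill it in.

/* Simplified illustration; not the actual virtio or vbus interfaces. */
#include <stddef.h>
#include <stdint.h>

struct xport_ops {
        /* read/write device metadata (config space, or its devcall equivalent) */
        int  (*config_get)(void *priv, unsigned int offset, void *buf, size_t len);
        int  (*config_set)(void *priv, unsigned int offset, const void *buf, size_t len);
        /* kick the host about new work in a given queue */
        void (*notify)(void *priv, unsigned int queue);
};

struct xport_dev {
        const struct xport_ops *ops;   /* filled in by "virtio-pci" or "virtio-vbus" */
        void *priv;                    /* transport-private state */
};

/* The virtio-net-level code stays transport agnostic: */
static int read_mac(struct xport_dev *dev, uint8_t mac[6])
{
        return dev->ops->config_get(dev->priv, 0, mac, 6);
}

Whether config_get() ends up as a PCI config-space access or a vbus devcall is invisible to the driver sitting above it.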
HTH
-Greg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 13:59 ` Michael S. Tsirkin
@ 2009-08-06 14:07 ` Gregory Haskins
0 siblings, 0 replies; 27+ messages in thread
From: Gregory Haskins @ 2009-08-06 14:07 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: alacrityvm-devel, Avi Kivity, kvm, linux-kernel, netdev
>>> On 8/6/2009 at 9:59 AM, in message <20090806135903.GA11530@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Aug 06, 2009 at 07:45:30AM -0600, Gregory Haskins wrote:
>> > (though still rooting for virtio).
>>
>> Heh...not to belabor the point to death, but virtio is orthogonal (you keep
> forgetting that ;).
>
> venet and virtio aren't orthogonal, are they?
See my last reply to Avi.
Regards,
-Greg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 14:06 ` Gregory Haskins
@ 2009-08-06 15:40 ` Arnd Bergmann
2009-08-06 15:45 ` Michael S. Tsirkin
` (2 more replies)
0 siblings, 3 replies; 27+ messages in thread
From: Arnd Bergmann @ 2009-08-06 15:40 UTC (permalink / raw)
To: Gregory Haskins
Cc: Avi Kivity, alacrityvm-devel, Michael S. Tsirkin, kvm,
linux-kernel, netdev
On Thursday 06 August 2009, Gregory Haskins wrote:
> We can exchange out the "virtio-pci" module like this:
>
> (guest-side)
> |--------------------------
> | virtio-net
> |--------------------------
> | virtio-ring
> |--------------------------
> | virtio-bus
> |--------------------------
> | virtio-vbus
> |--------------------------
> | vbus-proxy
> |--------------------------
> | vbus-connector
> |--------------------------
> |
> (vbus)
> |
> |--------------------------
> | kvm.ko
> |--------------------------
> | vbus-connector
> |--------------------------
> | vbus
> |--------------------------
> | virtio-net-tap (vbus model)
> |--------------------------
> | netif
> |--------------------------
> (host-side)
>
>
> So virtio-net runs unmodified. What is "competing" here is "virtio-pci" vs "virtio-vbus".
> Also, venet vs virtio-net are technically competing. But to say "virtio vs vbus" is inaccurate, IMO.
I think what's confusing everyone is that you are competing on multiple
issues:
1. Implementation of bus probing: both vbus and virtio are backed by
PCI devices and can be backed by something else (e.g. virtio by lguest
or even by vbus).
2. Exchange of metadata: virtio uses a config space, vbus uses devcall
to do the same.
3. User data transport: virtio has virtqueues, vbus has shm/ioq.
I think these three are the main differences, and the venet vs. virtio-net
question comes down to which interface the drivers use for each aspect. Do
you agree with this interpretation?
Now to draw conclusions from each of these is of course highly subjective,
but this is how I view it:
1. The bus probing is roughly equivalent, they both work and the
virtio method seems to need a little less code but that could be fixed
by slimming down the vbus code as I mentioned in my comments on the
pci-to-vbus bridge code. However, I would much prefer not to have both
of them, and virtio came first.
2. the two methods (devcall/config space) are more or less equivalent
and you should be able to implement each one through the other one. The
virtio design was driven by making it look similar to PCI, the vbus
design was driven by making it easy to implement in a host kernel. I
don't care too much about these, as they can probably coexist without
causing any trouble. For a (hypothetical) vbus-in-virtio device,
a devcall can be a config-set/config-get pair; for a virtio-in-vbus,
you can do a config-get and a config-set devcall and be happy. Each
could be done in a trivial helper library.
3. The ioq method seems to be the real core of your work that makes
venet perform better than virtio-net with its virtqueues. I don't see
any reason to doubt that your claim is correct. My conclusion from
this would be to add support for ioq to virtio devices, alongside
virtqueues, but to leave out the extra bus_type and probing method.
Arnd <><
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 15:40 ` Arnd Bergmann
@ 2009-08-06 15:45 ` Michael S. Tsirkin
2009-08-06 16:28 ` Pantelis Koukousoulas
2009-08-06 15:50 ` Avi Kivity
2009-08-06 16:29 ` Gregory Haskins
2 siblings, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2009-08-06 15:45 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Gregory Haskins, Avi Kivity, alacrityvm-devel, kvm, linux-kernel,
netdev
On Thu, Aug 06, 2009 at 05:40:04PM +0200, Arnd Bergmann wrote:
> 3. The ioq method seems to be the real core of your work that makes
> venet perform better than virtio-net with its virtqueues. I don't see
> any reason to doubt that your claim is correct. My conclusion from
> this would be to add support for ioq to virtio devices, alongside
> virtqueues, but to leave out the extra bus_type and probing method.
>
> Arnd <><
The fact that it's in kernel also likely contributes.
--
MST
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 15:40 ` Arnd Bergmann
2009-08-06 15:45 ` Michael S. Tsirkin
@ 2009-08-06 15:50 ` Avi Kivity
2009-08-06 16:55 ` Gregory Haskins
2009-08-06 16:29 ` Gregory Haskins
2 siblings, 1 reply; 27+ messages in thread
From: Avi Kivity @ 2009-08-06 15:50 UTC (permalink / raw)
To: Arnd Bergmann
Cc: Gregory Haskins, alacrityvm-devel, Michael S. Tsirkin, kvm,
linux-kernel, netdev
On 08/06/2009 06:40 PM, Arnd Bergmann wrote:
> 3. The ioq method seems to be the real core of your work that makes
> venet perform better than virtio-net with its virtqueues. I don't see
> any reason to doubt that your claim is correct. My conclusion from
> this would be to add support for ioq to virtio devices, alongside
> virtqueues, but to leave out the extra bus_type and probing method.
>
The current conjecture is that ioq outperforms virtio because the host
side of ioq is implemented in the host kernel, while the host side of
virtio is implemented in userspace. AFAIK, no one pointed out
differences in the protocol which explain the differences in performance.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 15:45 ` Michael S. Tsirkin
@ 2009-08-06 16:28 ` Pantelis Koukousoulas
2009-08-07 12:14 ` Gregory Haskins
0 siblings, 1 reply; 27+ messages in thread
From: Pantelis Koukousoulas @ 2009-08-06 16:28 UTC (permalink / raw)
To: kvm
How hard would it be to implement virtio over vbus and perhaps the
virtio-net backend?
This would leave only one variable in the comparison, clear misconceptions and
make evaluation easier by judging each of vbus, venet etc separately on its own
merits.
The way things are now, it is unclear exactly where those performance
improvements are coming from (or how much each component contributes)
because there are too many variables.
Replacing virtio-net by venet would be a hard proposition if only because
virtio-net has (closed source) Windows drivers available. It has to be
shown that venet by itself does something significantly better that
virtio-net can't be modified to do comparably well.
Having venet in addition to virtio-net is also difficult, given that having only
one set of paravirtual drivers in the kernel was the whole point behind virtio.
Just a user's 0.02,
Pantelis
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 15:40 ` Arnd Bergmann
2009-08-06 15:45 ` Michael S. Tsirkin
2009-08-06 15:50 ` Avi Kivity
@ 2009-08-06 16:29 ` Gregory Haskins
2009-08-06 23:23 ` Ira W. Snyder
2 siblings, 1 reply; 27+ messages in thread
From: Gregory Haskins @ 2009-08-06 16:29 UTC (permalink / raw)
To: Arnd Bergmann
Cc: alacrityvm-devel, Avi Kivity, Michael S. Tsirkin, kvm,
linux-kernel, netdev
>>> On 8/6/2009 at 11:40 AM, in message <200908061740.04276.arnd@arndb.de>, Arnd
Bergmann <arnd@arndb.de> wrote:
> On Thursday 06 August 2009, Gregory Haskins wrote:
>> We can exchange out the "virtio-pci" module like this:
>>
>> (guest-side)
>> |--------------------------
>> | virtio-net
>> |--------------------------
>> | virtio-ring
>> |--------------------------
>> | virtio-bus
>> |--------------------------
>> | virtio-vbus
>> |--------------------------
>> | vbus-proxy
>> |--------------------------
>> | vbus-connector
>> |--------------------------
>> |
>> (vbus)
>> |
>> |--------------------------
>> | kvm.ko
>> |--------------------------
>> | vbus-connector
>> |--------------------------
>> | vbus
>> |--------------------------
>> | virtio-net-tap (vbus model)
>> |--------------------------
>> | netif
>> |--------------------------
>> (host-side)
>>
>>
>> So virtio-net runs unmodified. What is "competing" here is "virtio-pci" vs
> "virtio-vbus".
>> Also, venet vs virtio-net are technically competing. But to say "virtio vs
> vbus" is inaccurate, IMO.
>
>
> I think what's confusing everyone is that you are competing on multiple
> issues:
>
> 1. Implementation of bus probing: both vbus and virtio are backed by
> PCI devices and can be backed by something else (e.g. virtio by lguest
> or even by vbus).
More specifically, vbus-proxy and virtio-bus can be backed by modular adapters.
vbus-proxy can be backed by vbus-pcibridge (as it is in AlacrityVM). It was backed by KVM-hypercalls in previous releases, but we have deprecated/dropped that connector. Other types of connectors are possible...
virtio-bus can be backed by virtio-pci, virtio-lguest, virtio-s390, and virtio-vbus (which is backed by vbus-proxy, et. al.)
"vbus" itself is actually the host-side container technology which vbus-proxy connects to. This is an important distinction.
>
> 2. Exchange of metadata: virtio uses a config space, vbus uses devcall
> to do the same.
Sort of. You can use devcall() to implement something like config-space (and in fact, we do use it like this for some operations). But this can also be a fast path (for when you need synchronous behavior).
This has various uses, such as when you need synchronous updates from non-preemptible guest code (cpupri, for instance, for -rt).
>
> 3. User data transport: virtio has virtqueues, vbus has shm/ioq.
Not quite: vbus has shm + shm-signal. You can then overlay shared-memory protocols over that, such as virtqueues, ioq, or even non-ring constructs.
I also consider the synchronous call() method to be part of the transport (though more for niche devices, like -rt).
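As a rough illustration of that split (the field and function names below are invented and do not match the real shm_signal.h), the signaling layer only cares about an enabled/pending pair in the shared region; whatever protocol sits on top of the rest of the memory is its own business:

/* Illustration only; not the actual shm-signal ABI. */
#include <stdint.h>

struct shm_signal_desc {          /* lives inside the shared memory region */
        uint32_t enabled;         /* consumer wants to be signalled        */
        uint32_t pending;         /* producer has posted new work          */
};

/* Assumed "kick the other side" primitive (hypercall, MSI, eventfd, ...). */
extern void raise_signal(int id);

/* Producer: publish work into the (protocol-specific) shared memory first,
 * then signal only if the consumer actually wants an interrupt. */
static void shm_signal_inject(struct shm_signal_desc *s, int id)
{
        s->pending = 1;
        __sync_synchronize();     /* make the shared-memory update visible */
        if (s->enabled)
                raise_signal(id);
}

/* Consumer: mask further signals while draining whatever ring or queue
 * (virtqueue, ioq, ...) is layered on top of the same shared memory. */
static void shm_signal_begin_poll(struct shm_signal_desc *s)
{
        s->enabled = 0;
        s->pending = 0;
        __sync_synchronize();
}

ioq and virtqueues are then just different ring layouts living in the rest of that shared region, with their wakeups funneled through a signal like this.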
>
> I think these three are the main differences, and the venet vs. virtio-net
> question comes down to which interface the drivers use for each aspect. Do
> you agree with this interpretation?
>
> Now to draw conclusions from each of these is of course highly subjective,
> but this is how I view it:
>
> 1. The bus probing is roughly equivalent, they both work and the
> virtio method seems to need a little less code but that could be fixed
> by slimming down the vbus code as I mentioned in my comments on the
> pci-to-vbus bridge code. However, I would much prefer not to have both
> of them, and virtio came first.
>
> 2. the two methods (devcall/config space) are more or less equivalent
> and you should be able to implement each one through the other one. The
> virtio design was driven by making it look similar to PCI, the vbus
> design was driven by making it easy to implement in a host kernel. I
> don't care too much about these, as they can probably coexist without
> causing any trouble. For a (hypothetical) vbus-in-virtio device,
> a devcall can be a config-set/config-get pair, for a virtio-in-vbus,
> you can do a config-get and a config-set devcall and be happy. Each
> could be done in a trivial helper library.
Yep, in fact I published something close to what I think you are talking about back in April:
http://lkml.org/lkml/2009/4/21/427
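The virtio-in-vbus direction really is about that trivial. Roughly (the verb numbers and transfer struct here are invented for illustration; the real virtio-vbus patch differs in the details):

/* Illustration only: virtio-style config accessors layered on a devcall. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

extern int devcall(uint32_t func, void *data, size_t len);

#define HC_CONFIG_GET  100   /* invented verb numbers */
#define HC_CONFIG_SET  101

struct config_xfer {
        uint32_t offset;
        uint32_t len;
        uint8_t  data[64];
};

static int config_get(unsigned int offset, void *buf, size_t len)
{
        struct config_xfer xfer = { .offset = offset, .len = (uint32_t)len };
        int ret;

        if (len > sizeof(xfer.data))
                return -1;

        ret = devcall(HC_CONFIG_GET, &xfer, sizeof(xfer));
        if (!ret)
                memcpy(buf, xfer.data, len);

        return ret;
}

static int config_set(unsigned int offset, const void *buf, size_t len)
{
        struct config_xfer xfer = { .offset = offset, .len = (uint32_t)len };

        if (len > sizeof(xfer.data))
                return -1;

        memcpy(xfer.data, buf, len);
        return devcall(HC_CONFIG_SET, &xfer, sizeof(xfer));
}

Going the other way (vbus-in-virtio) is just the same shim inverted: a devcall becomes a config-set of the arguments followed by a config-get of the result.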
>
> 3. The ioq method seems to be the real core of your work that makes
> venet perform better than virtio-net with its virtqueues. I don't see
> any reason to doubt that your claim is correct. My conclusion from
> this would be to add support for ioq to virtio devices, alongside
> virtqueues, but to leave out the extra bus_type and probing method.
While I appreciate the sentiment, I doubt that is actually what's helping here.
There are a variety of factors that I poured into venet/vbus that I think contribute to its superior performance. However, the difference in the ring design I do not think is one of them. In fact, in many ways I think Rusty's design might turn out to be faster if put side by side, because he was much more careful with cacheline alignment than I was. Also note that I was careful to not pick one ring vs the other ;) They both should work.
IMO, we are only looking at the tip of the iceberg when looking at this purely as the difference between virtio-pci vs virtio-vbus, or venet vs virtio-net.
Really, the big thing I am working on here is the host side device-model. The idea here was to design a bus model that was conducive to high performance, software to software IO that would work in a variety of environments (that may or may not have PCI). KVM is one such environment, but I also have people looking at building other types of containers, and even physical systems (host+blade kind of setups).
The idea is that the "connector" is modular, and then something like virtio-net or venet "just works": in kvm, in the userspace container, on the blade system.
It provides a management infrastructure that (hopefully) makes sense for these different types of containers, regardless of whether they have PCI, QEMU, etc (e.g. things that are inherent to KVM, but not others).
I hope this helps to clarify the project :)
Kind Regards,
-Greg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 15:50 ` Avi Kivity
@ 2009-08-06 16:55 ` Gregory Haskins
2009-08-09 7:48 ` Avi Kivity
0 siblings, 1 reply; 27+ messages in thread
From: Gregory Haskins @ 2009-08-06 16:55 UTC (permalink / raw)
To: Arnd Bergmann, Avi Kivity
Cc: alacrityvm-devel, Michael S. Tsirkin, kvm, linux-kernel, netdev
>>> On 8/6/2009 at 11:50 AM, in message <4A7AFBE3.5080200@redhat.com>, Avi Kivity
<avi@redhat.com> wrote:
> On 08/06/2009 06:40 PM, Arnd Bergmann wrote:
>> 3. The ioq method seems to be the real core of your work that makes
>> venet perform better than virtio-net with its virtqueues. I don't see
>> any reason to doubt that your claim is correct. My conclusion from
>> this would be to add support for ioq to virtio devices, alongside
>> virtqueues, but to leave out the extra bus_type and probing method.
>>
>
> The current conjecture is that ioq outperforms virtio because the host
> side of ioq is implemented in the host kernel, while the host side of
> virtio is implemented in userspace. AFAIK, no one pointed out
> differences in the protocol which explain the differences in performance.
There *are* protocol differences that matter, though I think they are slowly being addressed.
For example: earlier versions of virtio-pci had a single interrupt for all ring events, and you had to do an extra MMIO cycle to learn the proper context. That will hurt... a _lot_, especially for latency. I think recent versions of KVM switched to MSI-X per queue, which fixed this particular ugliness.
However, generally I think Avi is right. The main reason why it outperforms virtio-pci by such a large margin has more to do with all the various inefficiencies in the backend (such as requiring multiple hops U->K, K->U per packet), coarse locking, lack of parallel processing, etc. I went through and streamlined all the bottlenecks (such as putting the code in the kernel, reducing locking/context switches, etc).
I have every reason to believe that someone with skills/time equal to mine could develop a virtio-based backend that does not use vbus and achieve similar numbers. However, as stated in my last reply, I am interested in this backend supporting more than KVM, and I designed vbus to fill that role. Therefore, it does not interest me to undertake such an effort if it doesn't involve a backend that is independent of KVM.
Based on this, I will continue my efforts surrounding the use of vbus, including its use to accelerate KVM for AlacrityVM. If I can find a way to do this in such a way that KVM upstream finds acceptable, I would be very happy and will work towards whatever that compromise might be. OTOH, if the KVM community is set against the concept of a generalized/shared backend, and thus wants to use some other approach that does not involve vbus, that is fine too. Choice is one of the great assets of open source, eh? :)
Kind Regards,
-Greg
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 16:29 ` Gregory Haskins
@ 2009-08-06 23:23 ` Ira W. Snyder
0 siblings, 0 replies; 27+ messages in thread
From: Ira W. Snyder @ 2009-08-06 23:23 UTC (permalink / raw)
To: Gregory Haskins
Cc: Arnd Bergmann, alacrityvm-devel, Avi Kivity, Michael S. Tsirkin,
kvm, linux-kernel, netdev
On Thu, Aug 06, 2009 at 10:29:08AM -0600, Gregory Haskins wrote:
> >>> On 8/6/2009 at 11:40 AM, in message <200908061740.04276.arnd@arndb.de>, Arnd
> Bergmann <arnd@arndb.de> wrote:
> > On Thursday 06 August 2009, Gregory Haskins wrote:
[ big snip ]
> >
> > 3. The ioq method seems to be the real core of your work that makes
> > venet perform better than virtio-net with its virtqueues. I don't see
> > any reason to doubt that your claim is correct. My conclusion from
> > this would be to add support for ioq to virtio devices, alongside
> > virtqueues, but to leave out the extra bus_type and probing method.
>
> While I appreciate the sentiment, I doubt that is actually whats helping here.
>
> There are a variety of factors that I poured into venet/vbus that I think contribute to its superior performance. However, the difference in the ring design I do not think is one if them. In fact, in many ways I think Rusty's design might turn out to be faster if put side by side because he was much more careful with cacheline alignment than I was. Also note that I was careful to not pick one ring vs the other ;) They both should work.
IMO, the virtio vring design is very well thought out. I found it
relatively easy to port to a host+blade setup, and run virtio-net over a
physical PCI bus, connecting two physical CPUs.
>
> IMO, we are only looking at the tip of the iceberg when looking at this purely as the difference between virtio-pci vs virtio-vbus, or venet vs virtio-net.
>
> Really, the big thing I am working on here is the host side device-model. The idea here was to design a bus model that was conducive to high performance, software to software IO that would work in a variety of environments (that may or may not have PCI). KVM is one such environment, but I also have people looking at building other types of containers, and even physical systems (host+blade kind of setups).
>
> The idea is that the "connector" is modular, and then something like virtio-net or venet "just work": in kvm, in the userspace container, on the blade system.
>
> It provides a management infrastructure that (hopefully) makes sense for these different types of containers, regardless of whether they have PCI, QEMU, etc (e.g. things that are inherent to KVM, but not others).
>
> I hope this helps to clarify the project :)
>
I think this is the major benefit of vbus. I've only started studying
the vbus code, so I don't have lots to say yet. The overview of the
management interface makes it look pretty good.
Getting two virtio-net drivers hooked together in my virtio-over-PCI
patches was nasty. If you read the thread that followed, you'll see
the lack of a management interface as a concern of mine. It was
basically decided that it could come "later". The configfs interface
vbus provides is pretty nice, IMO.
Just my two cents,
Ira
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 16:28 ` Pantelis Koukousoulas
@ 2009-08-07 12:14 ` Gregory Haskins
0 siblings, 0 replies; 27+ messages in thread
From: Gregory Haskins @ 2009-08-07 12:14 UTC (permalink / raw)
To: Pantelis Koukousoulas; +Cc: kvm, alacrityvm-devel
[-- Attachment #1: Type: text/plain, Size: 2502 bytes --]
[not sure if it was intentional, but you dropped the CC list.
Therefore, I didn't see this until I caught up on my kvm@vger reading]
Pantelis Koukousoulas wrote:
> How hard would it be to implement virtio over vbus and perhaps the
> virtio-net backend?
It should be relatively trivial. I have already written the transport
(called virtio-vbus) that would allow the existing front-end
(virtio-net) to work without modification.
http://lkml.org/lkml/2009/4/21/427
All that is needed is to take venet-tap as an example and port it to
something virtio compatible (via that patch I posted) on the backend. I
have proposed this as an alternative to venet, but so far I have not had
any takers to help with this effort. Likewise, I am too busy with the
infrastructure to take this on myself.
>
> This would leave only one variable in the comparison, clear misconceptions and
> make evaluation easier by judging each of vbus, venet etc separately on its own
> merits.
>
> The way things are now, it is unclear exactly where those performance
> improvements are coming from (or how much each component contributes)
> because there are too many variables.
>
> Replacing virtio-net by venet would be a hard proposition if only because
> virtio-net has (closed source) windows drivers available. There has to be
> shown that venet by itself does something significantly better that
> virtio-net can't be modified to do comparably well.
I am not proposing anyone replace virtio-net. It will continue to work
fine despite the existence of an alternative, and KVM can continue to
standardize on it if that is what KVM wants to do.
>
> Having venet in addition to virtio-net is also difficult, given that having only
> one set of paravirtual drivers in the kernel was the whole point behind virtio.
As it stands right now, virtio-net fails to meet my performance goals,
and venet meets them (or at least, gets much closer, but I will not
rest..). So, at least for AlacrityVM, I will continue to use and
promote it when performance matters. If at some time in the future I
can get virtio-net to work in my environment in a comparable and
satisfactory way, I will consider migrating to it and deprecating venet.
Until then, having two drivers is ok, and no-one has to use the one they
don't like. I certainly do not think having more than one driver that
speaks 802.x ethernet in the kernel tree is without precedent. ;)
Kind Regards,
-Greg
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 267 bytes --]
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 8:19 ` [PATCH 0/7] AlacrityVM guest drivers Reply-To: Michael S. Tsirkin
2009-08-06 10:17 ` Michael S. Tsirkin
2009-08-06 12:08 ` Gregory Haskins
@ 2009-08-07 14:19 ` Anthony Liguori
2009-08-07 15:05 ` [PATCH 0/7] AlacrityVM guest drivers Gregory Haskins
2 siblings, 1 reply; 27+ messages in thread
From: Anthony Liguori @ 2009-08-07 14:19 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Gregory Haskins, linux-kernel, alacrityvm-devel, netdev, kvm
Michael S. Tsirkin wrote:
>
>> This series includes the basic plumbing, as well as the driver for
>> accelerated 802.x (ethernet) networking.
>>
>
> The graphs comparing virtio with vbus look interesting.
>
1gbit throughput on a 10gbit link? I have a hard time believing that.
I've seen much higher myself. Can you describe your test setup in more
detail?
Regards,
Anthony Liguori
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers
2009-08-07 14:19 ` Anthony Liguori
@ 2009-08-07 15:05 ` Gregory Haskins
2009-08-07 15:46 ` Anthony Liguori
0 siblings, 1 reply; 27+ messages in thread
From: Gregory Haskins @ 2009-08-07 15:05 UTC (permalink / raw)
To: Anthony Liguori
Cc: Michael S. Tsirkin, Gregory Haskins, linux-kernel,
alacrityvm-devel, netdev, kvm
[-- Attachment #1: Type: text/plain, Size: 3158 bytes --]
Anthony Liguori wrote:
> Michael S. Tsirkin wrote:
>>
>>> This series includes the basic plumbing, as well as the driver for
>>> accelerated 802.x (ethernet) networking.
>>>
>>
>> The graphs comparing virtio with vbus look interesting.
>>
>
> 1gbit throughput on a 10gbit link? I have a hard time believing that.
>
> I've seen much higher myself. Can you describe your test setup in more
> detail?
Sure,
For those graphs, two 8-core x86_64 boxes with Chelsio T3 10GE connected
back-to-back via crossover with a 1500 MTU. The kernel version was as
posted. The qemu version was generally something very close to
qemu-kvm.git HEAD at the time the data was gathered, but unfortunately I
didn't seem to log this info.
For KVM, we take one of those boxes and run a bridge+tap configuration
on top of that. We always run the server on the bare-metal machine on
the remote side of the link, regardless of whether we run the client in a
VM or on bare metal.
For guests, virtio-net and venet connect to the same Linux bridge
instance; I just "ifdown eth0 / ifup eth1" (or vice versa) and repeat
the same test. I do this multiple times (usually about 10) and average
the result. I use several different programs, such as netperf, rsync,
and ping, to take measurements.
That said, note that the graphs were from earlier kernel runs (2.6.28,
29-rc8). The most recent data I can find that I published is for
2.6.29, announced with the vbus-v3 release back in April:
http://lkml.org/lkml/2009/4/21/408
In it, the virtio-net throughput numbers are substantially higher and
possibly more in line with your expectations (4.5gb/s) (though notably
still lagging venet, which weighed in at 5.6gb/s).
Generally, I find that virtio-net exhibits non-deterministic results
from release to release. I suspect (as we have discussed) the
tx-mitigation scheme. Some releases buffer the daylights out of the
stream, and virtio gets close(r) throughput (e.g. 4.5g vs 5.8g) but
absolutely terrible latency (4000us vs 65us). In other releases it seems
to operate with more of a compromise (1.3gb/s vs 3.8gb/s, but 350us vs
85us).
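A minimal sketch of how latency numbers like those can be taken (peer
address assumed; not necessarily the exact invocations used here):

  ping -c 100 -q 192.168.1.2               # round-trip min/avg/max in the summary
  netperf -H 192.168.1.2 -t TCP_RR -l 30   # request/response rate (~1/RTT)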
I do not understand what causes the virtio performance fluctuation, as I
use the same kernel config across builds, and I do not typically change
the qemu userspace. Note that some general fluctuation is evident
across the board just from kernel to kernel. I am referring more to the
disparity between throughput and latency than to the ultimate numbers,
as all targets seem to scale max throughput about the same per kernel.
That said, I know I need to redo the graphs against HEAD (31-rc5, and
perhaps 30, and kvm.git). I've been heads-down with the eventfd
interfaces since vbus-v3, so I haven't been as active in generating
results. I did confirm that vbus-v4 (alacrityvm-v0.1) still produces a
similar graph, but I didn't gather that data rigorously enough to feel
comfortable publishing a graph from it. This is on the TODO list.
If there is another patch-series/tree I should be using for comparison,
please point me at it.
HTH
Kind Regards,
-Greg
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 267 bytes --]
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers
2009-08-07 15:05 ` [PATCH 0/7] AlacrityVM guest drivers Gregory Haskins
@ 2009-08-07 15:46 ` Anthony Liguori
2009-08-07 18:04 ` Gregory Haskins
0 siblings, 1 reply; 27+ messages in thread
From: Anthony Liguori @ 2009-08-07 15:46 UTC (permalink / raw)
To: Gregory Haskins
Cc: Michael S. Tsirkin, Gregory Haskins, linux-kernel,
alacrityvm-devel, netdev, kvm
Gregory Haskins wrote:
> That said, note that the graphs were from earlier kernel runs (2.6.28,
> 29-rc8). The most recent data I can find that I published is for
> 2.6.29, announced with the vbus-v3 release back in April:
>
> http://lkml.org/lkml/2009/4/21/408
>
> In it, the virtio-net throughput numbers are substantially higher and
> possibly more in line with your expectations (4.5gb/s) (though notably
> still lagging venet, which weighed in at 5.6gb/s).
>
Okay, that makes more sense. Would be nice to update the graphs as they
make virtio look really, really bad :-)
> Generally, I find that virtio-net exhibits non-deterministic results
> from release to release. I suspect (as we have discussed) the
> tx-mitigation scheme. Some releases buffer the daylights out of the
> stream, and virtio gets close(r) throughput (e.g. 4.5g vs 5.8g) but
> absolutely terrible latency (4000us vs 65us). In other releases it seems
> to operate with more of a compromise (1.3gb/s vs 3.8gb/s, but 350us vs
> 85us).
>
Are you using kvm modules or a new kernel? There were some timer
infrastructure changes around 28/29, and it's possible that the system
you're on is now detecting an HPET, which will result in a better time
source. That could have an effect on mitigation.
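For reference, the clocksource the kernel actually selected is visible
under sysfs, so it is easy to compare across the kernels being
benchmarked:

  cat /sys/devices/system/clocksource/clocksource0/available_clocksource
  cat /sys/devices/system/clocksource/clocksource0/current_clocksource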
> If there is another patch-series/tree I should be using for comparison,
> please point me at it.
>
No, I think it's fair to look at upstream Linux. Looking at the latest
bits would be nice, though, because there are some recent virtio-friendly
changes like MSI-X and GRO.
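A quick way to confirm the guest is actually picking those up (the
interface name and PCI slot below are assumptions, and the GRO flag only
shows up with a reasonably recent ethtool):

  ethtool -k eth0 | grep generic-receive-offload    # GRO enabled?
  lspci -vv -s 00:03.0 | grep -i 'msi-x'            # MSI-X present/enabled?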
Since you're using the latest vbus bits, it makes sense to compare
against the latest virtio bits.
Regards,
Anthony Liguori
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers
2009-08-07 15:46 ` Anthony Liguori
@ 2009-08-07 18:04 ` Gregory Haskins
0 siblings, 0 replies; 27+ messages in thread
From: Gregory Haskins @ 2009-08-07 18:04 UTC (permalink / raw)
To: Anthony Liguori
Cc: Michael S. Tsirkin, Gregory Haskins, linux-kernel,
alacrityvm-devel, netdev, kvm
[-- Attachment #1: Type: text/plain, Size: 2218 bytes --]
Anthony Liguori wrote:
> Gregory Haskins wrote:
>> That said, note that the graphs were from earlier kernel runs (2.6.28,
>> 29-rc8). The most recent data I can find that I published is for
>> 2.6.29, announced with the vbus-v3 release back in April:
>>
>> http://lkml.org/lkml/2009/4/21/408
>>
>> In it, the virtio-net throughput numbers are substantially higher and
>> possibly more in line with your expectations (4.5gb/s) (though notably
>> still lagging venet, which weighed in at 5.6gb/s).
>>
>
> Okay, that makes more sense. Would be nice to update the graphs as they
> make virtio look really, really bad :-)
Yeah, they are certainly ripe for an update. (Note that I was equally
stale on the venet numbers, too.) ;)
>
>> Generally, I find that virtio-net exhibits non-deterministic results
>> from release to release. I suspect (as we have discussed) the
>> tx-mitigation scheme. Some releases buffer the daylights out of the
>> stream, and virtio gets close(r) throughput (e.g. 4.5g vs 5.8g) but
>> absolutely terrible latency (4000us vs 65us). In other releases it seems
>> to operate with more of a compromise (1.3gb/s vs 3.8gb/s, but 350us vs
>> 85us).
>>
>
> Are you using kvm modules or a new kernel?
I just build the entire kernel from git.
> There were some timer
> infrastructure changes around 28/29, and it's possible that the system
> you're on is now detecting an HPET, which will result in a better time
> source. That could have an effect on mitigation.
Yeah, I suspect you are right. I always kept the .config and machine
constant, but I *do* bounce around kernel versions so perhaps I got
hosed by a make-oldconfig cycle somewhere along the way.
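One cheap way to spot that kind of drift is to diff the timer-related
options between the configs that were actually used (the file names here
are hypothetical; requires bash for the process substitution):

  diff <(grep -E 'HPET|HIGH_RES_TIMERS|NO_HZ' config-2.6.28) \
       <(grep -E 'HPET|HIGH_RES_TIMERS|NO_HZ' config-2.6.31-rc5)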
>
>> If there is another patch-series/tree I should be using for comparison,
>> please point me at it.
>>
>
> No, I think it's fair to look at upstream Linux. Looking at the latest
> bits would be nice, though, because there are some recent virtio-friendly
> changes like MSI-X and GRO.
Yeah, I will definitely include kvm.git in addition to whatever is
current from Linus. I have already adopted the latest qemu-kvm.git into
my workflow.
Regards,
-Greg
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 267 bytes --]
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH 0/7] AlacrityVM guest drivers Reply-To:
2009-08-06 16:55 ` Gregory Haskins
@ 2009-08-09 7:48 ` Avi Kivity
0 siblings, 0 replies; 27+ messages in thread
From: Avi Kivity @ 2009-08-09 7:48 UTC (permalink / raw)
To: Gregory Haskins
Cc: Arnd Bergmann, alacrityvm-devel, Michael S. Tsirkin, kvm,
linux-kernel, netdev
On 08/06/2009 07:55 PM, Gregory Haskins wrote:
> Based on this, I will continue my efforts surrounding the use of vbus, including its use to accelerate KVM for AlacrityVM. If I can find a way to do this that KVM upstream finds acceptable, I would be very happy and will work towards whatever that compromise might be. OTOH, if the KVM community is set against the concept of a generalized/shared backend, and thus wants to use some other approach that does not involve vbus, that is fine too. Choice is one of the great assets of open source, eh? :)
>
KVM upstream (me) doesn't have much say regarding vbus. I am not a
networking expert and I'm not the virtio or networking stack maintainer,
so I'm not qualified to accept or reject the code. What I am able to do
is make sure that kvm can efficiently work with any driver/device stack;
this is why ioeventfd/irqfd were merged.
I still think vbus is a duplication of effort; I understand vbus has
larger scope than virtio, but I still think these problems could have
been solved within the existing virtio stack.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 27+ messages in thread
end of thread, other threads:[~2009-08-09 7:43 UTC | newest]
Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <20090803171030.17268.26962.stgit@dev.haskins.net>
2009-08-06 8:19 ` [PATCH 0/7] AlacrityVM guest drivers Reply-To: Michael S. Tsirkin
2009-08-06 10:17 ` Michael S. Tsirkin
2009-08-06 12:09 ` Gregory Haskins
2009-08-06 12:08 ` Gregory Haskins
2009-08-06 12:24 ` Michael S. Tsirkin
2009-08-06 13:00 ` Gregory Haskins
2009-08-06 12:54 ` Avi Kivity
2009-08-06 13:03 ` Gregory Haskins
2009-08-06 13:44 ` Avi Kivity
2009-08-06 13:45 ` Gregory Haskins
2009-08-06 13:57 ` Avi Kivity
2009-08-06 14:06 ` Gregory Haskins
2009-08-06 15:40 ` Arnd Bergmann
2009-08-06 15:45 ` Michael S. Tsirkin
2009-08-06 16:28 ` Pantelis Koukousoulas
2009-08-07 12:14 ` Gregory Haskins
2009-08-06 15:50 ` Avi Kivity
2009-08-06 16:55 ` Gregory Haskins
2009-08-09 7:48 ` Avi Kivity
2009-08-06 16:29 ` Gregory Haskins
2009-08-06 23:23 ` Ira W. Snyder
2009-08-06 13:59 ` Michael S. Tsirkin
2009-08-06 14:07 ` Gregory Haskins
2009-08-07 14:19 ` Anthony Liguori
2009-08-07 15:05 ` [PATCH 0/7] AlacrityVM guest drivers Gregory Haskins
2009-08-07 15:46 ` Anthony Liguori
2009-08-07 18:04 ` Gregory Haskins