* [PATCH RFC 0/8] virtio: add guest MSI-X support
@ 2009-04-27 12:31 Michael S. Tsirkin
2009-04-27 14:00 ` Christian Borntraeger
2009-04-27 14:39 ` Anthony Liguori
0 siblings, 2 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2009-04-27 12:31 UTC (permalink / raw)
To: Rusty Russell, virtualization, Anthony Liguori, kvm, avi
Add optional MSI-X support: use a vector per virtqueue with
fallback to a common vector and finally to regular interrupt.
Teach all drivers to use it.
I added 2 new virtio operations: request_vqs/free_vqs because MSI
needs to know the total number of vectors upfront.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
Here's a draft set of patches for MSI-X support in the guest. It still
needs to be tested properly, and performance impact measured, but I
thought I'd share it here in the hope of getting some very early
feedback/flames.
Michael S. Tsirkin (8):
virtio: add request_vqs/free_vqs virtio operations
virtio_blk: add request_vqs/free_vqs calls
virtio-rng: add request_vqs/free_vqs calls
virtio_console: add request_vqs/free_vqs calls
virtio_net: add request_vqs/free_vqs calls
virtio_balloon: add request_vqs/free_vqs calls
virtio_pci: split up vp_interrupt
virtio_pci: optional MSI-X support
drivers/block/virtio_blk.c | 9 ++-
drivers/char/hw_random/virtio-rng.c | 10 ++-
drivers/char/virtio_console.c | 8 ++-
drivers/net/virtio_net.c | 16 +++-
drivers/virtio/virtio_balloon.c | 9 ++-
drivers/virtio/virtio_pci.c | 192 ++++++++++++++++++++++++++++++-----
include/linux/virtio_config.h | 35 +++++++
7 files changed, 247 insertions(+), 32 deletions(-)
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 12:31 [PATCH RFC 0/8] virtio: add guest MSI-X support Michael S. Tsirkin
@ 2009-04-27 14:00 ` Christian Borntraeger
2009-04-27 14:32 ` Michael S. Tsirkin
2009-04-27 15:06 ` Avi Kivity
2009-04-27 14:39 ` Anthony Liguori
1 sibling, 2 replies; 20+ messages in thread
From: Christian Borntraeger @ 2009-04-27 14:00 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Rusty Russell, virtualization, Anthony Liguori, kvm, avi
On Monday 27 April 2009 14:31:36, Michael S. Tsirkin wrote:
> Add optional MSI-X support: use a vector per virtqueue with
> fallback to a common vector and finally to regular interrupt.
> Teach all drivers to use it.
>
> I added 2 new virtio operations: request_vqs/free_vqs because MSI
> needs to know the total number of vectors upfront.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
I don't know if that is feasible for MSI, but the transport (virtio_pci) should
already know the number of virtqueues, which should match the number of
vectors, no?
In fact, the transport has to have a way of getting the number of virtqueues,
because find_vq returns ENOENT on invalid index numbers.
Christian
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 14:00 ` Christian Borntraeger
@ 2009-04-27 14:32 ` Michael S. Tsirkin
2009-04-27 15:37 ` Christian Borntraeger
2009-04-27 15:06 ` Avi Kivity
1 sibling, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2009-04-27 14:32 UTC (permalink / raw)
To: Christian Borntraeger
Cc: Rusty Russell, virtualization, Anthony Liguori, kvm, avi
On Mon, Apr 27, 2009 at 04:00:30PM +0200, Christian Borntraeger wrote:
> On Monday 27 April 2009 14:31:36, Michael S. Tsirkin wrote:
> > Add optional MSI-X support: use a vector per virtqueue with
> > fallback to a common vector and finally to regular interrupt.
> > Teach all drivers to use it.
> >
> > I added 2 new virtio operations: request_vqs/free_vqs because MSI
> > needs to know the total number of vectors upfront.
> >
> > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>
> I don't know if that is feasible for MSI, but the transport (virtio_pci) should
> already know the number of virtqueues, which should match the number of
> vectors, no?
I think not: the transport can find out the maximum number of vectors the
device supports (pci_msix_table_size), but not how many virtqueues are
needed by the driver.
As the number of virtqueues is at most the number of vectors,
we could pre-allocate all the vectors the host supports, but this seems
a bit drastic, as an MSI-X device can support up to 2K vectors.
> In fact, the transport has to have a way of getting the number of virtqueues
> because find_vq returns ENOENT on invalid index numbers.
>
> Christian
So again, I think this is an upper bound supported by the host. Right?
--
MST
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 12:31 [PATCH RFC 0/8] virtio: add guest MSI-X support Michael S. Tsirkin
2009-04-27 14:00 ` Christian Borntraeger
@ 2009-04-27 14:39 ` Anthony Liguori
2009-04-27 17:42 ` Michael S. Tsirkin
1 sibling, 1 reply; 20+ messages in thread
From: Anthony Liguori @ 2009-04-27 14:39 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: Rusty Russell, virtualization, kvm, avi
Michael S. Tsirkin wrote:
> Add optional MSI-X support: use a vector per virtqueue with
> fallback to a common vector and finally to regular interrupt.
> Teach all drivers to use it.
>
> I added 2 new virtio operations: request_vqs/free_vqs because MSI
> needs to know the total number of vectors upfront.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>
Do you have userspace patches for testing?
Regards,
Anthony Liguori
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 14:00 ` Christian Borntraeger
2009-04-27 14:32 ` Michael S. Tsirkin
@ 2009-04-27 15:06 ` Avi Kivity
2009-04-27 15:39 ` Michael S. Tsirkin
1 sibling, 1 reply; 20+ messages in thread
From: Avi Kivity @ 2009-04-27 15:06 UTC (permalink / raw)
To: Christian Borntraeger
Cc: virtualization, kvm, Anthony Liguori, Michael S. Tsirkin
Christian Borntraeger wrote:
> On Monday 27 April 2009 14:31:36, Michael S. Tsirkin wrote:
>
>> Add optional MSI-X support: use a vector per virtqueue with
>> fallback to a common vector and finally to regular interrupt.
>> Teach all drivers to use it.
>>
>> I added 2 new virtio operations: request_vqs/free_vqs because MSI
>> needs to know the total number of vectors upfront.
>>
>> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>>
>
> I don't know if that is feasible for MSI, but the transport (virtio_pci) should
> already know the number of virtqueues, which should match the number of
> vectors, no?
>
> In fact, the transport has to have a way of getting the number of virtqueues
> because find_vq returns ENOENT on invalid index numbers.
>
One thing I thought of was to decouple the virtqueue count from the MSI
entry count, and let the guest choose how many MSI entries it wants to
use. This is useful if it wants several queues to share an interrupt;
perhaps it wants both the rx and tx rings on one cpu to share a single vector.
So the device could expose a large, constant number of MSI entries, and
in addition expose a table mapping ring number to MSI entry number. The
guest could point all rings to one entry, give each ring a
private entry, or anything in between.
That saves us the new API (at the expense of a lot more code, but with
added flexibility).
--
error compiling committee.c: too many arguments to function
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 14:32 ` Michael S. Tsirkin
@ 2009-04-27 15:37 ` Christian Borntraeger
2009-04-27 17:17 ` Michael S. Tsirkin
0 siblings, 1 reply; 20+ messages in thread
From: Christian Borntraeger @ 2009-04-27 15:37 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Rusty Russell, virtualization, Anthony Liguori, kvm, avi
On Monday 27 April 2009 16:32:56, Michael S. Tsirkin wrote:
> > I don't know if that is feasible for MSI, but the transport (virtio_pci) should
> > already know the number of virtqueues, which should match the number of
> > vectors, no?
>
> I think no, the transport can find out the max number of vectors the
> device supports (pci_msix_table_size), but not how many virtqueues are
> needed by the driver.
I was not talking about the number of vectors, but about the number of really existing virtqueues.
I just read virtio_pci again, and yes, it would be a bit hard to get the number
of available virtqueues, because we would need to loop over all possible vqs:
(something like)
	for (i = 0; i < SOMEMAX; i++) {
		iowrite16(i, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
		if (!ioread16(vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NUM))
			break;
	}
	return i;
which is obviously not very nice.
On the other hand, at the moment we have up to 3 virtqueues per device (network), and qemu currently has a maximum of 16 virtqueues (VIRTIO_PCI_QUEUE_MAX in virtio.c), which would bound the loop.
> As number of virtqueues <= number of vectors,
> we could pre-allocate all vectors that host supports, but this seems
> a bit drastic as an MSI-X device could support up to 2K vectors.
>
> > In fact, the transport has to have a way of getting the number of virtqueues
> > because find_vq returns ENOENT on invalid index numbers.
> >
> > Christian
>
> So again, I think this is an upper bound supported by host. Right?
Not the upper bound, but the really available virtqueues. (With current qemu:
3 for virtio-net, 2 for virtio-console, etc.)
Since I use a different transport (drivers/s390/kvm/kvm_virtio.c), my motivation is to keep the virtio interface as generic as possible. I don't really like the new interface, but I cannot give you silver-bullet technical reasons - it's more a gut feeling. The interface would work with lguest and s390.
Anyway, Avi's suggestion to decouple the MSI count and virtqueue count looks
like a promising approach.
Christian
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 15:06 ` Avi Kivity
@ 2009-04-27 15:39 ` Michael S. Tsirkin
2009-04-27 16:59 ` Christian Borntraeger
2009-04-28 6:47 ` Avi Kivity
0 siblings, 2 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2009-04-27 15:39 UTC (permalink / raw)
To: Avi Kivity
Cc: Christian Borntraeger, Rusty Russell, virtualization,
Anthony Liguori, kvm
On Mon, Apr 27, 2009 at 06:06:48PM +0300, Avi Kivity wrote:
> Christian Borntraeger wrote:
>> On Monday 27 April 2009 14:31:36, Michael S. Tsirkin wrote:
>>
>>> Add optional MSI-X support: use a vector per virtqueue with
>>> fallback to a common vector and finally to regular interrupt.
>>> Teach all drivers to use it.
>>>
>>> I added 2 new virtio operations: request_vqs/free_vqs because MSI
>>> needs to know the total number of vectors upfront.
>>>
>>> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>>>
>>
>> I don't know if that is feasible for MSI, but the transport (virtio_pci) should
>> already know the number of virtqueues, which should match the number of
>> vectors, no?
>>
>> In fact, the transport has to have a way of getting the number of virtqueues
>> because find_vq returns ENOENT on invalid index numbers.
>>
>
> One thing I thought was to decouple the virtqueue count from the MSI
> entry count, and let the guest choose how many MSI entries it wants to
> use. This is useful if it wants several queues to share an interrupt,
> perhaps it wants both rx and tx rings on one cpu to share a single
> vector.
>
> So the device could expose a large, constant number of MSI entries, and
> in addition expose a table mapping ring number to msi entry number. The
> guest could point all rings to one entry, or have each ring use a
> private entry, or anything in between.
Sounds good ... and it might not be a lot of code I think.
> That saves us the new API (at the expense of a lot more code, but with
> added flexibility).
So we'll probably need to rename request_vqs to request_vectors,
but we probably still need the driver to pass the number of
vectors it wants to the transport. Right?
--
MST
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 15:39 ` Michael S. Tsirkin
@ 2009-04-27 16:59 ` Christian Borntraeger
2009-04-27 17:19 ` Michael S. Tsirkin
2009-04-28 6:47 ` Avi Kivity
1 sibling, 1 reply; 20+ messages in thread
From: Christian Borntraeger @ 2009-04-27 16:59 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Avi Kivity, Rusty Russell, virtualization, Anthony Liguori, kvm
On Monday 27 April 2009 17:39:36, Michael S. Tsirkin wrote:
> So we'll probably need to rename request_vqs to request_vectors,
> but we probably still need the driver to pass the number of
> vectors it wants to the transport. Right?
This might be a stupid idea, but would something like the following
be sufficient for you?
Index: linux-2.6/drivers/net/virtio_net.c
===================================================================
--- linux-2.6.orig/drivers/net/virtio_net.c
+++ linux-2.6/drivers/net/virtio_net.c
@@ -1021,6 +1021,7 @@ static unsigned int features[] = {
static struct virtio_driver virtio_net = {
.feature_table = features,
.feature_table_size = ARRAY_SIZE(features),
+ .num_vq_max = 3,
.driver.name = KBUILD_MODNAME,
.driver.owner = THIS_MODULE,
.id_table = id_table,
Index: linux-2.6/include/linux/virtio.h
===================================================================
--- linux-2.6.orig/include/linux/virtio.h
+++ linux-2.6/include/linux/virtio.h
@@ -110,6 +110,7 @@ struct virtio_driver {
const struct virtio_device_id *id_table;
const unsigned int *feature_table;
unsigned int feature_table_size;
+ unsigned int num_vq_max;
int (*probe)(struct virtio_device *dev);
void (*remove)(struct virtio_device *dev);
void (*config_changed)(struct virtio_device *dev);
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 15:37 ` Christian Borntraeger
@ 2009-04-27 17:17 ` Michael S. Tsirkin
0 siblings, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2009-04-27 17:17 UTC (permalink / raw)
To: Christian Borntraeger
Cc: Rusty Russell, virtualization, Anthony Liguori, kvm, avi
On Mon, Apr 27, 2009 at 05:37:25PM +0200, Christian Borntraeger wrote:
> > As number of virtqueues <= number of vectors,
> > we could pre-allocate all vectors that host supports, but this seems
> > a bit drastic as an MSI-X device could support up to 2K vectors.
> >
> > > In fact, the transport has to have a way of getting the number of virtqueues
> > > because find_vq returns ENOENT on invalid index numbers.
> > >
> > > Christian
> >
> > So again, I think this is an upper bound supported by host. Right?
>
> Not the upper bound, but the real available virtqueues. (With current qemu
> 3 for virtio-net, 2 for virtio-console etc.)
Here's what I mean by upper bound: the guest and host numbers of
virtqueues might be different: the host might support a
control virtqueue, as in virtio-net, but the guest might be an older
version and not use it.
> Since I use a different transport (drivers/s390/kvm/kvm_virtio.c), my
> motivation is to keep the virtio interface as generic as possible. I
> don't really like the new interface, but I cannot give you silver-bullet
> technical reasons - it's more a gut feeling. The interface would
> work with lguest and s390.
>
> Anyway, Avi's suggestion to decouple the MSI count and virtqueue count looks
> like a promising approach.
So generally, we add request_vectors, which gives us the number of
available MSI vectors, and then find_vq gets a vector #?
Does this sound better?
--
MST
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 16:59 ` Christian Borntraeger
@ 2009-04-27 17:19 ` Michael S. Tsirkin
0 siblings, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2009-04-27 17:19 UTC (permalink / raw)
To: Christian Borntraeger
Cc: Avi Kivity, Rusty Russell, virtualization, Anthony Liguori, kvm
On Mon, Apr 27, 2009 at 06:59:52PM +0200, Christian Borntraeger wrote:
> On Monday 27 April 2009 17:39:36, Michael S. Tsirkin wrote:
> > So we'll probably need to rename request_vqs to request_vectors,
> > but we probably still need the driver to pass the number of
> > vectors it wants to the transport. Right?
>
> This might be a stupid idea, but would something like the following
> be sufficient for you?
Yes.
> Index: linux-2.6/drivers/net/virtio_net.c
> ===================================================================
> --- linux-2.6.orig/drivers/net/virtio_net.c
> +++ linux-2.6/drivers/net/virtio_net.c
> @@ -1021,6 +1021,7 @@ static unsigned int features[] = {
> static struct virtio_driver virtio_net = {
> .feature_table = features,
> .feature_table_size = ARRAY_SIZE(features),
> + .num_vq_max = 3,
> .driver.name = KBUILD_MODNAME,
> .driver.owner = THIS_MODULE,
> .id_table = id_table,
> Index: linux-2.6/include/linux/virtio.h
> ===================================================================
> --- linux-2.6.orig/include/linux/virtio.h
> +++ linux-2.6/include/linux/virtio.h
> @@ -110,6 +110,7 @@ struct virtio_driver {
> const struct virtio_device_id *id_table;
> const unsigned int *feature_table;
> unsigned int feature_table_size;
> + unsigned int num_vq_max;
> int (*probe)(struct virtio_device *dev);
> void (*remove)(struct virtio_device *dev);
> void (*config_changed)(struct virtio_device *dev);
Good idea.
--
MST
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 14:39 ` Anthony Liguori
@ 2009-04-27 17:42 ` Michael S. Tsirkin
0 siblings, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2009-04-27 17:42 UTC (permalink / raw)
To: Anthony Liguori; +Cc: Rusty Russell, virtualization, kvm, avi
On Mon, Apr 27, 2009 at 09:39:06AM -0500, Anthony Liguori wrote:
> Michael S. Tsirkin wrote:
>> Add optional MSI-X support: use a vector per virtqueue with
>> fallback to a common vector and finally to regular interrupt.
>> Teach all drivers to use it.
>>
>> I added 2 new virtio operations: request_vqs/free_vqs because MSI
>> needs to know the total number of vectors upfront.
>>
>> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>>
>
> Do you have userspace patches for testing?
>
> Regards,
>
> Anthony Liguori
Not ready yet, sorry.
--
MST
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-27 15:39 ` Michael S. Tsirkin
2009-04-27 16:59 ` Christian Borntraeger
@ 2009-04-28 6:47 ` Avi Kivity
2009-04-28 17:41 ` Michael S. Tsirkin
1 sibling, 1 reply; 20+ messages in thread
From: Avi Kivity @ 2009-04-28 6:47 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Christian Borntraeger, Rusty Russell, virtualization,
Anthony Liguori, kvm
Michael S. Tsirkin wrote:
>> That saves us the new API (at the expense of a lot more code, but with
>> added flexibility).
>>
>
> So we'll probably need to rename request_vqs to request_vectors,
> but we probably still need the driver to pass the number of
> vectors it wants to the transport. Right?
>
>
I don't think so - virtio will provide the number of interrupts it
supports, and virtio-net will tell it to bind ring X to interrupt Y.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-28 6:47 ` Avi Kivity
@ 2009-04-28 17:41 ` Michael S. Tsirkin
2009-04-28 17:51 ` Avi Kivity
0 siblings, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2009-04-28 17:41 UTC (permalink / raw)
To: Avi Kivity
Cc: Christian Borntraeger, Rusty Russell, virtualization,
Anthony Liguori, kvm
On Tue, Apr 28, 2009 at 09:47:14AM +0300, Avi Kivity wrote:
> Michael S. Tsirkin wrote:
>>> That saves us the new API (at the expense of a lot more code, but
>>> with added flexibility).
>>>
>>
>> So we'll probably need to rename request_vqs to request_vectors,
>> but we probably still need the driver to pass the number of
>> vectors it wants to the transport. Right?
>>
>>
>
> I don't think so - virtio will provide the number of interrupts it
> supports, and virtio-net will tell it to bind ring X to interrupt Y.
This does not work for MSI-X - in Linux, you must map all MSI-X entries
to interrupt vectors upfront.
So what I see is transports providing something like:
struct virtio_interrupt_mapping {
	int virtqueue;
	int interrupt;
};

map_vqs_to_interrupt(dev, struct virtio_interrupt_mapping *, int nvirtqueues);
unmap_vqs(dev);
--
MST
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-28 17:41 ` Michael S. Tsirkin
@ 2009-04-28 17:51 ` Avi Kivity
2009-04-28 18:02 ` Michael S. Tsirkin
0 siblings, 1 reply; 20+ messages in thread
From: Avi Kivity @ 2009-04-28 17:51 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Christian Borntraeger, Rusty Russell, virtualization,
Anthony Liguori, kvm
Michael S. Tsirkin wrote:
> This does not work for MSIX - in linux, you must map all MSI-X entries
> to interrupt vectors upfront.
>
What? That's very inflexible.
Can you point me at the code?
> So what I see is transports providing something like:
>
> struct virtio_interrupt_mapping {
> int virtqueue;
> int interrupt;
> };
>
> map_vqs_to_interrupt(dev, struct virtio_interrupt_mapping *, int nvirtqueues);
> unmap_vqs(dev);
>
Isn't that the same thing? Please explain the flow.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-28 17:51 ` Avi Kivity
@ 2009-04-28 18:02 ` Michael S. Tsirkin
2009-04-28 19:56 ` Anthony Liguori
2009-05-04 9:21 ` Avi Kivity
0 siblings, 2 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2009-04-28 18:02 UTC (permalink / raw)
To: Avi Kivity
Cc: Christian Borntraeger, Rusty Russell, virtualization,
Anthony Liguori, kvm
On Tue, Apr 28, 2009 at 08:51:08PM +0300, Avi Kivity wrote:
> Michael S. Tsirkin wrote:
>> This does not work for MSIX - in linux, you must map all MSI-X entries
>> to interrupt vectors upfront.
>>
>
> What? that's very inflexible.
>
> Can you point me at the code?
See pci_enable_msix in include/linux/pci.h
>> So what I see is transports providing something like:
>>
>> struct virtio_interrupt_mapping {
>> int virtqueue;
>> int interrupt;
>> };
>>
>> map_vqs_to_interrupt(dev, struct virtio_interrupt_mapping *, int nvirtqueues);
>> unmap_vqs(dev);
>>
>
> Isn't that the same thing? Please explain the flow.
So to map vq 0 to vector 0, vq 1 to vector 1, and vq 2 to vector 2, the driver would do:
	struct virtio_interrupt_mapping mapping[3] = { {0, 0}, {1, 1}, {2, 2} };
	vec = map_vqs_to_interrupt(dev, mapping, 3);
	if (vec) {
		/* error handling */
	}
and then find_vq as usual.
--
MST
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-28 18:02 ` Michael S. Tsirkin
@ 2009-04-28 19:56 ` Anthony Liguori
2009-04-28 21:01 ` Michael S. Tsirkin
2009-05-04 9:21 ` Avi Kivity
1 sibling, 1 reply; 20+ messages in thread
From: Anthony Liguori @ 2009-04-28 19:56 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Avi Kivity, Christian Borntraeger, Rusty Russell, virtualization,
kvm
Michael S. Tsirkin wrote:
> So to map vq 0 to vector 0, vq 1 to vector 1 and vq 2 to vector 2 the driver would do:
>
> struct virtio_interrupt_mapping mapping[3] = { {0, 0}, {1, 1}, {2, 2} };
> vec = map_vqs_to_interrupt(dev, mapping, 3);
> if (vec) {
> error handling
> }
>
> and then find_vq as usual.
>
Is it possible to just delay the MSI-X enablement until after the queues
have been finalized (IOW in virtio-pci.c:vp_finalize_features)?
Regards,
Anthony Liguori
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-28 19:56 ` Anthony Liguori
@ 2009-04-28 21:01 ` Michael S. Tsirkin
0 siblings, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2009-04-28 21:01 UTC (permalink / raw)
To: Anthony Liguori; +Cc: Christian Borntraeger, Avi Kivity, kvm, virtualization
On Tue, Apr 28, 2009 at 02:56:15PM -0500, Anthony Liguori wrote:
> Michael S. Tsirkin wrote:
>> So to map vq 0 to vector 0, vq 1 to vector 1 and vq 2 to vector 2 the driver would do:
>>
>> struct virtio_interrupt_mapping mapping[3] = { {0, 0}, {1, 1}, {2, 2} };
>> vec = map_vqs_to_interrupt(dev, mapping, 3);
>> if (vec) {
>> error handling
>> }
>>
>> and then find_vq as usual.
>>
>
> Is it possible to just delay the msix enablement until after the queues
> have been finalized (IOW in virtio-pci.c:vp_finalize_features)?
Yes, but
1. vp_finalize_features returns void, while enabling the requested
number of interrupts might fail.
2. we don't know in advance (without trying) whether we'll be able
to allocate a specific number of vectors.
So if we want to give drivers the flexibility to assign queues to
vectors, the driver must request vectors, we'll try to allocate them,
and the driver will decide what to do on failure.
After thinking about it, request_vqs was not a bad API for virtio - good
enough for the existing drivers. And the PCI space can be made future-proof
by allowing mapping of virtqueues to vectors in case we extend the API later.
Thoughts?
--
MST
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-04-28 18:02 ` Michael S. Tsirkin
2009-04-28 19:56 ` Anthony Liguori
@ 2009-05-04 9:21 ` Avi Kivity
2009-05-04 11:54 ` Michael S. Tsirkin
1 sibling, 1 reply; 20+ messages in thread
From: Avi Kivity @ 2009-05-04 9:21 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Christian Borntraeger, Rusty Russell, virtualization,
Anthony Liguori, kvm
Michael S. Tsirkin wrote:
>>> So what I see is transports providing something like:
>>>
>>> struct virtio_interrupt_mapping {
>>> int virtqueue;
>>> int interrupt;
>>> };
>>>
>>> map_vqs_to_interrupt(dev, struct virtio_interrupt_mapping *, int nvirtqueues);
>>> unmap_vqs(dev);
>>>
>>>
>> Isn't that the same thing? Please explain the flow.
>>
>
> So to map vq 0 to vector 0, vq 1 to vector 1 and vq 2 to vector 2 the driver would do:
>
> struct virtio_interrupt_mapping mapping[3] = { {0, 0}, {1, 1}, {2, 2} };
> vec = map_vqs_to_interrupt(dev, mapping, 3);
> if (vec) {
> error handling
> }
>
> and then find_vq as usual.
>
Yes, that works.
Given that pci_enable_msix() can fail, we can put the retry loop in
virtio-pci, and instead of a static mapping, supply a dynamic mapping:
static int get_vq_interrupt(..., int nr_interrupts, int vq)
{
	/* reserve interrupt 0 for config changes; round-robin vqs to
	   interrupts */
	return 1 + (vq % (nr_interrupts - 1));
}

driver_init()
{
	map_vqs_to_interrupt(dev, get_vq_interrupt);
}
map_vqs_to_interrupts() would call get_vq_interrupt() for each vq,
assuming the maximum nr_interrupts, and retry with smaller nr_interrupts
on failure.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-05-04 9:21 ` Avi Kivity
@ 2009-05-04 11:54 ` Michael S. Tsirkin
2009-05-04 11:57 ` Avi Kivity
0 siblings, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2009-05-04 11:54 UTC (permalink / raw)
To: Avi Kivity
Cc: Christian Borntraeger, Rusty Russell, virtualization,
Anthony Liguori, kvm
On Mon, May 04, 2009 at 12:21:28PM +0300, Avi Kivity wrote:
> Michael S. Tsirkin wrote:
>>>> So what I see is transports providing something like:
>>>>
>>>> struct virtio_interrupt_mapping {
>>>> int virtqueue;
>>>> int interrupt;
>>>> };
>>>>
>>>> map_vqs_to_interrupt(dev, struct virtio_interrupt_mapping *, int nvirtqueues);
>>>> unmap_vqs(dev);
>>>>
>>> Isn't that the same thing? Please explain the flow.
>>>
>>
>> So to map vq 0 to vector 0, vq 1 to vector 1 and vq 2 to vector 2 the driver would do:
>>
>> struct virtio_interrupt_mapping mapping[3] = { {0, 0}, {1, 1}, {2, 2} };
>> vec = map_vqs_to_interrupt(dev, mapping, 3);
>> if (vec) {
>> error handling
>> }
>>
>> and then find_vq as usual.
>>
>
> Yes, that works.
>
> Given that pci_enable_msix() can fail, we can put the retry loop in
> virtio-pci, and instead of a static mapping, supply a dynamic mapping:
>
> static int get_vq_interrupt(..., int nr_interrupts, int vq)
> {
> /* reserve interrupt 0 to config changes; round-robin vqs to
> interrupts */
> return 1 + (vq % (nr_interrupts - 1));
> }
>
> driver_init()
> {
> map_vqs_to_interrupt(dev, get_vq_interrupt);
> }
>
> map_vqs_to_interrupts() would call get_vq_interrupt() for each vq,
> assuming the maximum nr_interrupts, and retry with smaller nr_interrupts
> on failure.
Since guest drivers are going to do round-robin most of the time, I
think the right thing to do is to make the API simple, along the lines
proposed by Rusty, and make the guest/host ABI rich enough to support
arbitrary mappings, along the lines proposed by you. We can always change
the API; the ABI is harder.
--
MST
* Re: [PATCH RFC 0/8] virtio: add guest MSI-X support
2009-05-04 11:54 ` Michael S. Tsirkin
@ 2009-05-04 11:57 ` Avi Kivity
0 siblings, 0 replies; 20+ messages in thread
From: Avi Kivity @ 2009-05-04 11:57 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Christian Borntraeger, Rusty Russell, virtualization,
Anthony Liguori, kvm
Michael S. Tsirkin wrote:
> On Mon, May 04, 2009 at 12:21:28PM +0300, Avi Kivity wrote:
>
>> Michael S. Tsirkin wrote:
>>
>>>>> So what I see is transports providing something like:
>>>>>
>>>>> struct virtio_interrupt_mapping {
>>>>> int virtqueue;
>>>>> int interrupt;
>>>>> };
>>>>>
>>>>> map_vqs_to_interrupt(dev, struct virtio_interrupt_mapping *, int nvirtqueues);
>>>>> unmap_vqs(dev);
>>>>>
>>>>>
>>>> Isn't that the same thing? Please explain the flow.
>>>>
>>>>
>>> So to map vq 0 to vector 0, vq 1 to vector 1 and vq 2 to vector 2 the driver would do:
>>>
>>> struct virtio_interrupt_mapping mapping[3] = { {0, 0}, {1, 1}, {2, 2} };
>>> vec = map_vqs_to_interrupt(dev, mapping, 3);
>>> if (vec) {
>>> error handling
>>> }
>>>
>>> and then find_vq as usual.
>>>
>>>
>> Yes, that works.
>>
>> Given that pci_enable_msix() can fail, we can put the retry loop in
>> virtio-pci, and instead of a static mapping, supply a dynamic mapping:
>>
>> static int get_vq_interrupt(..., int nr_interrupts, int vq)
>> {
>> /* reserve interrupt 0 to config changes; round-robin vqs to
>> interrupts */
>> return 1 + (vq % (nr_interrupts - 1));
>> }
>>
>> driver_init()
>> {
>> map_vqs_to_interrupt(dev, get_vq_interrupt);
>> }
>>
>> map_vqs_to_interrupts() would call get_vq_interrupt() for each vq,
>> assuming the maximum nr_interrupts, and retry with smaller nr_interrupts
>> on failure.
>>
>
> Since guest drivers are going to do round-robin most of the time, I
> think the right thing to do is to make the API simple, along the lines
> proposed by Rusty, and make the guest/host ABI rich enough to support
> arbitrary mapping, along the lines proposed by you. We can always change
> the API, ABI is harder.
>
We can provide the round-robin mapper as a helper, so driver code
doesn't need to implement any callback if it's satisfied with the
default.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.