From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: Ravi Kerur <rkerur@gmail.com>,
netdev@vger.kernel.org, rusty@rustcorp.com.au
Subject: Re: [PATCH v1 1/3] virtio-net: Using single MSIX IRQ for TX/RX Q pair
Date: Wed, 28 Oct 2015 09:21:34 +0200
Message-ID: <20151028091756-mutt-send-email-mst@redhat.com>
In-Reply-To: <56303D63.4040302@redhat.com>
On Wed, Oct 28, 2015 at 11:13:39AM +0800, Jason Wang wrote:
>
>
> On 10/27/2015 04:38 PM, Michael S. Tsirkin wrote:
> > On Mon, Oct 26, 2015 at 10:52:47AM -0700, Ravi Kerur wrote:
> >> Ported from an earlier patch by Jason Wang (dated 12/26/2014).
> >>
> >> This patch tries to reduce the number of MSI-X irqs required by
> >> virtio-net by sharing one MSI-X irq between each TX/RX queue pair
> >> through channels. If the transport supports channels, about half
> >> of the MSI-X irqs are saved.
> >>
> >> Signed-off-by: Ravi Kerur <rkerur@gmail.com>
> > Why bother BTW?
>
> The reason is that we want to reduce the number of interrupt vectors
> used. Booting a guest with 256 queues with the current driver results
> in all tx/rx queues sharing a single vector. This is suboptimal.
With a single CPU? But what configures so many queues? Why do it?
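
For reference, the reason everything ends up on one vector today is the
fallback ladder in the PCI transport - roughly this, simplified, not
the literal virtio_pci_common.c code:

/* Simplified sketch of the existing allocation fallback. */
int vp_find_vqs(struct virtio_device *vdev, unsigned nvqs,
                struct virtqueue *vqs[], vq_callback_t *callbacks[],
                const char *names[])
{
        int err;

        /* Best case: one MSI-X vector per vq. */
        err = vp_try_to_find_vqs(vdev, nvqs, vqs, callbacks, names,
                                 true, true);
        if (!err)
                return 0;
        /* Fallback: two vectors, one for config changes and one
         * shared by all vqs - the "256 queues, one vector" case. */
        err = vp_try_to_find_vqs(vdev, nvqs, vqs, callbacks, names,
                                 true, false);
        if (!err)
                return 0;
        /* Last resort: no MSI-X, plain INTx. */
        return vp_try_to_find_vqs(vdev, nvqs, vqs, callbacks, names,
                                  false, false);
}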
> With this
> series, half could be saved.
At the cost of, e.g., the ability to balance the interrupts across CPUs.
> And a more complex policy could be applied on
> top (e.g. limiting the number of vectors used by the driver).
If that's the motivation, I'd like to see a draft of that more complex
policy first.
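
To be concrete about the shape I'd expect such a draft to take - a
purely hypothetical sketch; neither max_vectors nor vq_to_channel()
exists in the driver today:

/* Hypothetical only: cap the MSI-X vectors virtio-net requests and
 * fold queue pairs onto channels once the cap is reached. */
static unsigned int max_vectors; /* 0 == one vector per queue pair */
module_param(max_vectors, uint, 0444);

static unsigned vq_to_channel(struct virtnet_info *vi, unsigned qp)
{
        if (!max_vectors || vi->max_queue_pairs <= max_vectors)
                return qp;               /* private channel */
        return qp % max_vectors;         /* shared channel */
}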
> > Looks like this is adding a bunch of overhead
> > on the data path - to what end?
>
> I agree, some benchmarking is needed for this.
>
> > Maybe you have a huge number of these devices ... but in that case, how
> > about sharing the config interrupt instead?
> > That's only possible if the host supports VIRTIO_1
> > (so we can detect a config interrupt by reading the ISR).
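
For reference, ISR-based dispatch in the PCI transport looks roughly
like this (a simplified sketch of vp_interrupt(), not verbatim):

static irqreturn_t vp_interrupt(int irq, void *opaque)
{
        struct virtio_pci_device *vp_dev = opaque;
        /* Reading the ISR also acks/clears it, so save the value. */
        u8 isr = ioread8(vp_dev->isr);

        if (!isr)
                return IRQ_NONE;        /* not our interrupt */

        /* Bit 1 set: a configuration change, not (only) a vq kick. */
        if (isr & VIRTIO_PCI_ISR_CONFIG)
                vp_config_changed(irq, opaque);

        return vp_vring_interrupt(irq, opaque);
}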
> >
> >> ---
> >>  drivers/net/virtio_net.c | 29 ++++++++++++++++++++++++++++-
> >>  1 file changed, 28 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >> index d8838ded..d705cce 100644
> >> --- a/drivers/net/virtio_net.c
> >> +++ b/drivers/net/virtio_net.c
> >> @@ -72,6 +72,9 @@ struct send_queue {
> >>
> >>          /* Name of the send queue: output.$index */
> >>          char name[40];
> >> +
> >> +        /* Name of the channel, shared with irq. */
> >> +        char channel_name[40];
> >>  };
> >>
> >>  /* Internal representation of a receive virtqueue */
> >> @@ -1529,6 +1532,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> >>          int ret = -ENOMEM;
> >>          int i, total_vqs;
> >>          const char **names;
> >> +        const char **channel_names;
> >> +        unsigned *channels;
> >>
> >>          /* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
> >>           * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by
> >> @@ -1548,6 +1553,17 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> >>          if (!names)
> >>                  goto err_names;
> >>
> >> +        channel_names = kmalloc_array(vi->max_queue_pairs,
> >> +                                      sizeof(*channel_names),
> >> +                                      GFP_KERNEL);
> >> +        if (!channel_names)
> >> +                goto err_channel_names;
> >> +
> >> +        channels = kmalloc_array(total_vqs, sizeof(*channels),
> >> +                                 GFP_KERNEL);
> >> +        if (!channels)
> >> +                goto err_channels;
> >> +
> >>          /* Parameters for control virtqueue, if any */
> >>          if (vi->has_cvq) {
> >>                  callbacks[total_vqs - 1] = NULL;
> >> @@ -1562,10 +1578,15 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> >>                  sprintf(vi->sq[i].name, "output.%d", i);
> >>                  names[rxq2vq(i)] = vi->rq[i].name;
> >>                  names[txq2vq(i)] = vi->sq[i].name;
> >> +                sprintf(vi->sq[i].channel_name, "txrx.%d", i);
> >> +                channel_names[i] = vi->sq[i].channel_name;
> >> +                channels[rxq2vq(i)] = i;
> >> +                channels[txq2vq(i)] = i;
> >>          }
> >>
> >>          ret = vi->vdev->config->find_vqs(vi->vdev, total_vqs, vqs, callbacks,
> >> -                                         names);
> >> +                                         names, channels, channel_names,
> >> +                                         vi->max_queue_pairs);
> >>          if (ret)
> >>                  goto err_find;
> >>
> >> @@ -1580,6 +1601,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> >>                  vi->sq[i].vq = vqs[txq2vq(i)];
> >>          }
> >>
> >> +        kfree(channels);
> >> +        kfree(channel_names);
> >>          kfree(names);
> >>          kfree(callbacks);
> >>          kfree(vqs);
> >> @@ -1587,6 +1610,10 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> >>          return 0;
> >>
> >>  err_find:
> >> +        kfree(channels);
> >> +err_channels:
> >> +        kfree(channel_names);
> >> +err_channel_names:
> >>          kfree(names);
> >>  err_names:
> >>          kfree(callbacks);
> >> --
> >> 1.9.1
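
For readers without patches 2-3 at hand: the find_vqs() call above
implies the op grows roughly this shape (inferred from the call site
here; the real prototype is added by patch 2/3):

int (*find_vqs)(struct virtio_device *vdev, unsigned nvqs,
                struct virtqueue *vqs[],
                vq_callback_t *callbacks[],
                const char *names[],
                unsigned channels[],            /* vq index -> channel */
                const char *channel_names[],
                unsigned nchannels);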