virtualization.lists.linux-foundation.org archive mirror
From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: Re: [RFC PATCH 0/3] Sharing MSIX irq for tx/rx queue pairs
Date: Mon, 05 Jan 2015 11:09:42 +0800
Message-ID: <54AA0076.9030406@redhat.com>
In-Reply-To: <20150104113613.GC4336@redhat.com>


On 01/04/2015 07:36 PM, Michael S. Tsirkin wrote:
> On Sun, Jan 04, 2015 at 04:38:17PM +0800, Jason Wang wrote:
>> On 12/28/2014 03:52 PM, Michael S. Tsirkin wrote:
>>> On Fri, Dec 26, 2014 at 10:53:42AM +0800, Jason Wang wrote:
>>>> Hi all:
>>>>
>>>> This series tries to share one MSIX irq between each tx/rx queue
>>>> pair. This is done through:
>>>>
>>>> - introducing the virtio pci channel, a group of virtqueues that
>>>>   share a single MSIX irq (Patch 1)
>>>> - exposing the channel setting through the virtio core api (Patch 2)
>>>> - using the channel setting in virtio-net (Patch 3)
>>>>
>>>> For transports that do not support channels, the channel parameters
>>>> are simply ignored. Devices that do not use channels can simply pass
>>>> NULL or zero to the virtio core.
>>>>
>>>> With the patches, one MSIX irq is saved for each TX/RX queue pair.
>>>>
>>>> Please review.
>>> How does this sharing affect performance?
>>>
>> Patch 3 only checks more_used() for the tx ring, which in fact reduces
>> the effect of the event index and may introduce more tx interrupts.
>> After fixing this issue, I tested with 1 vcpu and 1 queue; no obvious
>> changes in performance were noticed.
>>
>> Thanks
> Is this with or without MQ?

Without MQ. 1 vcpu and 1 queue were used.
> With MQ, it seems easy to believe as interrupts are
> distributed between CPUs.
>
> Without MQ, it should be possible to create UDP workloads where
> processing incoming and outgoing interrupts
> on separate CPUs is a win.

Not sure. Processing on separate CPUs may only win when the system is
not busy. If we process a single flow on two cpus, it may lead to extra
lock contention and poor cache utilization.
And if we really want to distribute the load, RPS/RFS could be used.
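As a sketch of that alternative, RPS/RFS can be configured per rx queue
through sysfs (paths are from Documentation/networking/scaling.txt; the
device name eth0 and the CPU mask are placeholders for illustration):

```shell
# Steer rx-0 packet processing onto CPUs 0-3 (bitmask 0xf)
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

# For RFS, size the global socket flow table and the per-queue table
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 2048 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
```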


Thread overview: 11+ messages
2014-12-26  2:53 [RFC PATCH 0/3] Sharing MSIX irq for tx/rx queue pairs Jason Wang
2014-12-26  2:53 ` [RFC PATCH 1/3] virtio-pci: introduce channels Jason Wang
2014-12-26  2:53 ` [RFC PATCH 2/3] virtio: let vp_find_vqs accept channel setting paramters Jason Wang
2014-12-26  2:53 ` [RFC PATCH 3/3] virtio-net: using single MSIX irq for each TX/RX queue pair Jason Wang
2014-12-26 10:17   ` Jason Wang
2014-12-28  7:52 ` [RFC PATCH 0/3] Sharing MSIX irq for tx/rx queue pairs Michael S. Tsirkin
2015-01-04  8:38   ` Jason Wang
2015-01-04 11:36     ` Michael S. Tsirkin
2015-01-05  3:09       ` Jason Wang [this message]
2015-01-05  1:39 ` Rusty Russell
2015-01-05  5:10   ` Jason Wang
