netdev.vger.kernel.org archive mirror
From: Avi Kivity <avi@redhat.com>
To: Krishna Kumar2 <krkumar2@in.ibm.com>
Cc: anthony@codemonkey.ws, davem@davemloft.net, kvm@vger.kernel.org,
	mst@redhat.com, netdev@vger.kernel.org, rusty@rustcorp.com.au
Subject: Re: [RFC PATCH 0/4] Implement multiqueue virtio-net
Date: Wed, 08 Sep 2010 12:28:21 +0300	[thread overview]
Message-ID: <4C875735.9050808@redhat.com> (raw)
In-Reply-To: <OF2EF80349.03D44EF4-ON65257798.002D0021-65257798.003352A5@in.ibm.com>

  On 09/08/2010 12:22 PM, Krishna Kumar2 wrote:
> Avi Kivity<avi@redhat.com>  wrote on 09/08/2010 01:17:34 PM:
>
>>    On 09/08/2010 10:28 AM, Krishna Kumar wrote:
>>> Following patches implement Transmit mq in virtio-net.  Also
>>> included is the user qemu changes.
>>>
>>> 1. This feature was first implemented with a single vhost.
>>>      Testing showed 3-8% performance gain for up to 8 netperf
>>>      sessions (and sometimes 16), but BW dropped with more
>>>      sessions.  However, implementing per-txq vhost improved
>>>      BW significantly all the way to 128 sessions.
>> Why were vhost kernel changes required?  Can't you just instantiate more
>> vhost queues?
> I did try using a single thread processing packets from multiple
> vq's on host, but the BW dropped beyond a certain number of
> sessions.

Oh - so the interface has not changed (which can be seen from the 
patch).  That was my concern; I remembered that we had planned for 
vhost-net to be multiqueue-ready.

The new guest and qemu code work with old vhost-net, just with reduced 
performance, yes?

> I don't have the code and performance numbers for that
> right now since it is a bit ancient, I can try to resuscitate
> that if you want.

No need.

>>> Guest interrupts for a 4 TXQ device after a 5 min test:
>>> # egrep "virtio0|CPU" /proc/interrupts
>>>              CPU0     CPU1     CPU2     CPU3
>>> 40:             0        0        0        0   PCI-MSI-edge  virtio0-config
>>> 41:        126955   126912   126505   126940   PCI-MSI-edge  virtio0-input
>>> 42:        108583   107787   107853   107716   PCI-MSI-edge  virtio0-output.0
>>> 43:        300278   297653   299378   300554   PCI-MSI-edge  virtio0-output.1
>>> 44:        372607   374884   371092   372011   PCI-MSI-edge  virtio0-output.2
>>> 45:        162042   162261   163623   162923   PCI-MSI-edge  virtio0-output.3
>> How are vhost threads and host interrupts distributed?  We need to move
>> vhost queue threads to be colocated with the related vcpu threads (if no
>> extra cores are available) or on the same socket (if extra cores are
>> available).  Similarly, move device interrupts to the same core as the
>> vhost thread.
> All my testing was without any tuning, including binding netperf &
> netserver (irqbalance is also off). I assume (maybe wrongly) that
> the above might give better results?

I hope so!
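As an aside, per-vector listings like the one quoted above are easier to
compare when the counts are totalled across CPUs.  A quick awk filter does
it; this is just a sketch over two sample lines copied from the listing
(field positions assume a 4-CPU guest) - on a live guest you would pipe
`egrep virtio0 /proc/interrupts` through the same filter:

```shell
# Two sample lines from the 4-TXQ listing above (4-CPU guest).
sample='41:   126955   126912   126505  126940   PCI-MSI-edge  virtio0-input
42:   108583   107787   107853  107716   PCI-MSI-edge  virtio0-output.0'

# Fields 2-5 are the per-CPU counts; the last field names the vector.
printf '%s\n' "$sample" | awk '{
    total = 0
    for (i = 2; i <= 5; i++)
        total += $i
    printf "%-18s %d\n", $NF, total
}'
```

Here virtio0-input sums to 507312 and virtio0-output.0 to 431939, which
makes the spread across queues easy to eyeball.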

> Are you suggesting this
> combination:
> 	IRQ on guest:
> 		40: CPU0
> 		41: CPU1
> 		42: CPU2
> 		43: CPU3 (all CPUs are on socket #0)
> 	vhost:
> 		thread #0:  CPU0
> 		thread #1:  CPU1
> 		thread #2:  CPU2
> 		thread #3:  CPU3
> 	qemu:
> 		thread #0:  CPU4
> 		thread #1:  CPU5
> 		thread #2:  CPU6
> 		thread #3:  CPU7 (all CPUs are on socket#1)

It may be better to put the vcpu threads and vhost threads on the same socket.

The host interrupts also need to be affined accordingly.
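For completeness, pinning along those lines could look roughly like the
sketch below.  The PIDs and IRQ numbers are hypothetical placeholders, not
values from this thread: on a real host the vhost thread PIDs would come
from something like `ps -eLo pid,comm | grep vhost`, and the device IRQ
numbers from the host's /proc/interrupts.

```shell
# Helper: turn a CPU number into the hex affinity mask that
# taskset(1) and /proc/irq/*/smp_affinity expect.
cpu_mask() {
    printf '%x' $((1 << $1))
}

# Hypothetical pinning for a 4-queue layout (run as root;
# <vhost-pidN> and the IRQ numbers are placeholders):
#   taskset -p "$(cpu_mask 0)" <vhost-pid0>             # vhost queue 0 -> CPU0
#   taskset -p "$(cpu_mask 1)" <vhost-pid1>             # vhost queue 1 -> CPU1
#   echo "$(cpu_mask 0)" > /proc/irq/<irq0>/smp_affinity  # queue 0 IRQ -> CPU0
#   echo "$(cpu_mask 1)" > /proc/irq/<irq1>/smp_affinity  # queue 1 IRQ -> CPU1

cpu_mask 3; echo    # mask selecting CPU3 only
```

The same masks work for both taskset and the smp_affinity files, which
keeps the vhost thread and its queue's interrupt on the same CPU.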

> 	netperf/netserver:
> 		Run on CPUs 0-4 on both sides
>
> The reason I did not optimize anything from user space is because
> I felt showing the default works reasonably well is important.

Definitely.  Heavy tuning is not a useful path for general end users.  
We need to make sure the scheduler is able to arrive at the optimal 
layout without pinning (but perhaps with hints).

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

