netdev.vger.kernel.org archive mirror
From: Andrew Bennieston <andrew.bennieston@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: <xen-devel@lists.xenproject.org>, <ian.campbell@citrix.com>,
	<paul.durrant@citrix.com>, <netdev@vger.kernel.org>
Subject: Re: [PATCH V2 net-next 0/5] xen-net{back,front}: Multiple transmit and receive queues
Date: Fri, 14 Feb 2014 14:53:48 +0000	[thread overview]
Message-ID: <52FE2DFC.8050702@citrix.com> (raw)
In-Reply-To: <20140214140635.GA18398@zion.uk.xensource.com>

On 14/02/14 14:06, Wei Liu wrote:
> On Fri, Feb 14, 2014 at 11:50:19AM +0000, Andrew J. Bennieston wrote:
>>
>> This patch series implements multiple transmit and receive queues (i.e.
>> multiple shared rings) for the xen virtual network interfaces.
>>
>> The series is split up as follows:
>>   - Patches 1 and 3 factor out the queue-specific data for netback and
>>      netfront respectively, and modify the rest of the code to use these
>>      as appropriate.
>>   - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>>     multiple shared rings and event channels, and code to connect these
>>     as appropriate.
>>   - Patch 5 documents the XenStore keys required for the new feature
>>     in include/xen/interface/io/netif.h
>>
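As a hedged illustration of the negotiation these new XenStore keys enable (the key names and paths below follow the eventually-merged netif.h interface and are assumptions here; treat the exact spellings as illustrative), the backend advertises a maximum queue count and the frontend responds with its chosen count plus per-queue ring and event-channel keys:

```text
# Backend advertises support (illustrative paths for domain 1, vif 0):
/local/domain/0/backend/vif/1/0/multi-queue-max-queues = "8"

# Frontend responds with the number of queues it will use:
/local/domain/1/device/vif/0/multi-queue-num-queues = "4"

# Per-queue subkeys replace the single flat ring-ref/event-channel keys:
/local/domain/1/device/vif/0/queue-0/tx-ring-ref = "<grant ref>"
/local/domain/1/device/vif/0/queue-0/rx-ring-ref = "<grant ref>"
/local/domain/1/device/vif/0/queue-0/event-channel-tx = "<port>"
/local/domain/1/device/vif/0/queue-0/event-channel-rx = "<port>"
```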
>> All other transmit and receive processing remains unchanged, i.e. there
>> is a kthread per queue and a NAPI context per queue.
>>
>> The performance of these patches has been analysed in detail, with
>> results available at:
>>
>> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
>>
>> To summarise:
>>    * Using multiple queues allows a VM to transmit at line rate on a 10
>>      Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
>>      with a single queue.
>>    * For intra-host VM--VM traffic, eight queues provide 171% of the
>>      throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>>    * There is a corresponding increase in total CPU usage, i.e. this is a
>>      scaling out over available resources, not an efficiency improvement.
>>    * Results depend on the availability of sufficient CPUs, as well as the
>>      distribution of interrupts and the distribution of TCP streams across
>>      the queues.
>>
>> Queue selection is currently achieved via an L4 hash on the packet (i.e.
>> TCP src/dst port, IP src/dst address) and is not negotiated between the
>> frontend and backend, since only one option exists. Future patches to
>> support other frontends (particularly Windows) will need to add some
>> capability to negotiate not only the hash algorithm selection, but also
>> allow the frontend to specify some parameters to this.
>>
>
> This has an impact on the protocol. If the key that selects the hash
> algorithm is missing, then we assume L4 is in use.
>
> This either needs to be documented (it is missing from your patch to
> netif.h) or you need to write that key explicitly in XenStore.
>
> I also have a question: what happens if one end advertises one hash
> algorithm and then uses a different one? This can happen when a
> driver is rogue or buggy. Will it cause the "good guy" to stall? We
> certainly don't want to stall the backend, at the very least.

I'm not sure I understand. There is no negotiable selection of hash 
algorithm here. This paragraph refers to a possible future in which we 
may have to support multiple such algorithms. Those issues will 
absolutely have to be addressed then, but they are irrelevant for now.

Andrew.
>
> I don't see relevant code in this series to handle a "rogue other
> end". I presume that for a simple hash algorithm like L4 this is not
> very important (say, even if a packet ends up in the wrong queue we
> can still safely process it), or that the core driver can deal with
> it all by itself (by dropping)?
>
> Wei.
>


Thread overview: 17+ messages
2014-02-14 11:50 [PATCH V2 net-next 0/5] xen-net{back, front}: Multiple transmit and receive queues Andrew J. Bennieston
2014-02-14 11:50 ` [PATCH V2 net-next 1/5] xen-netback: Factor queue-specific data into queue struct Andrew J. Bennieston
2014-02-14 11:50 ` [PATCH V2 net-next 2/5] xen-netback: Add support for multiple queues Andrew J. Bennieston
2014-02-14 14:11   ` Wei Liu
2014-02-14 14:57     ` Andrew Bennieston
2014-02-14 15:36       ` Wei Liu
2014-02-14 15:42         ` Andrew Bennieston
2014-02-14 11:50 ` [PATCH V2 net-next 3/5] xen-netfront: Factor queue-specific data into queue struct Andrew J. Bennieston
2014-02-14 11:50 ` [PATCH V2 net-next 4/5] xen-netfront: Add support for multiple queues Andrew J. Bennieston
2014-02-14 14:13   ` Wei Liu
2014-02-14 14:58     ` Andrew Bennieston
2014-02-14 11:50 ` [PATCH V2 net-next 5/5] xen-net{back, front}: Document multi-queue feature in netif.h Andrew J. Bennieston
2014-02-14 14:06 ` [PATCH V2 net-next 0/5] xen-net{back,front}: Multiple transmit and receive queues Wei Liu
2014-02-14 14:53   ` Andrew Bennieston [this message]
2014-02-14 15:25     ` Wei Liu
2014-02-14 15:40       ` Andrew Bennieston
2014-02-14 15:52         ` Wei Liu
