From: Stefan Bader <stefan.bader@canonical.com>
To: Zoltan Kiss <zoltan.kiss@linaro.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	Paul Durrant <paul.durrant@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen-netfront: Fix handling packets on compound pages with skb_linearize
Date: Mon, 01 Dec 2014 15:13:37 +0100	[thread overview]
Message-ID: <547C7791.3090206@canonical.com> (raw)
In-Reply-To: <547C742E.6060801@linaro.org>


On 01.12.2014 14:59, Zoltan Kiss wrote:
> 
> 
> On 01/12/14 13:36, David Vrabel wrote:
>> On 01/12/14 08:55, Stefan Bader wrote:
>>> On 11.08.2014 19:32, Zoltan Kiss wrote:
>>>> There is a long-known problem with the netfront/netback interface: if the
>>>> guest tries to send a packet which constitutes more than MAX_SKB_FRAGS + 1
>>>> ring slots, it gets dropped. The reason is that netback maps each of these
>>>> slots to a frag in the frags array, which is limited in size. Having so
>>>> many slots has been possible since compound pages were introduced, as the
>>>> ring protocol slices them up into individual (non-compound) page-aligned
>>>> slots. The theoretical worst-case scenario looks like this (note, skbs are
>>>> limited to 64 KB here):
>>>> linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping a page
>>>> boundary, using 2 slots
>>>> first 15 frags: 1 + PAGE_SIZE + 1 bytes long each, the first and last bytes
>>>> at the end and the beginning of a page respectively, therefore using
>>>> 3 * 15 = 45 slots
>>>> last 2 frags: 1 + 1 bytes each, overlapping a page boundary, 2 * 2 = 4 slots
>>>> That adds up to 2 + 45 + 4 = 51 slots. Although I don't think this 51-slot
>>>> skb can really happen, we need a solution which can deal with every
>>>> scenario. In real life an skb is only a few slots over the limit, but that
>>>> usually stalls the TCP stream, as the retry will most likely have the same
>>>> buffer layout.
>>>> This patch solves the problem by linearizing the packet. This is not the
>>>> fastest way, and it can fail much more easily, since it has to allocate one
>>>> big linear area for the whole packet, but it is probably simpler by an
>>>> order of magnitude than anything else. This code path is probably not hit
>>>> very frequently anyway.
>>>>
>>>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>>>> Cc: Wei Liu <wei.liu2@citrix.com>
>>>> Cc: Ian Campbell <Ian.Campbell@citrix.com>
>>>> Cc: Paul Durrant <paul.durrant@citrix.com>
>>>> Cc: netdev@vger.kernel.org
>>>> Cc: linux-kernel@vger.kernel.org
>>>> Cc: xen-devel@lists.xenproject.org
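
For reference, the fix amounts to counting the ring slots an skb would need
and falling back to skb_linearize() when the total exceeds what netback can
accept. A minimal sketch of that check follows; it is not the literal patch,
and the helper name (count_frag_slots) and the exact accounting here are
illustrative:

	/* A frag occupying bytes [offset, offset + size) needs one ring slot
	 * per page it touches, since the ring splits buffers at
	 * (non-compound) page boundaries.
	 */
	static int count_frag_slots(unsigned long offset, unsigned long size)
	{
		/* only the offset within the page matters */
		offset &= ~PAGE_MASK;
		return DIV_ROUND_UP(offset + size, PAGE_SIZE);
	}

	...
	/* fragment of the transmit path, before claiming ring slots */
	slots = count_frag_slots(offset_in_page(skb->data), skb_headlen(skb));
	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		slots += count_frag_slots(frag->page_offset,
					  skb_frag_size(frag));
	}

	if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
		/* Too many slots: copy the whole packet into one linear
		 * buffer. The allocation can fail under memory pressure,
		 * in which case the packet is dropped as before.
		 */
		if (skb_linearize(skb))
			goto drop;
	}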
>>>
>>> This does not seem to be explicitly marked for stable. Has someone already
>>> asked David Miller to put it on his stable queue? IMO it qualifies quite
>>> well, and the actual change should be simple to cherry-pick/backport.
>>
>> I think it's a candidate, yes.
>>
>> Can you expand on the user-visible impact of the bug this patch fixes?
>> I think it results in certain types of traffic not working (because the
>> domU always generates skbs with the problematic frag layout), but I
>> can't remember the details.
> 
> Yes, this line in the comment talks about it: "In real life an skb is only a
> few slots over the limit, but that usually stalls the TCP stream, as the
> retry will most likely have the same buffer layout."
> Maybe we can add what kind of traffic has triggered this so far; AFAIK NFS
> was one of them, and Stefan had another use case. But my memories of this
> are blurry.

We had a report about a web app hitting packet losses; I suspect that one was
also streaming something. As an easy trigger we found that redis-benchmark
(part of the redis key-value store) with a larger (IIRC 1 kB) payload would
cause the fragmentation/slot overrun to happen. Though I think it did not fail
outright but showed a performance drop instead (this is from memory, which
also suffers from losing detail).
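
If someone wants an easy reproducer along those lines, the invocation was
roughly the following -- host, test selection and request count are
illustrative guesses, only the -d payload size really matters:

	redis-benchmark -h <guest-ip> -t set,get -d 1024 -n 100000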

-Stefan
> 
> Zoli





Thread overview: 9+ messages
2014-08-11 17:32 [PATCH] xen-netfront: Fix handling packets on compound pages with skb_linearize Zoltan Kiss
2014-08-11 21:57 ` David Miller
2014-12-01  8:55 ` [Xen-devel] " Stefan Bader
2014-12-01 13:36   ` David Vrabel
2014-12-01 13:59     ` Zoltan Kiss
2014-12-01 14:13       ` Stefan Bader [this message]
2014-12-08 10:19   ` Luis Henriques
2014-12-08 11:11     ` David Vrabel
2014-12-09  9:54       ` Luis Henriques
