From: Matt Wilson <msw@amazon.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"Palagummi, Siva" <Siva.Palagummi@ca.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [PATCH RFC V2] xen/netback: Count ring slots properly when larger MTU sizes are used
Date: Thu, 20 Dec 2012 13:42:04 -0800
Message-ID: <20121220214203.GA21709@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <1355997929.26722.17.camel@zakaz.uk.xensource.com>

On Thu, Dec 20, 2012 at 10:05:29AM +0000, Ian Campbell wrote:
> On Tue, 2012-12-18 at 19:43 +0000, Matt Wilson wrote:
[...]
> > I see SKBs with:
> >   skb_headlen(skb) == 8157
> >   offset_in_page(skb->data) == 64
> > 
> > when handling long streaming ingress flows from ixgbe with MTU (on the
> > NIC and both sides of the VIF) set to 9000. When all the SKBs making
> > up the flow have the above property, xen-netback uses 3 pages instead
> > of two. The first buffer gets 4032 bytes copied into it. The next
> > buffer gets 4096 bytes copied into it. The final buffer gets 29 bytes
> > copied into it. See this post in the archives for a more detailed
> > walk through netbk_gop_frag_copy():
> >   http://lists.xen.org/archives/html/xen-devel/2012-12/msg00274.html
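
(To spell out the arithmetic from that walk-through: the head starts 64
bytes into its page, so the first copy chunk is limited to the end of the
source page, 4096 - 64 = 4032 bytes. The following 4096-byte chunk would
overflow the 64 bytes still free in that buffer, so start_new_rx_buffer()
starts a fresh buffer rather than splitting the chunk to top it up; the
second buffer takes the full 4096 bytes and the remaining
8157 - 4032 - 4096 = 29 bytes spill into a third. If each destination
buffer were filled completely, the 8157-byte head would fit in two.)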
> 
> Thanks. This certainly seems wrong for the head bit.
> 
> > What's the down side to making start_new_rx_buffer() always try to
> > fill each buffer?
> 
> As we discussed earlier in the thread, it doubles the number of copy ops
> per frag under some circumstances. My gut feeling is that this isn't
> going to hurt, but that's just a gut feeling.
> 
> It seems obviously right that the linear part of the SKB should always
> fill entire buffers though. Perhaps the answer is to differentiate
> between the skb->data and the frags?

We've written a patch that does exactly that. It's stable and performs
well in our testing so far. I'll need to forward-port it to the latest
Linux tree, test it there, and post it.
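
For illustration, the core of the idea looks roughly like this (a sketch
only, not the patch itself; MAX_BUFFER_OFFSET is the existing netback
constant, head_copy_len() is a hypothetical helper):

/*
 * Sketch: when copying the linear area (head) of the skb, size each
 * chunk to fill the remaining space in the current destination buffer,
 * instead of breaking at source page boundaries the way the frag path
 * does.
 */
static unsigned long head_copy_len(unsigned long copy_off,
				   unsigned long remaining)
{
	/* Space left in the current destination ring buffer. */
	unsigned long space = MAX_BUFFER_OFFSET - copy_off;

	return remaining < space ? remaining : space;
}

With skb_headlen() == 8157 and skb->data starting 64 bytes into its page,
that would yield 4096 + 4061 bytes across two buffers (the 4096-byte chunk
still ends up as two grant copy ops where it crosses the source page
boundary), rather than 4032 + 4096 + 29 across three.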

Matt
