xen-devel.lists.xenproject.org archive mirror
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: ANNIE LI <annie.li@oracle.com>
Cc: Sander Eikelenboom <linux@eikelenboom.it>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: xennet: skb rides the rocket: 20 slots
Date: Wed, 9 Jan 2013 10:08:50 -0500	[thread overview]
Message-ID: <20130109150850.GI18395@phenom.dumpdata.com> (raw)
In-Reply-To: <50ED1800.1080208@oracle.com>

On Wed, Jan 09, 2013 at 03:10:56PM +0800, ANNIE LI wrote:
> 
> 
> On 2013-1-9 4:55, Sander Eikelenboom wrote:
> >>                  if (unlikely(frags>= MAX_SKB_FRAGS)) {
> >>                          netdev_dbg(vif->dev, "Too many frags\n");
> >>                          return -frags;
> >>                  }
> >I have added some rate-limited warnings in this function. However none seem to be triggered while the pv-guest reports the "skb rides the rocket" ..
> 
> Oh, yes, "skb rides the rocket" is a protection mechanism in
> netfront, and it is not caused by the netback checking code, but
> both checks guard against the same thing (frags >= MAX_SKB_FRAGS).
> I thought those packets were dropped by the backend check, sorry
> for the confusion.
> 
> In netfront, the following code checks whether the required slots
> exceed MAX_SKB_FRAGS + 1, and directly drops skbs that do:
> 
>         if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
>                 net_alert_ratelimited(
>                         "xennet: skb rides the rocket: %d slots\n", slots);
>                 goto drop;
>         }
> 
> In netback, the following code also compares frags with
> MAX_SKB_FRAGS, and creates an error response for netfront when the
> skb does not meet this requirement. In that case, netfront will
> also drop the corresponding skbs.
> 
>                 if (unlikely(frags >= MAX_SKB_FRAGS)) {
>                         netdev_dbg(vif->dev, "Too many frags\n");
>                         return -frags;
>                 }
> 
> So it is correct that the netback log was not printed: those
> packets are dropped directly by the frontend check, not by the
> backend check. Without the frontend check, it is likely that the
> netback check would block these skbs and create error responses
> for netfront.
> 
> So two approaches are available: work around it in netfront by
> copying/re-fragmenting those packets, though it is unclear how much
> the copying hurts performance; or implement it in netback, as
> discussed in

There is already some copying done (the copying of the socket data
from userspace into the kernel) - so the extra copy might not be that
bad, since the data can still be in the cache. This would probably be
a way to deal with old backends that cannot handle this new
feature-flag.

> "netchannel vs MAX_SKB_FRAGS". Maybe both mechanisms are
> necessary?

Let's first see whether this is indeed the problem. Perhaps a simple
debug patch that just does:

	s/MAX_SKB_FRAGS/DEBUG_MAX_FRAGS/
	#define DEBUG_MAX_FRAGS 21

in both netback and netfront, to raise the maximum number of frags we
can handle to 21? If that works with Sander's test - then yes, it looks
like we really need to get this 'feature-max-skb-frags' done.


Thread overview: 32+ messages
2013-01-04 16:28 xennet: skb rides the rocket: 20 slots Sander Eikelenboom
2013-01-07 10:55 ` Ian Campbell
2013-01-07 12:30   ` Sander Eikelenboom
2013-01-07 13:27     ` Ian Campbell
2013-01-07 14:05       ` Sander Eikelenboom
2013-01-07 14:12         ` Ian Campbell
2013-01-08  2:12   ` ANNIE LI
2013-01-08 10:05     ` Ian Campbell
2013-01-08 10:16       ` Paul Durrant
2013-01-08 20:57       ` James Harper
2013-01-08 22:04         ` Konrad Rzeszutek Wilk
2013-01-08 20:55     ` Sander Eikelenboom
2013-01-09  7:10       ` ANNIE LI
2013-01-09 15:08         ` Konrad Rzeszutek Wilk [this message]
2013-01-09 16:34           ` Ian Campbell
2013-01-09 17:05             ` Konrad Rzeszutek Wilk
2013-01-09 18:02               ` Ian Campbell
2013-01-10 11:22           ` ANNIE LI
2013-01-10 12:24             ` Sander Eikelenboom
2013-01-10 12:26             ` Ian Campbell
2013-01-10 15:39               ` Konrad Rzeszutek Wilk
2013-01-10 16:25                 ` Ian Campbell
2013-01-11  7:34               ` ANNIE LI
2013-01-11  9:56                 ` Ian Campbell
2013-01-11 10:09                   ` Paul Durrant
2013-01-11 10:16                     ` Ian Campbell
     [not found]                       ` <50F3D269.6030601@oracle.com>
2013-03-09 12:56                         ` Fwd: " Sander Eikelenboom
     [not found]                         ` <19010312768.20130124094542@eikelenboom.it>
2013-03-09 12:57                           ` Sander Eikelenboom
2013-03-10  5:22                             ` ANNIE LI
2013-03-12 11:37                               ` Ian Campbell
2013-03-15  5:14                             ` annie li
2013-03-15 21:29                               ` Sander Eikelenboom
