From: Matt Wilson <msw@amazon.com>
To: Annie Li <annie.li@oracle.com>
Cc: xen-devel@lists.xensource.com, netdev@vger.kernel.org,
Ian.Campbell@citrix.com, wei.liu2@citrix.com
Subject: Re: [PATCH 1/1] xen/netback: correctly calculate required slots of skb.
Date: Tue, 9 Jul 2013 15:14:07 -0700 [thread overview]
Message-ID: <20130709221406.GA13671@u109add4315675089e695.ant.amazon.com> (raw)
In-Reply-To: <1373350520-19985-1-git-send-email-annie.li@oracle.com>
On Tue, Jul 09, 2013 at 02:15:20PM +0800, Annie Li wrote:
> When counting the slots required for an skb, netback uses DIV_ROUND_UP directly
> on the header length. This is wrong when the header data's offset within its
> page is not zero, and is also inconsistent with the slot calculation performed
> later in netbk_gop_skb.
>
> In netbk_gop_skb, the required slots are calculated from both the offset and the
> length of the header data within its page. The slot count there can therefore be
> larger than the one calculated earlier in netbk_count_requests. This inconsistency
> makes the rx_req_cons_peek and xen_netbk_rx_ring_full judgements wrong.
>
> This leads to a situation where the ring is actually full, but netback believes
> it is not and keeps creating responses. A response then overlaps a request in
> the ring, grant copy picks up a bogus grant reference, and an error is reported,
> for example "(XEN) grant_table.c:1763:d0 Bad grant reference 2949120". Because
> the grant copy status is an error, netback returns XEN_NETIF_RSP_ERROR (-1) to
> netfront; netfront then reads rx->status (which is now -1 rather than the data
> size) and reports "kernel: net eth1: rx->offset: 0, size: 4294967295". The issue
> can be reproduced by running gzip/gunzip on an NFS share with mtu = 9000; the
> guest panics after running such a test for a while.
I have a similar (but smaller) patch in my queue that Wei looked over
at the Dublin hackathon. I don't have time to rebase and test it right
now, but let me post it for a second data point.
--msw
> This patch is based on 3.10-rc7.
>
> Signed-off-by: Annie Li <annie.li@oracle.com>
> ---
> drivers/net/xen-netback/netback.c | 97 +++++++++++++++++++++++++-----------
> 1 files changed, 67 insertions(+), 30 deletions(-)
>
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 8c20935..7ff9333 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -359,51 +359,88 @@ static bool start_new_rx_buffer(int offset, unsigned long size, int head)
> * the guest. This function is essentially a dry run of
> * netbk_gop_frag_copy.
> */
> -unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb)
> +static void netbk_get_slots(struct xenvif *vif, struct sk_buff *skb,
> + struct page *page, int *copy_off,
> + unsigned long size, unsigned long offset,
> + int *head, int *count)
> {
> - unsigned int count;
> - int i, copy_off;
> + unsigned long bytes;
>
> - count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
> + /* Data must not cross a page boundary. */
> + BUG_ON(size + offset > PAGE_SIZE<<compound_order(page));
>
> - copy_off = skb_headlen(skb) % PAGE_SIZE;
> + /* Skip unused frames from start of page */
> + page += offset >> PAGE_SHIFT;
> + offset &= ~PAGE_MASK;
>
> - if (skb_shinfo(skb)->gso_size)
> - count++;
> + while (size > 0) {
> + BUG_ON(offset >= PAGE_SIZE);
> + BUG_ON(*copy_off > MAX_BUFFER_OFFSET);
>
> - for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
> - unsigned long size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
> - unsigned long offset = skb_shinfo(skb)->frags[i].page_offset;
> - unsigned long bytes;
> + bytes = PAGE_SIZE - offset;
>
> - offset &= ~PAGE_MASK;
> + if (bytes > size)
> + bytes = size;
>
> - while (size > 0) {
> - BUG_ON(offset >= PAGE_SIZE);
> - BUG_ON(copy_off > MAX_BUFFER_OFFSET);
> + if (start_new_rx_buffer(*copy_off, bytes, *head)) {
> + *count = *count + 1;
> + *copy_off = 0;
> + }
>
> - bytes = PAGE_SIZE - offset;
> + if (*copy_off + bytes > MAX_BUFFER_OFFSET)
> + bytes = MAX_BUFFER_OFFSET - *copy_off;
>
> - if (bytes > size)
> - bytes = size;
> + *copy_off += bytes;
>
> - if (start_new_rx_buffer(copy_off, bytes, 0)) {
> - count++;
> - copy_off = 0;
> - }
> + offset += bytes;
> + size -= bytes;
>
> - if (copy_off + bytes > MAX_BUFFER_OFFSET)
> - bytes = MAX_BUFFER_OFFSET - copy_off;
> + /* Next frame */
> + if (offset == PAGE_SIZE && size) {
> + BUG_ON(!PageCompound(page));
> + page++;
> + offset = 0;
> + }
>
> - copy_off += bytes;
> + if (*head)
> + *count = *count + 1;
> + *head = 0; /* There must be something in this buffer now. */
> + }
> +}
> +
> +unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb)
> +{
> + int i, copy_off = 0;
> + int nr_frags = skb_shinfo(skb)->nr_frags;
> + unsigned char *data;
> + int head = 1;
> + unsigned int count = 0;
>
> - offset += bytes;
> - size -= bytes;
> + if (skb_shinfo(skb)->gso_size)
> + count++;
>
> - if (offset == PAGE_SIZE)
> - offset = 0;
> - }
> + data = skb->data;
> + while (data < skb_tail_pointer(skb)) {
> + unsigned int offset = offset_in_page(data);
> + unsigned int len = PAGE_SIZE - offset;
> +
> + if (data + len > skb_tail_pointer(skb))
> + len = skb_tail_pointer(skb) - data;
> +
> + netbk_get_slots(vif, skb, virt_to_page(data), &copy_off,
> + len, offset, &head, &count);
> + data += len;
> + }
> +
> + for (i = 0; i < nr_frags; i++) {
> + netbk_get_slots(vif, skb,
> + skb_frag_page(&skb_shinfo(skb)->frags[i]),
> + &copy_off,
> + skb_frag_size(&skb_shinfo(skb)->frags[i]),
> + skb_shinfo(skb)->frags[i].page_offset,
> + &head, &count);
> }
> +
> return count;
> }
>
Thread overview: 21+ messages
2013-07-09 6:15 [PATCH 1/1] xen/netback: correctly calculate required slots of skb Annie Li
2013-07-09 16:16 ` Wei Liu
2013-07-10 1:57 ` annie li
2013-07-10 2:31 ` annie li
2013-07-10 7:48 ` Wei Liu
2013-07-10 8:34 ` annie li
2013-07-09 22:14 ` Matt Wilson [this message]
2013-07-09 22:40 ` [PATCH RFC] xen-netback: calculate the number of slots required for large MTU vifs Matt Wilson
[not found] ` <1373409659-22383-1-git-send-email-msw@amazon.com>
2013-07-10 8:13 ` Wei Liu
[not found] ` <20130710081333.GI19798@zion.uk.xensource.com>
2013-07-10 19:37 ` Wei Liu
[not found] ` <20130710193703.GB20453@zion.uk.xensource.com>
2013-07-11 1:25 ` annie li
2013-07-11 5:14 ` Matt Wilson
[not found] ` <20130711051441.GA5189@u109add4315675089e695.ant.amazon.com>
2013-07-11 6:01 ` annie li
2013-07-11 8:46 ` Wei Liu
[not found] ` <51DE4A1D.8030203@oracle.com>
2013-07-11 13:52 ` annie li
[not found] ` <51DEB8B4.4030201@oracle.com>
2013-07-12 8:49 ` Wei Liu
[not found] ` <20130712084905.GG23269@zion.uk.xensource.com>
2013-07-12 9:19 ` annie li
[not found] ` <51DFCA1F.7040100@oracle.com>
2013-07-18 11:47 ` Ian Campbell
2013-07-16 10:08 ` annie li
[not found] ` <51E51B83.7040302@oracle.com>
2013-07-16 10:27 ` Wei Liu
[not found] ` <20130716102719.GD5674@zion.uk.xensource.com>
2013-07-16 17:40 ` Matt Wilson