xen-devel.lists.xenproject.org archive mirror
From: Wei Liu <wei.liu2@citrix.com>
To: Anirban Chakraborty <abchak@juniper.net>
Cc: annie li <annie.li@oracle.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: large packet support in netfront driver and guest network throughput
Date: Wed, 18 Sep 2013 16:48:07 +0100	[thread overview]
Message-ID: <20130918154807.GA15251@zion.uk.xensource.com> (raw)
In-Reply-To: <CE5DE220.17272%abchak@juniper.net>

On Tue, Sep 17, 2013 at 05:53:43PM +0000, Anirban Chakraborty wrote:
> 
> 
> On 9/17/13 1:25 AM, "Wei Liu" <wei.liu2@citrix.com> wrote:
> 
> >On Tue, Sep 17, 2013 at 10:09:21AM +0800, annie li wrote:
> >><snip>
> >>>>I tried dom0 to dom0 and I got 9.4 Gbps, which is what I expected
> >>>>(with GRO turned on in the physical interface). However, when I run
> >>>>guest to guest, things fall off. Is large packet not supported in
> >>>>netfront? I thought otherwise. I looked at the code and I do not see
> >>>>any call to napi_gro_receive(), rather it is using
> >>>>netif_receive_skb(). netback seems to be sending GSO packets to the
> >>>>netfront, but it is being segmented to 1500 byte (as it appears from
> >>>>the tcpdump).
> >> >>
> >> >OK, I get your problem.
> >> >
> >> >Indeed netfront doesn't make use of GRO API at the moment.
> >> 
> >> This is true.
> >> But I am wondering why large packets are not segmented into MTU-sized
> >> frames with the upstream kernel? I did see large packets with the
> >> upstream kernel on the receiving guest (test between 2 DomUs on the same host).
> >> 
> >
> >I think Anirban's setup is different. The traffic is from a DomU on
> >another host.
> >
> >I will need to setup testing environment with 10G link to test this.
> >
> >Anirban, can you share your setup, especially the DomU kernel version?
> >Are you using an upstream kernel in the DomU?
> 
> Sure.
> I have two hosts, say h1 and h2, both running XenServer 6.1.
> h1 runs a CentOS 6.4 64-bit guest, say guest1, and h2 runs an identical
> guest, guest2.
> 

Do you have the exact version of your DomU's kernel? Is it available
somewhere online?

> iperf server is running on guest1 with iperf client connecting from guest2.
> 
> I haven't tried with upstream kernel yet. However, what I found out is
> that the netback on the receiving host is transmitting GSO segments to the
> guest (guest1), but the packets are segmented at the netfront interface.
> 

I just tried; with a vanilla upstream kernel I can see large packets on
the DomU's side.

I also tried converting netfront to use the GRO API (hopefully I didn't
get it wrong), but I didn't see much improvement -- which is not
surprising, because I already saw large packets even without GRO.

If you fancy trying the GRO API, see the attached patch. Note that you
might need to do some contextual adjustment, as this patch is against
the upstream kernel.

Wei.

---8<---
From ca532dd11d7b8f5f8ce9d2b8043dd974d9587cb0 Mon Sep 17 00:00:00 2001
From: Wei Liu <wei.liu2@citrix.com>
Date: Wed, 18 Sep 2013 16:46:23 +0100
Subject: [PATCH] xen-netfront: convert to GRO API

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 drivers/net/xen-netfront.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 36808bf..dd1011e 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -952,7 +952,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		netif_receive_skb(skb);
+		napi_gro_receive(&np->napi, skb);
 	}
 
 	return packets_dropped;
@@ -1051,6 +1051,8 @@ err:
 	if (work_done < budget) {
 		int more_to_do = 0;
 
+		napi_gro_flush(napi, false);
+
 		local_irq_save(flags);
 
 		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
-- 
1.7.10.4


> Annie's setup has both the guests running on the same host, in which case
> packets are looped back.
> 
> -Anirban
> 


Thread overview: 16+ messages
2013-09-12 17:53 large packet support in netfront driver and guest network throughput Anirban Chakraborty
2013-09-13 11:44 ` Wei Liu
2013-09-13 17:09   ` Anirban Chakraborty
2013-09-16 14:21     ` Wei Liu
2013-09-17  2:09       ` annie li
2013-09-17  8:25         ` Wei Liu
2013-09-17 17:53           ` Anirban Chakraborty
2013-09-18  2:28             ` annie li
2013-09-18 21:06               ` Anirban Chakraborty
2013-09-18 15:48             ` Wei Liu [this message]
2013-09-18 20:38               ` Anirban Chakraborty
2013-09-19  9:41                 ` Wei Liu
2013-09-19 16:59                   ` Anirban Chakraborty
2013-09-19 18:43                     ` Wei Liu
2013-09-19 19:04                       ` Wei Liu
2013-09-19 20:54                         ` Anirban Chakraborty
