netdev.vger.kernel.org archive mirror
* [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
@ 2013-09-21 16:05 Wei Liu
  2013-09-22  6:29 ` [Xen-devel] " Jason Wang
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Wei Liu @ 2013-09-21 16:05 UTC (permalink / raw)
  To: netdev; +Cc: xen-devel, Wei Liu, Anirban Chakraborty, Ian Campbell

Anirban was seeing netfront receive MTU-sized packets, which degraded
throughput. The following patch makes netfront use the GRO API, which
improves throughput for that case.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
Cc: Ian Campbell <ian.campbell@citrix.com>
---
 drivers/net/xen-netfront.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 36808bf..5664165 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -952,7 +952,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		netif_receive_skb(skb);
+		napi_gro_receive(&np->napi, skb);
 	}
 
 	return packets_dropped;
@@ -1051,6 +1051,8 @@ err:
 	if (work_done < budget) {
 		int more_to_do = 0;
 
+		napi_gro_flush(napi, false);
+
 		local_irq_save(flags);
 
 		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
@@ -1371,7 +1373,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
-	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
+	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
+				  NETIF_F_GRO;
 
 	/*
          * Assume that all hw features are available for now. This set
-- 
1.7.10.4
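
For context, the GRO API used above is normally driven from the NAPI poll
handler: each received skb is handed to napi_gro_receive(), and anything
GRO is still holding gets flushed when the poll completes. A minimal
sketch of that pattern (illustrative only; example_priv and
example_rx_next() are made-up names, not xen-netfront code):

	static int example_poll(struct napi_struct *napi, int budget)
	{
		struct example_priv *priv = container_of(napi, struct example_priv, napi);
		struct sk_buff *skb;
		int work_done = 0;

		/* Hand each completed RX buffer to GRO instead of netif_receive_skb(). */
		while (work_done < budget && (skb = example_rx_next(priv)) != NULL) {
			napi_gro_receive(napi, skb);
			work_done++;
		}

		if (work_done < budget)
			napi_complete(napi);	/* also flushes packets GRO is holding */

		return work_done;
	}

xen-netfront completes NAPI with __napi_complete() under local_irq_save()
rather than napi_complete(), which is presumably why the hunk above calls
napi_gro_flush() explicitly before that point.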


* Re: [Xen-devel] [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-21 16:05 [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature Wei Liu
@ 2013-09-22  6:29 ` Jason Wang
  2013-09-22 12:09   ` Wei Liu
  2013-09-22 14:55 ` Eric Dumazet
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: Jason Wang @ 2013-09-22  6:29 UTC (permalink / raw)
  To: Wei Liu, netdev; +Cc: Anirban Chakraborty, Ian Campbell, xen-devel

On 09/22/2013 12:05 AM, Wei Liu wrote:
> Anirban was seeing netfront received MTU size packets, which downgraded
> throughput. The following patch makes netfront use GRO API which
> improves throughput for that case.
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
> Cc: Ian Campbell <ian.campbell@citrix.com>

Maybe a dumb question: doesn't Xen depend on the host NIC driver to do
GRO and pass the result to netfront? In what case would netfront receive
MTU-sized packets, with a card that does not support GRO in the host?
Doing GRO twice may introduce extra overhead.

Thanks


* Re: [Xen-devel] [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-22  6:29 ` [Xen-devel] " Jason Wang
@ 2013-09-22 12:09   ` Wei Liu
  2013-09-22 23:04     ` Anirban Chakraborty
  0 siblings, 1 reply; 15+ messages in thread
From: Wei Liu @ 2013-09-22 12:09 UTC (permalink / raw)
  To: Jason Wang; +Cc: Wei Liu, netdev, Anirban Chakraborty, Ian Campbell, xen-devel

On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
> On 09/22/2013 12:05 AM, Wei Liu wrote:
> > Anirban was seeing netfront received MTU size packets, which downgraded
> > throughput. The following patch makes netfront use GRO API which
> > improves throughput for that case.
> >
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
> > Cc: Ian Campbell <ian.campbell@citrix.com>
> 
> Maybe a dumb question: doesn't Xen depends on the driver of host card to
> do GRO and pass it to netfront? What the case that netfront can receive

That would be the ideal situation: netback pushes large packets to
netfront and netfront sees large packets.

> a MTU size packet, for a card that does not support GRO in host? Doing

However, Anirban saw a case where the backend interface receives large
packets but netfront sees MTU-sized packets, so my thought is that some
configuration leads to this issue. As we cannot tell users what to
enable and what not to enable, I would like to solve this within our
driver.

> GRO twice may introduce extra overheads.
> 

AIUI, if the packet the frontend sees is already large then the GRO path
is quite short and will not introduce a heavy penalty, while on the
other hand, if the packet is segmented, doing GRO improves throughput.
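
For what it's worth, dev_gro_receive() should already take the short path
for packets that arrive large. In net/core/dev.c of this era the check
looks roughly like this (worth double-checking against the exact tree):

	if (skb_is_gso(skb) || skb_has_frag_list(skb))
		goto normal;	/* already large: skip GRO matching, deliver as-is */

so an skb that netback delivered as a single large GSO packet goes
straight up the stack.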

Wei.

> Thanks


* Re: [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-21 16:05 [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature Wei Liu
  2013-09-22  6:29 ` [Xen-devel] " Jason Wang
@ 2013-09-22 14:55 ` Eric Dumazet
  2013-09-22 23:09   ` Anirban Chakraborty
  2013-09-24 16:30 ` [Xen-devel] " Konrad Rzeszutek Wilk
  2013-09-28 19:38 ` David Miller
  3 siblings, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2013-09-22 14:55 UTC (permalink / raw)
  To: Wei Liu; +Cc: netdev, xen-devel, Anirban Chakraborty, Ian Campbell

On Sat, 2013-09-21 at 17:05 +0100, Wei Liu wrote:
> Anirban was seeing netfront received MTU size packets, which downgraded
> throughput. The following patch makes netfront use GRO API which
> improves throughput for that case.

> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> +				  NETIF_F_GRO;


This part is not needed.


* Re: [Xen-devel] [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-22 12:09   ` Wei Liu
@ 2013-09-22 23:04     ` Anirban Chakraborty
  2013-09-23  5:02       ` Jason Wang
  0 siblings, 1 reply; 15+ messages in thread
From: Anirban Chakraborty @ 2013-09-22 23:04 UTC (permalink / raw)
  To: Wei Liu
  Cc: Jason Wang, <netdev@vger.kernel.org>, Ian Campbell,
	<xen-devel@lists.xen.org>


On Sep 22, 2013, at 5:09 AM, Wei Liu <wei.liu2@citrix.com> wrote:

> On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
>> On 09/22/2013 12:05 AM, Wei Liu wrote:
>>> Anirban was seeing netfront received MTU size packets, which downgraded
>>> throughput. The following patch makes netfront use GRO API which
>>> improves throughput for that case.
>>> 
>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>> 
>> Maybe a dumb question: doesn't Xen depends on the driver of host card to
>> do GRO and pass it to netfront? What the case that netfront can receive
> 
> The would be the ideal situation. Netback pushes large packets to
> netfront and netfront sees large packets.
> 
>> a MTU size packet, for a card that does not support GRO in host? Doing
> 
> However Anirban saw the case when backend interface receives large
> packets but netfront sees MTU size packets, so my thought is there is
> certain configuration that leads to this issue. As we cannot tell
> users what to enable and what not to enable so I would like to solve
> this within our driver.
> 
>> GRO twice may introduce extra overheads.
>> 
> 
> AIUI if the packet that frontend sees is large already then the GRO path
> is quite short which will not introduce heavy penalty, while on the
> other hand if packet is segmented doing GRO improves throughput.
> 

Thanks, Wei, for explaining and submitting the patch. I would like to add the following to what you have already mentioned.
In my configuration, netback was pushing large packets to the guest (CentOS 6.4) but netfront was receiving MTU-sized packets. With this patch applied, I do see large packets received on the guest interface. As a result there was a substantial throughput improvement on the guest side (2.8 Gbps to 3.8 Gbps). Also note that GRO was already enabled in the host NIC driver.

-Anirban


* Re: [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-22 14:55 ` Eric Dumazet
@ 2013-09-22 23:09   ` Anirban Chakraborty
  2013-09-23  5:58     ` Eric Dumazet
  0 siblings, 1 reply; 15+ messages in thread
From: Anirban Chakraborty @ 2013-09-22 23:09 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Wei Liu, <netdev@vger.kernel.org>,
	<xen-devel@lists.xen.org>, Ian Campbell


On Sep 22, 2013, at 7:55 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:

> On Sat, 2013-09-21 at 17:05 +0100, Wei Liu wrote:
>> Anirban was seeing netfront received MTU size packets, which downgraded
>> throughput. The following patch makes netfront use GRO API which
>> improves throughput for that case.
> 
>> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
>> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
>> +				  NETIF_F_GRO;
> 
> 
> This part is not needed.

Shouldn't the flag be set? In dev_gro_receive() we do check if this flag is set or not:

        if (!(skb->dev->features & NETIF_F_GRO) || netpoll_rx_on(skb))
               goto normal;

-Anirban


* Re: [Xen-devel] [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-22 23:04     ` Anirban Chakraborty
@ 2013-09-23  5:02       ` Jason Wang
  2013-09-23  6:22         ` annie li
  0 siblings, 1 reply; 15+ messages in thread
From: Jason Wang @ 2013-09-23  5:02 UTC (permalink / raw)
  To: Anirban Chakraborty, Wei Liu
  Cc: <netdev@vger.kernel.org>, Ian Campbell,
	<xen-devel@lists.xen.org>

On 09/23/2013 07:04 AM, Anirban Chakraborty wrote:
> On Sep 22, 2013, at 5:09 AM, Wei Liu <wei.liu2@citrix.com> wrote:
>
>> On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
>>> On 09/22/2013 12:05 AM, Wei Liu wrote:
>>>> Anirban was seeing netfront received MTU size packets, which downgraded
>>>> throughput. The following patch makes netfront use GRO API which
>>>> improves throughput for that case.
>>>>
>>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>>> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
>>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>>> Maybe a dumb question: doesn't Xen depends on the driver of host card to
>>> do GRO and pass it to netfront? What the case that netfront can receive
>> The would be the ideal situation. Netback pushes large packets to
>> netfront and netfront sees large packets.
>>
>>> a MTU size packet, for a card that does not support GRO in host? Doing
>> However Anirban saw the case when backend interface receives large
>> packets but netfront sees MTU size packets, so my thought is there is
>> certain configuration that leads to this issue. As we cannot tell
>> users what to enable and what not to enable so I would like to solve
>> this within our driver.
>>
>>> GRO twice may introduce extra overheads.
>>>
>> AIUI if the packet that frontend sees is large already then the GRO path
>> is quite short which will not introduce heavy penalty, while on the
>> other hand if packet is segmented doing GRO improves throughput.
>>
> Thanks Wei, for explaining and submitting the patch. I would like add following to what you have already mentioned.
> In my configuration, I was seeing netback was pushing large packets to the guest (Centos 6.4) but the netfront was receiving MTU sized packets. With this patch on, I do see large packets received on the guest interface. As a result there was substantial throughput improvement in the guest side (2.8 Gbps to 3.8 Gbps). Also, note that the host NIC driver was enabled for GRO already. 
>
> -Anirban

In this case, even if you still want to do GRO, it's better to find the
root cause of why the GSO packets were segmented (maybe GSO was not
enabled for netback?), since it introduces extra overhead.


* Re: [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-22 23:09   ` Anirban Chakraborty
@ 2013-09-23  5:58     ` Eric Dumazet
  2013-09-23 20:27       ` Anirban Chakraborty
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2013-09-23  5:58 UTC (permalink / raw)
  To: Anirban Chakraborty
  Cc: Wei Liu, <netdev@vger.kernel.org>,
	<xen-devel@lists.xen.org>, Ian Campbell

On Sun, 2013-09-22 at 23:09 +0000, Anirban Chakraborty wrote:
> On Sep 22, 2013, at 7:55 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> 
> > On Sat, 2013-09-21 at 17:05 +0100, Wei Liu wrote:
> >> Anirban was seeing netfront received MTU size packets, which downgraded
> >> throughput. The following patch makes netfront use GRO API which
> >> improves throughput for that case.
> > 
> >> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> >> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> >> +				  NETIF_F_GRO;
> > 
> > 
> > This part is not needed.
> 
> Shouldn't the flag be set? In dev_gro_receive() we do check if this flag is set or not:
> 
>         if (!(skb->dev->features & NETIF_F_GRO) || netpoll_rx_on(skb))
>                goto normal;

Drivers do not set NETIF_F_GRO themselves; they do not need to.

Look at other drivers which are GRO ready: NETIF_F_GRO is enabled by
default by the core networking stack, in register_netdevice():


dev->hw_features |= NETIF_F_SOFT_FEATURES;
dev->features |= NETIF_F_SOFT_FEATURES;
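
and NETIF_F_SOFT_FEATURES should expand to (see include/linux/netdev_features.h;
quoting from memory, so worth double-checking):

#define NETIF_F_SOFT_FEATURES	(NETIF_F_GSO | NETIF_F_GRO)

so every netdevice gets GRO in both features and hw_features without the
driver asking for it.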


* Re: [Xen-devel] [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-23  5:02       ` Jason Wang
@ 2013-09-23  6:22         ` annie li
  2013-09-23 20:32           ` Anirban Chakraborty
  0 siblings, 1 reply; 15+ messages in thread
From: annie li @ 2013-09-23  6:22 UTC (permalink / raw)
  To: Jason Wang
  Cc: Anirban Chakraborty, Wei Liu, <netdev@vger.kernel.org>,
	Ian Campbell, <xen-devel@lists.xen.org>


On 2013-9-23 13:02, Jason Wang wrote:
> On 09/23/2013 07:04 AM, Anirban Chakraborty wrote:
>> On Sep 22, 2013, at 5:09 AM, Wei Liu <wei.liu2@citrix.com> wrote:
>>
>>> On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
>>>> On 09/22/2013 12:05 AM, Wei Liu wrote:
>>>>> Anirban was seeing netfront received MTU size packets, which downgraded
>>>>> throughput. The following patch makes netfront use GRO API which
>>>>> improves throughput for that case.
>>>>>
>>>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>>>> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
>>>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>>>> Maybe a dumb question: doesn't Xen depends on the driver of host card to
>>>> do GRO and pass it to netfront? What the case that netfront can receive
>>> The would be the ideal situation. Netback pushes large packets to
>>> netfront and netfront sees large packets.
>>>
>>>> a MTU size packet, for a card that does not support GRO in host? Doing
>>> However Anirban saw the case when backend interface receives large
>>> packets but netfront sees MTU size packets, so my thought is there is
>>> certain configuration that leads to this issue. As we cannot tell
>>> users what to enable and what not to enable so I would like to solve
>>> this within our driver.
>>>
>>>> GRO twice may introduce extra overheads.
>>>>
>>> AIUI if the packet that frontend sees is large already then the GRO path
>>> is quite short which will not introduce heavy penalty, while on the
>>> other hand if packet is segmented doing GRO improves throughput.
>>>
>> Thanks Wei, for explaining and submitting the patch. I would like add following to what you have already mentioned.
>> In my configuration, I was seeing netback was pushing large packets to the guest (Centos 6.4) but the netfront was receiving MTU sized packets. With this patch on, I do see large packets received on the guest interface. As a result there was substantial throughput improvement in the guest side (2.8 Gbps to 3.8 Gbps). Also, note that the host NIC driver was enabled for GRO already.
>>
>> -Anirban
> In this case, even if you still want to do GRO. It's better to find the
> root cause of why the GSO packet were segmented

Totally agree, we need to find the cause of why large packets are
segmented only in the different-host case.

> (maybe GSO were not
> enabled for netback?), since it introduces extra overheads.

From Anirban's feedback, large packets can be seen on the vif interface,
and even on guests running on the same host.

Thanks
Annie


* Re: [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-23  5:58     ` Eric Dumazet
@ 2013-09-23 20:27       ` Anirban Chakraborty
  0 siblings, 0 replies; 15+ messages in thread
From: Anirban Chakraborty @ 2013-09-23 20:27 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: <netdev@vger.kernel.org>, Wei Liu, Ian Campbell,
	<xen-devel@lists.xen.org>


On Sep 22, 2013, at 10:58 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:

> On Sun, 2013-09-22 at 23:09 +0000, Anirban Chakraborty wrote:
>> On Sep 22, 2013, at 7:55 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>> 
>>> On Sat, 2013-09-21 at 17:05 +0100, Wei Liu wrote:
>>>> Anirban was seeing netfront received MTU size packets, which downgraded
>>>> throughput. The following patch makes netfront use GRO API which
>>>> improves throughput for that case.
>>> 
>>>> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
>>>> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
>>>> +				  NETIF_F_GRO;
>>> 
>>> 
>>> This part is not needed.
>> 
>> Shouldn't the flag be set? In dev_gro_receive() we do check if this flag is set or not:
>> 
>>        if (!(skb->dev->features & NETIF_F_GRO) || netpoll_rx_on(skb))
>>               goto normal;
> 
> Drivers do not set NETIF_F_GRO themselves, they do not need to.
> 
> Look at other drivers which are GRO ready : NETIF_F_GRO is enabled by
> default by core networking stack, in register_netdevice()
> 
> 
> dev->hw_features |= NETIF_F_SOFT_FEATURES;
> dev->features |= NETIF_F_SOFT_FEATURES;

I didn't realize that drivers no longer need to set the GRO flag explicitly. It looks like this changed around 3.2. I was looking at kernel version 2.6.32.43 (which corresponds to the dom0 kernel), where the problem is happening.
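
(For reference, on those older kernels a driver had to opt in itself.
If I remember right, the usual pattern was something like

	netdev->features |= NETIF_F_GRO;

before register_netdev(), rather than relying on the core default that
exists now.)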

-Anirban


* Re: [Xen-devel] [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-23  6:22         ` annie li
@ 2013-09-23 20:32           ` Anirban Chakraborty
  0 siblings, 0 replies; 15+ messages in thread
From: Anirban Chakraborty @ 2013-09-23 20:32 UTC (permalink / raw)
  To: annie li
  Cc: Jason Wang, Wei Liu, <netdev@vger.kernel.org>, Ian Campbell,
	<xen-devel@lists.xen.org>


On Sep 22, 2013, at 11:22 PM, annie li <annie.li@oracle.com> wrote:

> 
> On 2013-9-23 13:02, Jason Wang wrote:
>> On 09/23/2013 07:04 AM, Anirban Chakraborty wrote:
>>> On Sep 22, 2013, at 5:09 AM, Wei Liu <wei.liu2@citrix.com> wrote:
>>> 
>>>> On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
>>>>> On 09/22/2013 12:05 AM, Wei Liu wrote:
>>>>>> Anirban was seeing netfront received MTU size packets, which downgraded
>>>>>> throughput. The following patch makes netfront use GRO API which
>>>>>> improves throughput for that case.
>>>>>> 
>>>>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>>>>> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
>>>>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>>>>> Maybe a dumb question: doesn't Xen depends on the driver of host card to
>>>>> do GRO and pass it to netfront? What the case that netfront can receive
>>>> The would be the ideal situation. Netback pushes large packets to
>>>> netfront and netfront sees large packets.
>>>> 
>>>>> a MTU size packet, for a card that does not support GRO in host? Doing
>>>> However Anirban saw the case when backend interface receives large
>>>> packets but netfront sees MTU size packets, so my thought is there is
>>>> certain configuration that leads to this issue. As we cannot tell
>>>> users what to enable and what not to enable so I would like to solve
>>>> this within our driver.
>>>> 
>>>>> GRO twice may introduce extra overheads.
>>>>> 
>>>> AIUI if the packet that frontend sees is large already then the GRO path
>>>> is quite short which will not introduce heavy penalty, while on the
>>>> other hand if packet is segmented doing GRO improves throughput.
>>>> 
>>> Thanks Wei, for explaining and submitting the patch. I would like add following to what you have already mentioned.
>>> In my configuration, I was seeing netback was pushing large packets to the guest (Centos 6.4) but the netfront was receiving MTU sized packets. With this patch on, I do see large packets received on the guest interface. As a result there was substantial throughput improvement in the guest side (2.8 Gbps to 3.8 Gbps). Also, note that the host NIC driver was enabled for GRO already.
>>> 
>>> -Anirban
>> In this case, even if you still want to do GRO. It's better to find the
>> root cause of why the GSO packet were segmented
> 
> Totally agree, we need to find the cause why large packets is segmented only in different host case.

It appears (from looking at the netback code) that although GSO is turned on at the netback, the guest receives large packets:
1. if it is a local packet (VM to VM on the same host), in which case netfront does LRO, or
2. by turning on GRO explicitly (with this patch).

-Anirban


* Re: [Xen-devel] [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-21 16:05 [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature Wei Liu
  2013-09-22  6:29 ` [Xen-devel] " Jason Wang
  2013-09-22 14:55 ` Eric Dumazet
@ 2013-09-24 16:30 ` Konrad Rzeszutek Wilk
  2013-09-28 19:38 ` David Miller
  3 siblings, 0 replies; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-09-24 16:30 UTC (permalink / raw)
  To: Wei Liu; +Cc: netdev, Anirban Chakraborty, Ian Campbell, xen-devel

On Sat, Sep 21, 2013 at 05:05:43PM +0100, Wei Liu wrote:
> Anirban was seeing netfront received MTU size packets, which downgraded
> throughput. The following patch makes netfront use GRO API which
> improves throughput for that case.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
> Cc: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> ---
>  drivers/net/xen-netfront.c |    7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 36808bf..5664165 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -952,7 +952,7 @@ static int handle_incoming_queue(struct net_device *dev,
>  		u64_stats_update_end(&stats->syncp);
>  
>  		/* Pass it up. */
> -		netif_receive_skb(skb);
> +		napi_gro_receive(&np->napi, skb);
>  	}
>  
>  	return packets_dropped;
> @@ -1051,6 +1051,8 @@ err:
>  	if (work_done < budget) {
>  		int more_to_do = 0;
>  
> +		napi_gro_flush(napi, false);
> +
>  		local_irq_save(flags);
>  
>  		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
> @@ -1371,7 +1373,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>  	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
>  	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
>  				  NETIF_F_GSO_ROBUST;
> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> +				  NETIF_F_GRO;
>  
>  	/*
>           * Assume that all hw features are available for now. This set
> -- 
> 1.7.10.4
> 
> 


* Re: [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-21 16:05 [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature Wei Liu
                   ` (2 preceding siblings ...)
  2013-09-24 16:30 ` [Xen-devel] " Konrad Rzeszutek Wilk
@ 2013-09-28 19:38 ` David Miller
  2013-09-30  9:12   ` Ian Campbell
  3 siblings, 1 reply; 15+ messages in thread
From: David Miller @ 2013-09-28 19:38 UTC (permalink / raw)
  To: wei.liu2; +Cc: netdev, xen-devel, abchak, ian.campbell

From: Wei Liu <wei.liu2@citrix.com>
Date: Sat, 21 Sep 2013 17:05:43 +0100

> @@ -1371,7 +1373,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>  	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
>  	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
>  				  NETIF_F_GSO_ROBUST;
> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> +				  NETIF_F_GRO;

Please post a new version of this patch with the feedback you've been
given integrated, in particular with this part removed because it is
not necessary.

Ian, please review the patch when Wei posts it.

Thanks.


* Re: [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-28 19:38 ` David Miller
@ 2013-09-30  9:12   ` Ian Campbell
  2013-09-30 14:43     ` [Xen-devel] " Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 15+ messages in thread
From: Ian Campbell @ 2013-09-30  9:12 UTC (permalink / raw)
  To: David Miller; +Cc: wei.liu2, netdev, xen-devel, abchak

On Sat, 2013-09-28 at 15:38 -0400, David Miller wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> Date: Sat, 21 Sep 2013 17:05:43 +0100
> 
> > @@ -1371,7 +1373,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
> >  	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
> >  	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
> >  				  NETIF_F_GSO_ROBUST;
> > -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> > +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> > +				  NETIF_F_GRO;
> 
> Please post a new version of this patch with the feedback you've been
> given integrated, in particular with this part removed because it is
> not necessary.
> 
> Ian, please review the patch when Wei posts it.

I will, but note:
        $ ./scripts/get_maintainer.pl -f drivers/net/xen-netfront.c 
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> (supporter:XEN HYPERVISOR IN...)
        Jeremy Fitzhardinge <jeremy@goop.org> (supporter:XEN HYPERVISOR IN...)
        xen-devel@lists.xensource.com (moderated list:XEN HYPERVISOR IN...)
        virtualization@lists.linux-foundation.org (open list:XEN HYPERVISOR IN...)
        netdev@vger.kernel.org (open list:NETWORKING DRIVERS)
        linux-kernel@vger.kernel.org (open list)
        
Strictly speaking I maintain netback not front, so Wei please remember
to CC the right people (mainly Konrad) as well as me.

BTW I think this separation is a good thing since it keeps changes to
the protocol "honest". Doesn't matter so much for this particular patch
since don't think it actually touches the protocol.

Ian.


* Re: [Xen-devel] [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature
  2013-09-30  9:12   ` Ian Campbell
@ 2013-09-30 14:43     ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-09-30 14:43 UTC (permalink / raw)
  To: Ian Campbell; +Cc: David Miller, netdev, wei.liu2, abchak, xen-devel

On Mon, Sep 30, 2013 at 10:12:11AM +0100, Ian Campbell wrote:
> On Sat, 2013-09-28 at 15:38 -0400, David Miller wrote:
> > From: Wei Liu <wei.liu2@citrix.com>
> > Date: Sat, 21 Sep 2013 17:05:43 +0100
> > 
> > > @@ -1371,7 +1373,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
> > >  	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
> > >  	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
> > >  				  NETIF_F_GSO_ROBUST;
> > > -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> > > +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> > > +				  NETIF_F_GRO;
> > 
> > Please post a new version of this patch with the feedback you've been
> > given integrated, in particular with this part removed because it is
> > not necessary.
> > 
> > Ian, please review the patch when Wei posts it.
> 
> I will, but note:
>         $ ./scripts/get_maintainer.pl -f drivers/net/xen-netfront.c 
>         Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> (supporter:XEN HYPERVISOR IN...)
>         Jeremy Fitzhardinge <jeremy@goop.org> (supporter:XEN HYPERVISOR IN...)
>         xen-devel@lists.xensource.com (moderated list:XEN HYPERVISOR IN...)
>         virtualization@lists.linux-foundation.org (open list:XEN HYPERVISOR IN...)
>         netdev@vger.kernel.org (open list:NETWORKING DRIVERS)
>         linux-kernel@vger.kernel.org (open list)
>         
> Strictly speaking I maintain netback not front, so Wei please remember
> to CC the right people (mainly Konrad) as well as me.

Which was Acked:

http://mid.gmane.org/20130924163036.GB13979@phenom.dumpdata.com

> 
> BTW I think this separation is a good thing since it keeps changes to
> the protocol "honest". Doesn't matter so much for this particular patch
> since don't think it actually touches the protocol.
> 
> Ian.
> 
> 


