* [PATCH RFC] veth: make veth aware of gso buffer size
@ 2017-11-25 21:26 Solio Sarabia
  2017-11-26 17:26 ` Stephen Hemminger
From: Solio Sarabia @ 2017-11-25 21:26 UTC (permalink / raw)
  To: netdev, davem, stephen, eric.dumazet, dsahern
  Cc: kys, shiny.sebastian, solio.sarabia, linux-kernel

GSO buffer size supported by underlying devices is not propagated to
veth. In high-speed connections with hw TSO enabled, veth sends buffers
bigger than lower device's maximum GSO, forcing sw TSO and increasing
system CPU usage.

Signed-off-by: Solio Sarabia <solio.sarabia@intel.com>
---
Exposing gso_max_size via sysfs is not advised [0]. This patch queries the
available interfaces to get this value. Since reading dev_list is O(n) and
the list can be large (e.g. hundreds of containers), only a subset of
interfaces is inspected.  _Please_ advise on how to make veth aware of the
lower device's GSO value.

In a test scenario with Hyper-V, an Ubuntu VM, Docker inside the VM, and an
NTttcp microworkload sending 40 Gbps from one container, this fix reduces
sender host CPU overhead 3x, since all TSO is now done on the physical NIC.
The savings in CPU cycles benefit other use cases where veth is used and
the GSO buffer size is properly set.

[0] https://lkml.org/lkml/2017/11/24/512

 drivers/net/veth.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index f5438d0..e255b51 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -298,6 +298,34 @@ static const struct net_device_ops veth_netdev_ops = {
 		       NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX | \
 		       NETIF_F_HW_VLAN_STAG_TX | NETIF_F_HW_VLAN_STAG_RX )
 
+static void veth_set_gso(struct net_device *dev)
+{
+	struct net_device *nd;
+	unsigned int size = GSO_MAX_SIZE;
+	u16 segs = GSO_MAX_SEGS;
+	unsigned int count = 0;
+	const unsigned int limit = 10;
+
+	/* Set default gso based on available physical/synthetic devices,
+	 * ignore virtual interfaces, and limit looping through dev_list
+	 * as the total number of interfaces can be large.
+	 */
+	read_lock(&dev_base_lock);
+	for_each_netdev(&init_net, nd) {
+		if (count >= limit)
+			break;
+		if (nd->dev.parent && nd->flags & IFF_UP) {
+			size = min(size, nd->gso_max_size);
+			segs = min(segs, nd->gso_max_segs);
+		}
+		count++;
+	}
+
+	read_unlock(&dev_base_lock);
+	netif_set_gso_max_size(dev, size);
+	dev->gso_max_segs = segs;
+}
+
 static void veth_setup(struct net_device *dev)
 {
 	ether_setup(dev);
@@ -323,6 +351,8 @@ static void veth_setup(struct net_device *dev)
 	dev->hw_features = VETH_FEATURES;
 	dev->hw_enc_features = VETH_FEATURES;
 	dev->mpls_features = NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE;
+
+	veth_set_gso(dev);
 }
 
 /*
-- 
2.7.4


* Re: [PATCH RFC] veth: make veth aware of gso buffer size
  2017-11-25 21:26 [PATCH RFC] veth: make veth aware of gso buffer size Solio Sarabia
@ 2017-11-26 17:26 ` Stephen Hemminger
From: Stephen Hemminger @ 2017-11-26 17:26 UTC (permalink / raw)
  To: Solio Sarabia
  Cc: netdev, davem, eric.dumazet, dsahern, kys, shiny.sebastian,
	linux-kernel

On Sat, 25 Nov 2017 13:26:52 -0800
Solio Sarabia <solio.sarabia@intel.com> wrote:

> +static void veth_set_gso(struct net_device *dev)
> +{
> +	struct net_device *nd;
> +	unsigned int size = GSO_MAX_SIZE;
> +	u16 segs = GSO_MAX_SEGS;
> +	unsigned int count = 0;
> +	const unsigned int limit = 10;
> +
> +	/* Set default gso based on available physical/synthetic devices,
> +	 * ignore virtual interfaces, and limit looping through dev_list
> +	 * as the total number of interfaces can be large.
> +	 */
> +	read_lock(&dev_base_lock);
> +	for_each_netdev(&init_net, nd) {
> +		if (count >= limit)
> +			break;
> +		if (nd->dev.parent && nd->flags & IFF_UP) {
> +			size = min(size, nd->gso_max_size);
> +			segs = min(segs, nd->gso_max_segs);
> +		}
> +		count++;
> +	}
> +
> +	read_unlock(&dev_base_lock);
> +	netif_set_gso_max_size(dev, size);
> +	dev->gso_max_segs = segs;
> +}

Thanks for looking for a solution. 

Looking at the first 10 devices (including those not related to veth) is not
a great method. There may be hundreds of tunnels, and there is no guarantee
of ordering in the device list. And what about network namespaces? Looking
only in the root namespace is suspect as well.

The locking also looks wrong. veth_setup is called with RTNL held
(from __rtnl_link_register). Therefore acquiring dev_base_lock is not necessary.

