From: David Miller
Subject: Re: [PATCH net-next 0/2] allow setting gso_maximum values
Date: Mon, 04 Dec 2017 10:40:02 -0500 (EST)
Message-ID: <20171204.104002.1311507295275740357.davem@davemloft.net>
In-Reply-To: <20171201153001.4170f55d@xeon-e3>
References: <20171201201158.25594-1-sthemmin@microsoft.com>
 <20171201153001.4170f55d@xeon-e3>
To: stephen@networkplumber.org
Cc: netdev@vger.kernel.org, sthemmin@microsoft.com

From: Stephen Hemminger
Date: Fri, 1 Dec 2017 15:30:01 -0800

> On Fri, 1 Dec 2017 12:11:56 -0800
> Stephen Hemminger wrote:
>
>> This is another way of addressing the GSO maximum performance issues
>> for containers on Azure. What happens is that the underlying
>> infrastructure uses an overlay network, such that GSO packets over
>> 64K minus the vlan header end up causing either the guest or the host
>> to do expensive software copy and fragmentation.
>>
>> The netvsc driver reports the GSO maximum settings correctly; the
>> issue is that containers on veth devices still have the larger
>> settings. One solution that was examined was propagating the values
>> back through the bridge device, but this does not work for cases
>> where the virtual container network is done at L3.
>>
>> This patch set punts the problem to the orchestration layer that sets
>> up the container network. It also enables other virtual devices to
>> have configurable settings for GSO maximum.
>>
>> Stephen Hemminger (2):
>>   rtnetlink: allow GSO maximums to be passed to device
>>   veth: allow configuring GSO maximums
>>
>>  drivers/net/veth.c   | 20 ++++++++++++++++++++
>>  net/core/rtnetlink.c |  2 ++
>>  2 files changed, 22 insertions(+)
>>
>
> I would like confirmation from Intel, who are doing the Docker
> testing, that this works for them before merging.

Like David Ahern, I think you should allow this netlink setting during
changelink as well as newlink.

Thanks.
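
For context, a minimal sketch of what the newlink half of such a change
could look like in net/core/rtnetlink.c. It assumes only helpers that
already exist in the tree (netif_set_gso_max_size(), the gso_max_segs
field, and the IFLA_GSO_MAX_SIZE / IFLA_GSO_MAX_SEGS attributes); treat
it as an illustration of the approach, not the posted patch:

	/* In rtnl_create_link(): apply GSO limits supplied by userspace
	 * via netlink attributes before the new device is registered.
	 */
	if (tb[IFLA_GSO_MAX_SIZE])
		netif_set_gso_max_size(dev,
				       nla_get_u32(tb[IFLA_GSO_MAX_SIZE]));
	if (tb[IFLA_GSO_MAX_SEGS])
		dev->gso_max_segs = nla_get_u32(tb[IFLA_GSO_MAX_SEGS]);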
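
Supporting changelink as well, as suggested above, would mean handling
the same attributes in do_setlink(). A sketch under the same
assumptions, reusing the existing GSO_MAX_SIZE bound and the
DO_SETLINK_MODIFIED notification flag that rtnetlink already uses:

	/* In do_setlink(): validate and apply a new GSO size limit on
	 * an already-existing device, flagging the change so userspace
	 * gets notified.
	 */
	if (tb[IFLA_GSO_MAX_SIZE]) {
		u32 max_size = nla_get_u32(tb[IFLA_GSO_MAX_SIZE]);

		if (max_size > GSO_MAX_SIZE) {
			err = -EINVAL;
			goto errout;
		}

		if (dev->gso_max_size != max_size) {
			netif_set_gso_max_size(dev, max_size);
			status |= DO_SETLINK_MODIFIED;
		}
	}

With both paths in place, an orchestrator could lower the limits on a
live veth device rather than only at creation time, e.g. (assuming an
iproute2 build that exposes these attributes; 62780 is just an example
value below 64K that leaves room for encapsulation headers):

	ip link set dev veth0 gso_max_size 62780 gso_max_segs 64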