From mboxrd@z Thu Jan  1 00:00:00 1970
From: Stephen Hemminger
Subject: Re: [PATCH RFC 2/2] veth: propagate bridge GSO to peer
Date: Fri, 1 Dec 2017 12:30:42 -0800
Message-ID: <20171201123042.4d565c6f@xeon-e3>
References: <20171126181749.19288-1-sthemmin@microsoft.com>
 <20171126181749.19288-3-sthemmin@microsoft.com>
 <20171126230725.1fcc3b51@xeon-e3>
 <20171127201419.GA79@intel.com>
 <20171127131502.1fbfaa66@xeon-e3>
 <20171128014222.GA503@intel.com>
 <91628267-2e48-a231-7cc2-4830eb95ceef@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Cc: Solio Sarabia , davem@davemloft.net, netdev@vger.kernel.org,
 sthemmin@microsoft.com
To: David Ahern
Return-path:
Received: from mail-pl0-f47.google.com ([209.85.160.47]:40027 "EHLO
 mail-pl0-f47.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with
 ESMTP id S1751220AbdLAUaz (ORCPT ); Fri, 1 Dec 2017 15:30:55 -0500
Received: by mail-pl0-f47.google.com with SMTP id 1so6894531pla.7 for ;
 Fri, 01 Dec 2017 12:30:55 -0800 (PST)
In-Reply-To: <91628267-2e48-a231-7cc2-4830eb95ceef@gmail.com>
Sender: netdev-owner@vger.kernel.org
List-ID:

On Mon, 27 Nov 2017 19:02:01 -0700
David Ahern wrote:

> On 11/27/17 6:42 PM, Solio Sarabia wrote:
> > On Mon, Nov 27, 2017 at 01:15:02PM -0800, Stephen Hemminger wrote:
> >> On Mon, 27 Nov 2017 12:14:19 -0800
> >> Solio Sarabia wrote:
> >>
> >>> On Sun, Nov 26, 2017 at 11:07:25PM -0800, Stephen Hemminger wrote:
> >>>> On Sun, 26 Nov 2017 20:13:39 -0700
> >>>> David Ahern wrote:
> >>>>
> >>>>> On 11/26/17 11:17 AM, Stephen Hemminger wrote:
> >>>>>> This allows veth devices in containers to see the GSO maximum
> >>>>>> settings of the actual device being used for output.
> >>>>>
> >>>>> veth devices can be added to a VRF instead of a bridge, and I do not
> >>>>> believe the gso propagation works for L3 master devices.
> >>>>>
> >>>>> From a quick grep, team devices do not appear to handle gso changes either.
> >>>>
> >>>> This code should still work correctly, but no optimization would happen.
> >>>> The gso_max_size of the VRF or team will still be GSO_MAX_SIZE, so there
> >>>> would be no change. If VRF or team ever got smart enough to handle GSO
> >>>> limits, then the algorithm would handle it.
> >>>
> >>> This patch propagates the gso value from the bridge to its veth endpoints.
> >>> However, since the bridge is never aware of the GSO limit of the underlying
> >>> interfaces, bridge/veth still have a larger GSO size.
> >>>
> >>> In the docker case, the bridge is not linked directly to physical or
> >>> synthetic interfaces; it relies on iptables to decide which interface to
> >>> forward packets to.
> >>
> >> So for the docker case, direct control of GSO values via netlink (i.e. ip
> >> link set) seems like the better solution.
> >
> > Adding ioctl support for 'ip link set' would work. I'm still concerned
> > about how to enforce the upper limit so it does not exceed that of the
> > lower devices.
> >
> > Consider a system with three NICs, each reporting values in the range
> > [60,000 - 62,780]. Users could set a virtual interface's gso to 65,536,
> > exceeding the limit, and have the host do software GSO (VM settings must
> > not affect host performance.)
> >
> > Looping through interfaces? With the difference that now it'd be
> > triggered upon the user's request, not every time a veth is created
> > (like one previous patch discussed.)
> >
> You are concerned about the routed case, right? One option is to have VRF
> devices propagate gso sizes to all devices (veth, vlan, etc.) enslaved to
> it.

VRF devices are Layer 3 master devices, so they are an L3 parallel to a bridge. See the patch set I posted today, which punts the problem to veth setup.
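
To make the "clamp to the minimum of the lower devices" idea above concrete, here is a rough sketch in kernel C. It is not taken from either patch set: the helper name min_lower_gso_limits() is hypothetical, the field types match the ~4.14 kernels discussed in this thread, and a real implementation would still have to decide when to re-run it (enslave/release, lower-device changes, or a user request via ip link set).

/*
 * Hypothetical helper (not from the posted patches): walk every device
 * stacked below a master (bridge, VRF, team, ...) and clamp the master's
 * GSO limits so they never exceed what the most restrictive lower
 * device advertises.
 */
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

static void min_lower_gso_limits(struct net_device *master)
{
	struct net_device *lower;
	struct list_head *iter;
	unsigned int gso_size = GSO_MAX_SIZE;
	u16 gso_segs = GSO_MAX_SEGS;

	/* The lower-device list is only stable under RTNL. */
	ASSERT_RTNL();

	netdev_for_each_lower_dev(master, lower, iter) {
		gso_size = min(gso_size, lower->gso_max_size);
		gso_segs = min(gso_segs, lower->gso_max_segs);
	}

	netif_set_gso_max_size(master, gso_size);
	master->gso_max_segs = gso_segs;
}

Propagating the result from the master down to enslaved veth/vlan devices (the VRF option above), or copying it to the veth peer at setup time (the posted patch set), then become different choices of when and where to call such a helper.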