From: "John W. Linville"
Subject: Re: [PATCH v1 1/6] net: Generalize udp based tunnel offload
Date: Tue, 1 Dec 2015 11:08:42 -0500
Message-ID: <20151201160841.GG29497@tuxdriver.com>
References: <1448312579-159544-1-git-send-email-anjali.singhai@intel.com>
 <1448312579-159544-2-git-send-email-anjali.singhai@intel.com>
 <20151129.222138.1582847465760563254.davem@davemloft.net>
 <20151201154445.GF29497@tuxdriver.com>
 <1448984968.3382143.454794705.68D88B7D@webmail.messagingengine.com>
To: Hannes Frederic Sowa
Cc: Tom Herbert, Jesse Gross, David Miller, Anjali Singhai Jain,
 Linux Kernel Network Developers, Kiran Patil
In-Reply-To: <1448984968.3382143.454794705.68D88B7D@webmail.messagingengine.com>

On Tue, Dec 01, 2015 at 04:49:28PM +0100, Hannes Frederic Sowa wrote:
> On Tue, Dec 1, 2015, at 16:44, John W. Linville wrote:
> > On Mon, Nov 30, 2015 at 09:26:51PM -0800, Tom Herbert wrote:
> > > On Mon, Nov 30, 2015 at 5:28 PM, Jesse Gross wrote:
> > > >
> > > > Based on what we can do today, I see only two real choices: do some
> > > > refactoring to clean up the stack a bit or remove the existing VXLAN
> > > > offloading altogether. I think this series is trying to do the
> > > > former, and the result is that the stack is cleaner after than
> > > > before. That seems like a good thing.
> > >
> > > There is a third choice, which is to do nothing. Creating an
> > > infrastructure that claims to "generalize udp based tunnel offload"
> > > but doesn't actually generalize the mechanism is nothing more than
> > > window dressing -- it does nothing to help with the VXLAN to
> > > VXLAN-GPE transition, for instance. If Geneve-specific offload is
> > > really needed now, then that can be done with another ndo function,
> > > or alternatively an ntuple filter with a device-specific action
> > > would at least get the stack out of needing to be concerned with it.
> > > Regardless, we will work to optimize the rest of the stack for
> > > devices that implement protocol-agnostic mechanisms.
> >
> > Is there no concern about NDO proliferation? Does the size of the
> > netdev_ops structure matter? Beyond that, I can see how a single
> > entry point with an enum specifying the offload type isn't really any
> > different in the grand scheme of things than having multiple NDOs,
> > one per offload.
> >
> > Given the need to live with existing hardware offloads, I would lean
> > toward a consolidated NDO. But if a different NDO per tunnel type is
> > preferred, I can be satisfied with that.
>
> Having per-offload NDOs helps the stack gather information about what
> kinds of offloads the driver supports, maybe even without calling down
> into that layer (just by comparing the pointers to NULL). Checking this
> inside the driver's offload function clearly does not have that
> property. So we could finally have an "ip tunnel
> please-recommend-type" feature. :)

That is a valuable insight! Maybe the per-offload NDO isn't such a bad
idea after all... :-)

John
-- 
John W. Linville		Someday the world will need a hero, and you
linville@tuxdriver.com			might be all we have.  Be ready.
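
For illustration, a minimal sketch in C of the two alternatives being
weighed above. Every type and function name below is a hypothetical
stand-in: the fields loosely echo the ndo_add_vxlan_port hook that
existed in net_device_ops at the time, but the signatures are
simplified and do not match the patch series or any mainline kernel
API.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t __be16;        /* stand-in for the kernel's big-endian type */

struct net_device;              /* opaque for this sketch */

/*
 * Alternative A: one consolidated NDO.  The tunnel type travels as an
 * enum argument, so supporting a new UDP encapsulation means extending
 * the enum rather than growing netdev_ops.
 */
enum udp_tunnel_type {
	UDP_TUNNEL_TYPE_VXLAN,
	UDP_TUNNEL_TYPE_GENEVE,
};

struct net_device_ops_consolidated {
	void (*ndo_add_udp_tunnel_port)(struct net_device *dev,
					enum udp_tunnel_type type,
					__be16 port);
};

/*
 * Alternative B: one NDO per tunnel type.  This is Hannes' point: the
 * stack can see which offloads a driver supports by comparing the
 * pointers to NULL, without calling into the driver at all.
 */
struct net_device_ops_per_offload {
	void (*ndo_add_vxlan_port)(struct net_device *dev, __be16 port);
	void (*ndo_add_geneve_port)(struct net_device *dev, __be16 port);
};

static bool supports_geneve_offload(const struct net_device_ops_per_offload *ops)
{
	/* Capability is visible from the ops table itself. */
	return ops->ndo_add_geneve_port != NULL;
}

The per-offload layout trades a larger ops structure for exactly the
property described above: capability discovery becomes a NULL check on
the ops table instead of a call down into the driver.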