From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
To: Harald Welte <laforge@gnumonks.org>
Cc: Alexander Lobakin <aleksander.lobakin@intel.com>,
	Wojciech Drewek <wojciech.drewek@intel.com>,
	netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
	linux-kernel@vger.kernel.org, David Miller <davem@davemloft.net>
Subject: Re: [Intel-wired-lan] [PATCH net-next v6 00/21] ice: add PFCP filter support
Date: Fri, 17 May 2024 16:01:24 +0200	[thread overview]
Message-ID: <ZkdjNGlxU6hAp9cc@mev-dev> (raw)
In-Reply-To: <ZkZ62F_iCCOf4nmM@nataraja>

On Thu, May 16, 2024 at 11:30:00PM +0200, Harald Welte wrote:
> Hi Michal,
> 
> thanks for your response.
> 
> On Thu, May 16, 2024 at 12:44:13PM +0200, Michal Swiatkowski wrote:
> 
> > > I'm curious to understand why are *pfcp* packets hardware offloaded?
> > > PFCP is just the control plane, similar to you can consider netlink the
> > > control plane by which userspace programs control the data plane.
> > > 
> > > I can fully understand that GTP-U packets are offloaded to kernel space or
> > > hardware, and that then some control plane mechanism like PFCP is needed
> > > to control that data plane.  But offloading packets of that control
> > > protocol?
> > 
> > It is hard for me to answer your concerns because, unlike you, I
> > don't have any experience with telco implementations. We had a client
> > that wanted to add an offload rule for PFCP in the same way as for GTP.
> 
> I've meanwhile done some reading and it seems there are indeed some
> papers suggesting that in specific implementations of control/data plane
> splits, the transaction rate between control and data plane (to set up /
> tear down / modify tunnels) is too low.  As a work-around, the entire
> PFCP parsing is then off-loaded into (e.g. P4-capable) hardware.
> 
> So it seems at least there appears to be equipment where PFCP offloading
> is useful to significantly increase performance.
> 
> For those curious, https://faculty.iiitd.ac.in/~rinku/resources/slides/2022-sosr-accelupf-slides.pdf
> seems to cover one such configuration where offloading processing the
> control-plane protocol into the P4 hardware switch has massively
> improved the previously poor PFCP processing rate.

Thanks for the interesting link.

> 
> > Intel
> > hardware supports matching on specific PFCP packet parts. We spent some
> > time looking at possible implementations. As you said, it is a little
> > odd to follow the same scheme for GTP and PFCP, but it looks to me like
> > a reasonable solution.
> 
> Based on what I understand, I am not sure I would agree with the
> "reasonable solution" part.  But then of course, it is a subjective
> point of view.
> 
> I understand and appreciate the high-level goal of giving the user some
> way to configure a specific feature of an intel NIC.
> 
> However, I really don't think that this should come at the expense of
> introducing tunnel devices (or other net-devs) for things that are not
> tunnels, and by calling things PFCP whcih are not an implementation of
> PFCP.
> 
> You are introducing something called "pfcp" into the kernel,
> which is not pfcp.  What if somebody else at some point wanted to
> introduce some actual PFCP support in some form?  How should they call
> their sub-systems / Kconfigs / etc?  They could no longer call it simply
> "pfcp" as you already used this very generic term for (from the
> perspective of PFCP) a specific niche use case of configuring a NIC to
> handle all of PFCP.

What about changing the name and the description to better reflect what
this new device is for? I don't have a good idea for a fitting name,
though.

> 
> > Do you have better idea for that?
> 
> I am not familiar with the use cases and the intel NICs and what kind of
> tooling or third party software might be out there wanting to configure
> it.  It's really not my domain of expertise and as such I have no
> specific proposal, sorry.
> 
> It just seems mind-boggling to me that we would introduce
> * a net-device for something that's not a net-device
> * a tunnel for something that does no tunneling whatsoever
> * code mentioning "PFCP encapsulated traffic" when in fact it is
>   impossible to encapsulate any traffic in PFCP, and the code does
>   not - to the best of my understanding - do any such encapsulation
>   or even configure hardware to perform any such non-existent PFCP
>   encapsulation
> 
> [and possibly more] *just* so that a user can use 'tc' to configure a
> hardware offloading feature in their NIC.
> 
> IMHO, there must be a better way.

Yeah, let's say we didn't find a better way. I agree with you that this
isn't the best solution. The problem is that we couldn't find a better
one.

In short, if I remember correctly (we sent the first RFC more than a
year ago), to be sure that a packet is PFCP and has the PFCP-specific
fields to match, the UDP port needs to be checked. Currently, any
matching on a UDP port in the flower kernel code means matching a
tunnel. Because of that we treated PFCP as a tunnel, purely to simplify
this matching.

The reason we need a specific net-device is this simplification. All
tunnels have their own net-devices, so in the tc code the decision
whether specific fields of a protocol can be matched is made based on
the net-device type. If PFCP had a specific IP protocol number (for
example), there would be no problem, but as the only distinguishing
feature is the UDP port, we ended up with this messy solution.
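To illustrate the point above: on the wire, the only reliable marker that a
packet is PFCP is the UDP destination port (8805, per 3GPP TS 29.244); the
fields flower matches on (message type and SEID) sit in the PFCP header
behind that port. A minimal, illustrative sketch of that header layout
(this is not the kernel code; field offsets follow my reading of TS 29.244):

```python
import struct

PFCP_PORT = 8805  # IANA-assigned UDP port for PFCP (3GPP TS 29.244)

def parse_pfcp_header(payload: bytes) -> dict:
    """Parse the fixed PFCP header from a UDP payload.

    Octet 1: version (bits 6-8), spare, FO, MP, S flags
    Octet 2: message type
    Octets 3-4: message length (excluding the first 4 octets)
    Octets 5-12: SEID, present only when the S flag is set
    """
    flags, msg_type, length = struct.unpack_from("!BBH", payload, 0)
    version = flags >> 5
    s_flag = flags & 0x01
    seid = None
    if s_flag:
        (seid,) = struct.unpack_from("!Q", payload, 4)
    return {"version": version, "type": msg_type,
            "length": length, "seid": seid}

# Example: a session-level message (S=1) carrying SEID 0x1122334455667788;
# type 50 is Session Establishment Request in TS 29.244.
pkt = (bytes([0x21, 50, 0, 12])
       + (0x1122334455667788).to_bytes(8, "big")
       + b"\x00\x00\x01\x00")  # sequence number + spare octet
hdr = parse_pfcp_header(pkt)
```

Nothing before octet 1 distinguishes this from any other UDP payload, which
is why the port check, and hence the tunnel-style plumbing, was needed.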

> 
> > > I also see the following in the patch:
> > > 
> > > > +MODULE_DESCRIPTION("Interface driver for PFCP encapsulated traffic");
> > > 
> > > PFCP is not an encapsulation protocol for user plane traffic.  It is not
> > > a tunneling protocol.  GTP-U is the tunneling protocol, whose
> > > implementations (typically UPFs) are remote-controlled by PFCP.
> > > 
> > 
> > Agreed, it is done like that to simplify the implementation and reuse
> > the kernel software stack.
> 
> My comment was about the module description.  It claims something that
> makes no sense and that it does not actually implement.  Unless I'm not
> understanding what the code does, it is outright wrong and misleading.
> 
> Likewise, the comment on top of the drivers/net/pfcp.c file says:
> 
> > * PFCP according to 3GPP TS 29.244
> 
> While the code is in fact *not* implementing any part of TS 29.244.  The
> code is using the udp_tunnel infrastructure for something that's not a
> tunnel.  From what I can tell it is creating an in-kernel UDP socket
> (which can be done without any relation to udp_tunnel) and it is
> configuring hardware offload features in a NIC.  The fact that the
> payload of those UDP packets may be PFCP is circumstantial - nothing in
> the code actually implements even the tiniest bit of PFCP protocol
> parsing.
> 

Like I said, it is only there to allow tc to match on the PFCP-specific
fields.
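For context, the intended usage looks roughly like the following sketch.
The `ip link ... type pfcp` link type is what this series adds; the
`pfcp_opts` syntax and the redirect target are illustrative and depend on
the iproute2 version:

```shell
# Create the PFCP netdev that the flower classifier keys off
# (the pfcp module registers this rtnl link type)
ip link add pfcp0 type pfcp

# Match PFCP messages by message type and SEID and steer them;
# "pfcp_opts TYPE:SEID" below is a sketch of the flower option
tc qdisc add dev pfcp0 clsact
tc filter add dev pfcp0 ingress protocol ip flower \
    pfcp_opts 1:1122334455667788 \
    action mirred egress redirect dev eth0
```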

> > This is the way to allow the user to steer PFCP packets based on
> > specific opts (type and SEID) using tc flower. If you have a better
> > solution for that, I will probably agree with you and will be willing
> > to help you with a better implementation.
> 
> sadly, neither tc nor intel NIC hardware offload capabilities are my
> domain of expertise, so I am unable to propose details of a better
> solution.
> 
> > I assume the biggest problem here is with treating PFCP as a tunnel
> > (done for simplification and reuse) and the lack of any functionality
> > in the PFCP device
> 
> I don't think the kernel should introduce a net-device (here
> specifically a tunnel device) for something that is not a tunnel.  This
> device [nor the hardware accelerator it controls] will never encapsulate
> or decapsulate any PFCP packet, as PFCP is not a tunnel / encapsulation
> protocol.
> 

Good point. I hope to keep PFCP as the only such exception, and even to
remove it once we have a better solution for allowing tc to match PFCP
packets.

> > (moving the PFCP specification implementation into the kernel
> > probably isn't a good idea and may never be accepted).
> 
> I would agree, but that is a completely separate discussion.  Even if
> one ignores this, the hypothetical kernel PFCP implementation would not
> be a tunnel, and not be a net-device at all.  It would simply be some
> kind of in-kernel UDP socket parsing and responding to packets, similar
> to the in-kernel nfs daemon or whatever other in-kernel UDP users exist.
> 
> Also: The fact that you or we think an actual PFCP implementation would
> not be accepted in the kernel should *not* be an argument in favor of
> accepting something else into the kernel, call it PFCP and create tunnel
> devices for things that are not tunnels :)
> 
> > Offloading doesn't sound problematic. If there is a user that wants
> > to use it (to offload for better performance, or even to steer
> > between VFs based on PFCP-specific parts), why not allow that?
> 
> The papers I read convinced me that there is a use case.  I very much
> believe the use case is a work-around for a different problem (the
> inefficiency of the control->data plane protocol in this case), but
> my opinion on that doesn't matter here.  I do agree with you that there
> are apparently people who would want to make use of such a feature, and
> that there is nothing wrong with providing them means to do this.
> 
> However, the *how* is what I strongly object to.  Once again, I may miss
> some part of your architecture, and I am happy to be proven otherwise.
> 

That will be hard. As you said, it is done "just to configure hardware
offloading". Can't that be an argument here?

> But if all of this is *just* to configure hardware offloading in a nic,
> I don't think there should be net-devices or tunnels that never
> encap/decap a single packet or for protocols / use cases that clearly do
> not encap or decap packets.
> 
> I also think this sets a very bad precedent.  What about other protocols
> in the future?  Will we see new tunnels in the kernel for things that
> are not tunnels at all, every time there is some new protocol that gains
> hardware offloading capability in some NIC? Surely this kind of
> proliferation is not the kind of software architecture we want to have?
> 
> Once again, I do apologize for raising my concerns at such a late stage.
> I am not a kernel developer anymore these days, and I do not follow any
> of the related mailing lists.  It was pure coincidence that the net-next
> merge of some GTP improvements I was involved in specifying also
> contained the PFCP code and I started digging what this was all about.
> 

No problem; sorry for not CCing you on all the revisions of these
changes.

I probably can't convince you. We agree that the solution with an
additional netdev, treating PFCP as a tunnel, is ugly. It is done only
to allow tc flower matching (in hardware, with fallback to software).
Currently I don't have any idea for a more suitable solution.

I am willing to work on a better idea if anybody has one.

> Regards,
> 	Harald
> 

Thanks,
Michal

> -- 
> - Harald Welte <laforge@gnumonks.org>          https://laforge.gnumonks.org/
> ============================================================================
> "Privacy in residential applications is a desirable marketing option."
>                                                   (ETSI EN 300 175-7 Ch. A6)
