Netdev List
From: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
To: Alexander Lobakin <aleksander.lobakin@intel.com>,
	 Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Cc: "Jakub Kicinski" <kuba@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	"Eric Dumazet" <edumazet@google.com>,
	"Paolo Abeni" <pabeni@redhat.com>,
	"Toke Høiland-Jørgensen" <toke@redhat.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"John Fastabend" <john.fastabend@gmail.com>,
	"Andrii Nakryiko" <andrii@kernel.org>,
	"Maciej Fijalkowski" <maciej.fijalkowski@intel.com>,
	"Stanislav Fomichev" <sdf@fomichev.me>,
	"Magnus Karlsson" <magnus.karlsson@intel.com>,
	nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp)
Date: Thu, 21 Nov 2024 10:43:12 -0500	[thread overview]
Message-ID: <673f55109d49_bb6d229431@willemb.c.googlers.com.notmuch> (raw)
In-Reply-To: <6af7f16f-2ce4-4584-a7dc-47116158d47a@intel.com>

Alexander Lobakin wrote:
> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
> Date: Tue, 19 Nov 2024 10:14:28 -0500
> 
> > Alexander Lobakin wrote:
> >> From: Willem De Bruijn <willemdebruijn.kernel@gmail.com>
> >> Date: Sat, 16 Nov 2024 10:31:08 -0500
> 
> [...]
> 
> >> libeth_xdp depends on every patch from the series. I don't know why you
> >> believe this might anyhow move faster. Almost the whole series got
> >> reviewed relatively quickly, except drivers/intel folder which people
> >> often tend to avoid.
> > 
> > Smaller focused series might have been merged already.
> 
> Merging half of this series wouldn't change the fact that the whole
> set won't fit into one window (which is what you want). Merging half
> of this series wouldn't allow sending the idpf XDP parts either.

I meant that smaller series make it easier to gather feedback and
iterate quickly. So multiple focused series can make the same
window.
 
> >  
> >> I remind you that the initial libeth + iavf series (11 patches) was
> >> baking on LKML for one year. Here, 2 Chapters went into the kernel
> >> within 2 windows, and only this one (clearly much bigger than the
> >> previous ones and containing only generic changes, in contrast to
> >> the previous ones, which had only /intel code) didn't follow this
> >> rule, which doesn't necessarily mean it will be stuck for too long.
> >>
> >> (+ I clearly mentioned several times that Chapter III will take longer
> >>  than the rest and each time you had no issues with that)
> > 
> > This is a misunderstanding. I need a working feature, on a predictable
> > timeline, in distro kernels.
> 
> A predictable timeline is not something upstream provides, at least
> when it comes to series which introduce a lot of generic changes /
> additions.
> A good example is PFCP offload in ice: the initial support was done and
> sent in spring 2022, then it took almost 2 years until it landed in the
> kernel. The first series was of good quality, but there'll always be
> discussions, different opinions, etc.
> 
> I've no idea what misunderstanding you are talking about, I quoted what
> Oregon told me quoting you. You seemed to ignore the email I sent with
> a per-patch breakdown of why none of them can be dropped from the
> upstream XDP support for idpf; at least I haven't seen any reply. I've
> no idea what they promise you each kernel release, but I haven't
> promised anything except sending the first working RFC by the end of
> 2023, which was done back then; promising that feature X will
> definitely land in upstream release Y would mean lying. There's always
> a risk that even a small series can miss 1-3 kernel releases.
> Take a look at Amit's comment. It involves additional work which I
> didn't expect. I'm planning to do it while the window is closed, as the
> suggestion is perfectly valid and I don't have any arguments against
> it. Feel free to go there and argue that the comment is not valid
> because you want the series merged ASAP, if you think that "argument"
> works upstream.
> 
> > 
> >>>
> >>> The first 3 patches are not essential to IDPF XDP + AF_XDP either.
> >>
> >> You don't seem to read the code. libeth_xdp won't even build without them.
> > 
> > Not as written, no, obviously.
> 
> If you want to compare with the OOT implementation for the 10th time,
> let me remind you that it differs from the upstream version of idpf a
> ton. The OOT driver still doesn't use Page Pool (without which idpf
> wouldn't have been accepted upstream at all), for example, which
> automatically drops the dependency on several big patches from this
> series. The OOT implementation performs X times worse than the
> upstream ice. It still forces header split to be turned off when an
> XDP prog is installed. It still uses hardcoded Rx buffer sizes. I
> could go on forever enumerating things from the OOT driver that are
> unacceptable upstream.

I was not referring to the OOT series. See below.

> > 
> >> I don't believe the model taken by some developers (not naming
> >> names) of "let's submit minimal changes and almost-draft code, I
> >> promise I'll create a todo list and will be polishing it within the
> >> next x years" works at all, let alone that it works better than
> >> sending polished, mature code (I hope it is).
> >>
> >>> The IDPF feature does not have to depend on them.
> >>>
> >>> Does not matter for upstream, but for the purpose of backporting this
> >>> to distro kernels, it helps if the driver feature minimizes dependency
> >>> on core kernel API changes. If patch 19 can be made to work without
> >>
> >> OOT style of thinking.
> >> Minimizing core changes == artificially limiting optimization and
> >> functionality potential.
> >> New kernels > LTSes, and especially custom kernels which receive
> >> non-upstream (== not officially supported by the community) feature
> >> backports. Upstream shouldn't sacrifice anything in favor of those;
> >> that way we'd end up one day sacrificing stuff for out-of-tree
> >> drivers (which I know some people already try to do).
> > 
> > Opinionated positions. Nice if you have unlimited time.
> 
> I clearly remember Kuba's position that he wants good quality of
> networking core and driver code. I'm pretty sure every netdev maintainer
> has the same position. Again, feel free to argue with them, saying they
> must take whatever trash is sent to LKML because customer X wants it
> backported to his custom kernel Y ASAP.

Not asking for massive changes, just suggesting a different patch
order. That does not affect code quality.

The core feature set does not depend on loop unrolling, constification,
removal of xdp_frame::mem.id, etcetera. These could probably even be
reviewed and merged more quickly, independently of this driver change.

Within IDPF, the same goes for per-queue(set) link up/down and chunked
virtchannel messages. A deeper analysis can probably carve out
other changes not critical to landing XDP/XSK (sw marker removal).

> > 
> >>> some of the changes in 1..18, that makes it more robust from that PoV.
> >>
> >> No, it can't. I thought people first read the code and only then
> >> comment; otherwise it's just a waste of time.
> 
> Thanks,
> Olek




Thread overview: 41+ messages
2024-11-13 15:24 [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp) Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 01/19] jump_label: export static_key_slow_{inc,dec}_cpuslocked() Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 02/19] skbuff: allow 2-4-argument skb_frag_dma_map() Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 03/19] unroll: add generic loop unroll helpers Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 04/19] bpf, xdp: constify some bpf_prog * function arguments Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 05/19] xdp, xsk: constify read-only arguments of some static inline helpers Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 06/19] xdp: allow attaching already registered memory model to xdp_rxq_info Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 07/19] xdp: register system page pool as an XDP memory model Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 08/19] page_pool: make page_pool_put_page_bulk() actually handle array of pages Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 09/19] page_pool: allow mixing PPs within one bulk Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 10/19] xdp: get rid of xdp_frame::mem.id Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 11/19] xdp: add generic xdp_buff_add_frag() Alexander Lobakin
2024-11-14 14:07   ` Ido Schimmel
2024-11-16  2:40   ` Jakub Kicinski
2024-11-19 12:03     ` Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 12/19] xdp: add generic xdp_build_skb_from_buff() Alexander Lobakin
2024-11-14 15:06   ` Ido Schimmel
2024-11-14 15:16     ` Ido Schimmel
2024-11-15 14:34       ` Alexander Lobakin
2024-11-17 12:42         ` Amit Cohen
2024-11-19 12:05           ` Alexander Lobakin
2024-11-26 16:38           ` Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 13/19] xsk: align &xdp_buff_xsk harder Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 14/19] xsk: allow attaching XSk pool via xdp_rxq_info_reg_mem_model() Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 15/19] xsk: make xsk_buff_add_frag really add a frag via __xdp_buff_add_frag() Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 16/19] xsk: add generic XSk &xdp_buff -> skb conversion Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 17/19] xsk: add helper to get &xdp_desc's DMA and meta pointer in one go Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 18/19] libeth: support native XDP and register memory model Alexander Lobakin
2024-11-13 15:24 ` [PATCH net-next v5 19/19] libeth: add a couple of XDP helpers (libeth_xdp) Alexander Lobakin
2024-11-16  2:43 ` [PATCH net-next v5 00/19] xdp: a fistful of generic changes (+libeth_xdp) Jakub Kicinski
2024-11-16 15:31   ` Willem de Bruijn
2024-11-19 12:28     ` Alexander Lobakin
2024-11-19 15:14       ` Willem de Bruijn
2024-11-20 15:23         ` Alexander Lobakin
2024-11-21 15:43           ` Willem de Bruijn [this message]
2024-11-21 18:02             ` Alexander Lobakin
2024-11-21 18:42               ` Willem de Bruijn
2024-11-19 12:06   ` Alexander Lobakin
2024-11-19 14:25     ` Jakub Kicinski
2024-11-20 14:40       ` Alexander Lobakin
2024-11-21 19:26         ` Jacob Keller
