From: Michal Kubiak <michal.kubiak@intel.com>
To: Jesse Brandeburg <jbrandeburg@cloudflare.com>
Cc: <intel-wired-lan@lists.osuosl.org>,
<maciej.fijalkowski@intel.com>, <aleksander.lobakin@intel.com>,
<przemyslaw.kitszel@intel.com>,
<dawid.osuchowski@linux.intel.com>, <jacob.e.keller@intel.com>,
<netdev@vger.kernel.org>, <kernel-team@cloudflare.com>
Subject: Re: [PATCH iwl-net 0/3] Fix XDP loading on machines with many CPUs
Date: Wed, 7 May 2025 10:00:59 +0200
Message-ID: <aBsTO4_LZoNniFS5@localhost.localdomain>
In-Reply-To: <b36a7cb6-582b-422d-82ce-98dc8985fd0d@cloudflare.com>
On Tue, May 06, 2025 at 10:31:59PM -0700, Jesse Brandeburg wrote:
> On 4/22/25 8:36 AM, Michal Kubiak wrote:
> > Hi,
> >
> > Some of our customers have reported a crash problem when trying to load
> > the XDP program on machines with a large number of CPU cores. After
> > extensive debugging, it became clear that the root cause of the problem
> > lies in the Tx scheduler implementation, which does not seem to be able
> > to handle the creation of a large number of Tx queues (even though this
> > number does not exceed the number of available queues reported by the
> > FW).
> > This series addresses this problem.
>
>
> Hi Michal,
>
> Unfortunately, this version of the series seems to reintroduce the original
> problem, error -22.
Hi Jesse,
Thanks for testing and reporting!
I will take a look at the problem and try to reproduce it locally. I also
have a few questions inline.
First, wasn't your original problem the failure with error -5 (-EIO)? Or did
you see both (-5 and -22, i.e. -EIO and -EINVAL), depending on the
scenario/environment?
I am asking because these two errors seem to occur at different
initialization stages of the Tx scheduler. Of course, the series was intended
to address both of these issues.
>
> I double-checked the patches; they looked like they were applied in our test
> version 2025.5.8 build, which contained a 6.12.26 kernel with this series
> applied (all 3 patches).
>
> Our setup reports a max of 252 combined queues but runs 384 CPUs. By default
> it loads an XDP program, then reduces the number of queues to 192 using
> ethtool. After that we get error -22 and the link is down.
>
To be honest, I did not test the scenario in which the number of queues is
reduced while the XDP program is running. This is the first thing I will check.
Can you please confirm that you did that step on both the current
and the draft version of the series?
It would also be interesting to check what happens if the queue number is reduced
before loading the XDP program.
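For reference, below is the rough command sequence I plan to use when trying
to reproduce this locally (the interface name and the XDP object file are just
placeholders for whatever your setup uses):

  # scenario A: load the XDP program first, then reduce the queue count
  ip link set dev eth0 xdpdrv obj xdp_prog.o sec xdp
  ethtool -L eth0 combined 192

  # scenario B: reduce the queue count first, then load the XDP program
  ethtool -L eth0 combined 192
  ip link set dev eth0 xdpdrv obj xdp_prog.o sec xdp

If scenario B works while scenario A fails, that would point at the queue
reconfiguration path with the XDP rings already allocated, rather than at the
initial scheduler tree build.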
> Sorry to bring some bad news, and I know it took a while; it is a bit of a
> process to test this in our lab.
>
> The original version you had sent us was working fine when we tested it, so
> the problem seems to be between those two versions. I suppose it could be
> possible (but unlikely because I used git to apply the patches) that there
> was something wrong with the source code, but I sincerely doubt it as the
> patches had applied cleanly.
The current series contains mostly cosmetic fixes compared to the draft, but a
regression is still possible, so I will take a look at the diff.
>
> We are only able to test 6.12.y or 6.6.y stable variants of the kernel if
> you want to make a test version of a fixed series for us to try.
>
> Thanks,
>
> Jesse
>
I will keep you updated on my findings.
Thanks,
Michal
Thread overview: 10+ messages
2025-04-22 15:36 [PATCH iwl-net 0/3] Fix XDP loading on machines with many CPUs Michal Kubiak
2025-04-22 15:36 ` [PATCH iwl-net 1/3] ice: fix Tx scheduler error handling in XDP callback Michal Kubiak
2025-04-22 17:02 ` [Intel-wired-lan] " Loktionov, Aleksandr
2025-04-22 15:36 ` [PATCH iwl-net 2/3] ice: create new Tx scheduler nodes for new queues only Michal Kubiak
2025-04-22 15:36 ` [PATCH iwl-net 3/3] ice: fix rebuilding the Tx scheduler tree for large queue counts Michal Kubiak
2025-05-07 5:31 ` [PATCH iwl-net 0/3] Fix XDP loading on machines with many CPUs Jesse Brandeburg
2025-05-07 8:00 ` Michal Kubiak [this message]
2025-05-08 5:51 ` Jesse Brandeburg
2025-05-08 14:29 ` Michal Kubiak
2025-05-09 10:07 ` Michal Kubiak