From: Michal Kubiak <michal.kubiak@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: maciej.fijalkowski@intel.com, aleksander.lobakin@intel.com,
	przemyslaw.kitszel@intel.com, dawid.osuchowski@linux.intel.com,
	jacob.e.keller@intel.com, netdev@vger.kernel.org,
	Michal Kubiak <michal.kubiak@intel.com>
Subject: [PATCH iwl-net 0/3] Fix XDP loading on machines with many CPUs
Date: Tue, 22 Apr 2025 17:36:56 +0200
Message-ID: <20250422153659.284868-1-michal.kubiak@intel.com> (raw)

Hi,

Some of our customers have reported crashes when trying to load an XDP
program on machines with a large number of CPU cores. After extensive
debugging, it became clear that the root cause lies in the Tx scheduler
implementation, which cannot handle the creation of a large number of
Tx queues (even though this number does not exceed the number of
available queues reported by the FW).
This series addresses that problem.

First of all, the XDP callback should not crash even if the Tx scheduler
returns an error, so Patch #1 fixes the error handling and makes the
XDP callback fail gracefully.
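As an illustration of the intended behavior, below is a minimal,
self-contained sketch of the unwind-on-error pattern. All names here
(xdp_setup, cfg_tx_sched, the arbitrary queue cap) are hypothetical
stand-ins for this sketch, not symbols from the actual ice driver:

```c
/* Hypothetical state for an XDP setup attempt; not the ice driver's
 * real structures. */
struct xdp_setup_state {
	int rings_mapped;	/* 1 while XDP rings are mapped */
	int sched_cfg_done;	/* 1 once the Tx scheduler accepted the queues */
};

/* Stand-in for the Tx scheduler call: rejects queue counts above an
 * arbitrary cap, mimicking the "error: -22" failure from the report. */
static int cfg_tx_sched(struct xdp_setup_state *s, int nqueues, int cap)
{
	if (nqueues > cap)
		return -22;
	s->sched_cfg_done = 1;
	return 0;
}

static void unmap_xdp_rings(struct xdp_setup_state *s)
{
	s->rings_mapped = 0;
}

/* The point of the fix: when the scheduler call fails, roll back the
 * steps already taken and propagate the error, instead of crashing or
 * leaving the device half-configured. */
int xdp_setup(struct xdp_setup_state *s, int nqueues, int cap)
{
	int err;

	s->rings_mapped = 1;
	err = cfg_tx_sched(s, nqueues, cap);
	if (err) {
		unmap_xdp_rings(s);	/* unwind before returning */
		return err;
	}
	return 0;
}
```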
Patch #2 fixes the problem where the Tx scheduler tries to create nodes
for all requested queues, even though nodes for some of them have
already been added to the scheduler tree.
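The rule Patch #2 enforces reduces to simple delta arithmetic; here is
a hedged model of it (the function name is made up for illustration,
not a driver symbol):

```c
/* Model of the "new queues only" rule: when the queue count grows,
 * only the delta of scheduler nodes should be created; nodes that
 * already exist in the tree are kept, not added again. */
static int sched_nodes_to_add(int num_needed, int num_existing)
{
	if (num_needed <= num_existing)
		return 0;	/* tree is already large enough */
	return num_needed - num_existing;
}
```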
Finally, Patch #3 improves the Tx scheduler tree rebuild algorithm so
that another VSI support node is added when one is necessary to
support all requested Tx rings.
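The reasoning behind that rebuild step is a ceiling division: a single
VSI support node can only parent a limited number of children, so a
large enough ring count needs more than one such node. The sketch below
models only the arithmetic; the max-children limit passed in is an
assumption for illustration, not a figure from the ice hardware
documentation:

```c
/* How many VSI support nodes are needed to parent num_rings Tx rings,
 * given that each node can hold at most max_children children.
 * Plain ceiling division; not driver code. */
static int vsi_support_nodes_needed(int num_rings, int max_children)
{
	return (num_rings + max_children - 1) / max_children;
}
```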

As testing hints, I include sample failure scenarios below:
  1) Number of LAN Tx/Rx queue pairs: 128
     Number of requested XDP queues: >= 321 and <= 640
     Error message:
        Failed to set LAN Tx queue context, error: -22
  2) Number of LAN Tx/Rx queue pairs: 128
     Number of requested XDP queues: >= 641
     Error message:
        Failed VSI LAN queue config for XDP, error: -5

Thanks,
Michal


Michal Kubiak (3):
  ice: fix Tx scheduler error handling in XDP callback
  ice: create new Tx scheduler nodes for new queues only
  ice: fix rebuilding the Tx scheduler tree for large queue counts

 drivers/net/ethernet/intel/ice/ice_main.c  |  47 +++++++---
 drivers/net/ethernet/intel/ice/ice_sched.c | 103 +++++++++++++++++++--
 2 files changed, 126 insertions(+), 24 deletions(-)

-- 
2.45.2



Thread overview: 10+ messages
2025-04-22 15:36 Michal Kubiak [this message]
2025-04-22 15:36 ` [PATCH iwl-net 1/3] ice: fix Tx scheduler error handling in XDP callback Michal Kubiak
2025-04-22 17:02   ` [Intel-wired-lan] " Loktionov, Aleksandr
2025-04-22 15:36 ` [PATCH iwl-net 2/3] ice: create new Tx scheduler nodes for new queues only Michal Kubiak
2025-04-22 15:36 ` [PATCH iwl-net 3/3] ice: fix rebuilding the Tx scheduler tree for large queue counts Michal Kubiak
2025-05-07  5:31 ` [PATCH iwl-net 0/3] Fix XDP loading on machines with many CPUs Jesse Brandeburg
2025-05-07  8:00   ` Michal Kubiak
2025-05-08  5:51     ` Jesse Brandeburg
2025-05-08 14:29       ` Michal Kubiak
2025-05-09 10:07         ` Michal Kubiak
