public inbox for bpf@vger.kernel.org
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Cc: <intel-wired-lan@lists.osuosl.org>,
	Michal Kubiak <michal.kubiak@intel.com>,
	Tony Nguyen <anthony.l.nguyen@intel.com>,
	"Przemek Kitszel" <przemyslaw.kitszel@intel.com>,
	Andrew Lunn <andrew+netdev@lunn.ch>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"Jesper Dangaard Brouer" <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	Simon Horman <horms@kernel.org>, <bpf@vger.kernel.org>,
	<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next 06/16] idpf: a use saner limit for default number of queues to allocate
Date: Wed, 12 Mar 2025 18:22:06 +0100	[thread overview]
Message-ID: <ae42f7a3-10dd-4b5a-8bd0-fbab0148a419@intel.com> (raw)
In-Reply-To: <Z8rLLwZlRACyA49U@boxer>

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Date: Fri, 7 Mar 2025 11:32:15 +0100

> On Wed, Mar 05, 2025 at 05:21:22PM +0100, Alexander Lobakin wrote:
>> Currently, the maximum number of queues available for one vport is 16.
>> This is hardcoded, but then the function calculating the optimal number
>> of queues takes min(16, num_online_cpus()).
>> On order to be able to allocate more queues, which will be then used for
> 
> nit: s/On/In

Also "use a saner limit", not "a use saner limit" in the subject =\

> 
>> XDP, stop hardcoding 16 and rely on what the device gives us. Instead of
>> num_online_cpus(), which is considered suboptimal since at least 2013,
>> use netif_get_num_default_rss_queues() to still have free queues in the
>> pool.
> 
> Should we update older drivers as well?

That would be good.

For idpf, this is particularly important since the current logic eats
128 Tx queues for skb traffic on my Xeon out of 256 available by default
(per vport). On a 256-thread system, it would eat the whole limit,
leaving nothing for XDP >_<
ice doesn't have a per-port limit IIRC.

> 
>> nr_cpu_ids number of Tx queues are needed only for lockless XDP sending,
>> the regular stack doesn't benefit from that anyhow.
>> On a 128-thread Xeon, this now gives me 32 regular Tx queues and leaves
>> 224 free for XDP (128 of which will handle XDP_TX, .ndo_xdp_xmit(), and
>> XSk xmit when enabled).

Thanks,
Olek


Thread overview: 59+ messages
2025-03-05 16:21 [PATCH net-next 00/16] idpf: add XDP support Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 01/16] libeth: convert to netmem Alexander Lobakin
2025-03-06  0:13   ` Mina Almasry
2025-03-11 17:22     ` Alexander Lobakin
2025-03-11 17:43       ` Mina Almasry
2025-03-05 16:21 ` [PATCH net-next 02/16] libeth: support native XDP and register memory model Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 03/16] libeth: add a couple of XDP helpers (libeth_xdp) Alexander Lobakin
2025-03-11 14:05   ` Maciej Fijalkowski
2025-03-17 15:26     ` Alexander Lobakin
2025-03-19 16:19       ` Maciej Fijalkowski
2025-04-01 13:11         ` Alexander Lobakin
2025-04-08 13:38           ` Alexander Lobakin
2025-04-08 13:22     ` Alexander Lobakin
2025-04-08 13:51       ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 04/16] libeth: add XSk helpers Alexander Lobakin
2025-03-07 10:15   ` Maciej Fijalkowski
2025-03-12 17:03     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 05/16] idpf: fix Rx descriptor ready check barrier in splitq Alexander Lobakin
2025-03-07 10:17   ` Maciej Fijalkowski
2025-03-12 17:10     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 06/16] idpf: a use saner limit for default number of queues to allocate Alexander Lobakin
2025-03-07 10:32   ` Maciej Fijalkowski
2025-03-12 17:22     ` Alexander Lobakin [this message]
2025-03-05 16:21 ` [PATCH net-next 07/16] idpf: link NAPIs to queues Alexander Lobakin
2025-03-07 10:28   ` Eric Dumazet
2025-03-12 17:16     ` Alexander Lobakin
2025-03-18 17:10       ` Alexander Lobakin
2025-03-07 10:51   ` Maciej Fijalkowski
2025-03-12 17:25     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 08/16] idpf: make complq cleaning dependent on scheduling mode Alexander Lobakin
2025-03-07 11:11   ` Maciej Fijalkowski
2025-03-13 16:16     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 09/16] idpf: remove SW marker handling from NAPI Alexander Lobakin
2025-03-07 11:42   ` Maciej Fijalkowski
2025-03-13 16:50     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 10/16] idpf: add support for nointerrupt queues Alexander Lobakin
2025-03-07 12:10   ` Maciej Fijalkowski
2025-03-13 16:19     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 11/16] idpf: prepare structures to support XDP Alexander Lobakin
2025-03-07  1:12   ` Jakub Kicinski
2025-03-12 14:00     ` [Intel-wired-lan] " Alexander Lobakin
2025-03-07 13:27   ` Maciej Fijalkowski
2025-03-17 14:50     ` Alexander Lobakin
2025-03-19 16:29       ` Maciej Fijalkowski
2025-04-08 13:42         ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 12/16] idpf: implement XDP_SETUP_PROG in ndo_bpf for splitq Alexander Lobakin
2025-03-07 14:16   ` Maciej Fijalkowski
2025-03-17 14:58     ` Alexander Lobakin
2025-03-19 16:23       ` Maciej Fijalkowski
2025-03-05 16:21 ` [PATCH net-next 13/16] idpf: use generic functions to build xdp_buff and skb Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 14/16] idpf: add support for XDP on Rx Alexander Lobakin
2025-03-11 15:50   ` Maciej Fijalkowski
2025-04-08 13:28     ` Alexander Lobakin
2025-04-08 15:53       ` Maciej Fijalkowski
2025-03-05 16:21 ` [PATCH net-next 15/16] idpf: add support for .ndo_xdp_xmit() Alexander Lobakin
2025-03-11 16:08   ` Maciej Fijalkowski
2025-04-08 13:31     ` Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 16/16] idpf: add XDP RSS hash hint Alexander Lobakin
2025-03-11 15:28 ` [PATCH net-next 00/16] idpf: add XDP support Alexander Lobakin
