From: Jakub Kicinski <kuba@kernel.org>
To: Alexander Lobakin <aleksander.lobakin@intel.com>
Cc: Yunsheng Lin <linyunsheng@huawei.com>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Paolo Abeni <pabeni@redhat.com>,
	Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
	Michal Kubiak <michal.kubiak@intel.com>,
	Larysa Zaremba <larysa.zaremba@intel.com>,
	Alexander Duyck <alexanderduyck@fb.com>,
	"David Christensen" <drc@linux.vnet.ibm.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	"Paul Menzel" <pmenzel@molgen.mpg.de>, <netdev@vger.kernel.org>,
	<intel-wired-lan@lists.osuosl.org>,
	<linux-kernel@vger.kernel.org>,
	Amritha Nambiar <amritha.nambiar@intel.com>
Subject: Re: [PATCH net-next v6 08/12] libie: add Rx buffer management (via Page Pool)
Date: Mon, 11 Dec 2023 11:23:32 -0800	[thread overview]
Message-ID: <20231211112332.2abc94ae@kernel.org> (raw)
In-Reply-To: <03d7e8b0-8766-4f59-afd4-15b592693a83@intel.com>

On Mon, 11 Dec 2023 11:16:20 +0100 Alexander Lobakin wrote:
> Ideally, I'd like to pass the CPU ID this queue will run on and use
> cpu_to_node(), but currently there are no NUMA-aware allocations in the
> Intel drivers and Rx queues don't get the corresponding CPU ID at
> configuration time. I may revisit this later, but for now NUMA_NO_NODE
> is the best solution here.
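
For concreteness, a minimal sketch of what that NUMA-aware variant could
look like (the page_pool .nid field is the real knob here; the function
name, arguments and includes below are just illustrative glue):

  #include <linux/topology.h>         /* cpu_to_node(), NUMA_NO_NODE */
  #include <linux/dma-direction.h>    /* DMA_FROM_DEVICE */
  #include <net/page_pool/types.h>    /* struct page_pool_params, page_pool_create() */

  static struct page_pool *rxq_create_pool(struct device *dev,
                                           unsigned int size, int cpu)
  {
          struct page_pool_params pp = {
                  .order     = 0,
                  .pool_size = size,
                  /* fall back to NUMA_NO_NODE if the CPU isn't known yet */
                  .nid       = cpu >= 0 ? cpu_to_node(cpu) : NUMA_NO_NODE,
                  .dev       = dev,
                  .dma_dir   = DMA_FROM_DEVICE,
          };

          return page_pool_create(&pp);
  }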

Hm, I've been wondering about persistent IRQ mappings. Drivers
resetting the IRQ mapping on reconfiguration is a major PITA in
production clusters: you change the RSS hash and some NICs suddenly
forget their affinitization 🤯️

The connection with memory allocations changes the math on that a bit.

The question is really whether we add CPU <> NAPI config as a netdev
Netlink API or build around the generic IRQ affinity API. The latter
is definitely better from a "don't duplicate uAPI" perspective.
But we need to reset the queues and reallocate their state whenever
the mapping changes. And shutting down queues on

  echo $cpu > /../smp_affinity_list

seems moderately insane. Perhaps some middle ground exists.
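
One sketch of such a middle ground (illustrative only; the
irq_affinity_notify hooks exist today, but the "my_q_vector" structure
and the deferred realloc work are made up) would be for the driver to
watch affinity changes itself instead of us adding new uAPI:

  #include <linux/interrupt.h>   /* irq_set_affinity_notifier() */
  #include <linux/workqueue.h>
  #include <linux/cpumask.h>

  struct my_q_vector {                    /* placeholder driver structure */
          int irq;
          int home_cpu;
          struct work_struct realloc_work;
          struct irq_affinity_notify affinity_notify;
  };

  static void my_affinity_notify(struct irq_affinity_notify *notify,
                                 const cpumask_t *mask)
  {
          struct my_q_vector *qv = container_of(notify, struct my_q_vector,
                                                affinity_notify);

          /* Remember the new "home" CPU and defer the actual queue
           * re-allocation to the driver's own worker, which can take
           * whatever locks it needs (rtnl etc.).
           */
          qv->home_cpu = cpumask_first(mask);
          schedule_work(&qv->realloc_work);
  }

  static void my_affinity_release(struct kref *ref) { }

  static int my_register_affinity_hook(struct my_q_vector *qv)
  {
          qv->affinity_notify.notify  = my_affinity_notify;
          qv->affinity_notify.release = my_affinity_release;

          return irq_set_affinity_notifier(qv->irq, &qv->affinity_notify);
  }

Whether silently tearing queues down behind a sysfs write is any less
insane is of course exactly the open question above.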

Anyway, if you do find the cycles to tackle this - please try to do it
generically, not just for Intel? :)

Thread overview: 20+ messages
2023-12-07 17:19 [PATCH net-next v6 00/12] net: intel: start The Great Code Dedup + Page Pool for iavf Alexander Lobakin
2023-12-07 17:19 ` [PATCH net-next v6 01/12] page_pool: make sure frag API fields don't span between cachelines Alexander Lobakin
2023-12-07 17:20 ` [PATCH net-next v6 02/12] page_pool: don't use driver-set flags field directly Alexander Lobakin
2023-12-07 17:20 ` [PATCH net-next v6 03/12] net: intel: introduce Intel Ethernet common library Alexander Lobakin
2023-12-07 17:20 ` [PATCH net-next v6 04/12] iavf: kill "legacy-rx" for good Alexander Lobakin
2023-12-09  1:15   ` Jakub Kicinski
2023-12-07 17:20 ` [PATCH net-next v6 05/12] iavf: drop page splitting and recycling Alexander Lobakin
2023-12-07 17:20 ` [PATCH net-next v6 06/12] page_pool: constify some read-only function arguments Alexander Lobakin
2023-12-12 16:14   ` Ilias Apalodimas
2023-12-07 17:20 ` [PATCH net-next v6 07/12] page_pool: add DMA-sync-for-CPU inline helper Alexander Lobakin
2023-12-07 17:20 ` [PATCH net-next v6 08/12] libie: add Rx buffer management (via Page Pool) Alexander Lobakin
2023-12-08  9:28   ` Yunsheng Lin
2023-12-08 11:28     ` Yunsheng Lin
2023-12-11 10:16     ` Alexander Lobakin
2023-12-11 19:23       ` Jakub Kicinski [this message]
2023-12-13 11:23         ` [Intel-wired-lan] " Alexander Lobakin
2023-12-07 17:20 ` [PATCH net-next v6 09/12] iavf: pack iavf_ring more efficiently Alexander Lobakin
2023-12-07 17:20 ` [PATCH net-next v6 10/12] iavf: switch to Page Pool Alexander Lobakin
2023-12-07 17:20 ` [PATCH net-next v6 11/12] libie: add common queue stats Alexander Lobakin
2023-12-07 17:20 ` [PATCH net-next v6 12/12] iavf: switch queue stats to libie Alexander Lobakin
