Intel-Wired-Lan Archive on lore.kernel.org
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: Paul Menzel <pmenzel@molgen.mpg.de>,
	Maciej Fijalkowski <maciej.fijalkowski@intel.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Amritha Nambiar <amritha.nambiar@intel.com>,
	Larysa Zaremba <larysa.zaremba@intel.com>,
	netdev@vger.kernel.org, Alexander Duyck <alexanderduyck@fb.com>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	linux-kernel@vger.kernel.org, Eric Dumazet <edumazet@google.com>,
	Michal Kubiak <michal.kubiak@intel.com>,
	intel-wired-lan@lists.osuosl.org,
	Yunsheng Lin <linyunsheng@huawei.com>,
	David Christensen <drc@linux.vnet.ibm.com>,
	Paolo Abeni <pabeni@redhat.com>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Intel-wired-lan] [PATCH net-next v6 08/12] libie: add Rx buffer management (via Page Pool)
Date: Wed, 13 Dec 2023 12:23:43 +0100	[thread overview]
Message-ID: <6431a069-6fc5-47ad-9519-868ae84b4a1a@intel.com> (raw)
In-Reply-To: <20231211112332.2abc94ae@kernel.org>

From: Jakub Kicinski <kuba@kernel.org>
Date: Mon, 11 Dec 2023 11:23:32 -0800

> On Mon, 11 Dec 2023 11:16:20 +0100 Alexander Lobakin wrote:
>> Ideally, I'd like to pass the CPU ID this queue will run on and use
>> cpu_to_node(), but currently there are no NUMA-aware allocations in
>> the Intel drivers and Rx queues don't get the corresponding CPU ID
>> when they are configured. I may revisit this later, but for now
>> NUMA_NO_NODE is the best option here.
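
[A NUMA-aware setup along these lines could look roughly as below. This
is a hedged sketch, not code from the patch series: the helper name and
the way the driver learns `cpu` are assumptions, while the
`page_pool_params` fields match the in-tree page_pool API.]

```c
/* Illustrative only: allocate a queue's Rx page_pool on the NUMA node
 * of the CPU its NAPI will run on, falling back to NUMA_NO_NODE when
 * the CPU is unknown (the current situation described above).
 */
static struct page_pool *rx_create_pool(struct device *dev, int cpu,
					unsigned int ring_size)
{
	struct page_pool_params pp = {
		.order		= 0,
		.pool_size	= ring_size,
		/* NUMA_NO_NODE lets the allocator pick a node; with a
		 * known CPU we can pin buffers to its node instead.
		 */
		.nid		= cpu >= 0 ? cpu_to_node(cpu) : NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp);
}
```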
> 
> Hm, I've been wondering about persistent IRQ mappings. Drivers
> resetting IRQ mapping on reconfiguration is a major PITA in production
> clusters. You change the RSS hash and some NICs suddenly forget
> affinitization 🤯️
> 
> The connection with memory allocations changes the math on that a bit.
> 
> The question is really whether we add CPU <> NAPI config as a netdev
> Netlink API or build around the generic IRQ affinity API. The latter
> is definitely better from a "don't duplicate uAPI" perspective.
> But we need to reset the queues and reallocate their state when 
> the mapping is changed. And shutting down queues on 
> 
>   echo $cpu > /../smp_affinity_list
> 
> seems moderately insane. Perhaps some middle-ground exists.
> 
> Anyway, if you do find cycles to tackle this - pls try to do it
> generically not just for Intel? :)
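
[One possible "middle ground" for the generic-IRQ-affinity route is an
affinity notifier, sketched below. irq_set_affinity_notifier() and
struct irq_affinity_notify are real genirq APIs; the driver queue
struct and the reallocation helper are hypothetical, made up purely to
illustrate reacting to `echo $cpu > /../smp_affinity_list` without a
new Netlink API.]

```c
/* Illustrative driver-side affinity notifier: when userspace changes
 * the IRQ's affinity, move the queue's memory to the new CPU's node
 * instead of shutting the queue down outright.
 */
struct rxq_affinity {
	struct irq_affinity_notify notify;
	struct my_rx_queue *rxq;	/* hypothetical driver queue */
};

static void rxq_affinity_notify(struct irq_affinity_notify *notify,
				const cpumask_t *mask)
{
	struct rxq_affinity *aff =
		container_of(notify, struct rxq_affinity, notify);
	int node = cpu_to_node(cpumask_first(mask));

	/* Reallocate the ring and page_pool on the new node; this is
	 * the "reset the queues" cost discussed above.
	 */
	my_rx_queue_realloc_on_node(aff->rxq, node);
}

static void rxq_affinity_release(struct kref *ref)
{
	kfree(container_of(ref, struct rxq_affinity, notify.kref));
}
```

[The driver would fill in .notify/.release and register with
irq_set_affinity_notifier(irq, &aff->notify) at queue setup, and pass
NULL at teardown.]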

Sounds good, adding to my fathomless backlog :>

Thanks,
Olek


Thread overview: 21+ messages
2023-12-07 17:19 [Intel-wired-lan] [PATCH net-next v6 00/12] net: intel: start The Great Code Dedup + Page Pool for iavf Alexander Lobakin
2023-12-07 17:19 ` [Intel-wired-lan] [PATCH net-next v6 01/12] page_pool: make sure frag API fields don't span between cachelines Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 02/12] page_pool: don't use driver-set flags field directly Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 03/12] net: intel: introduce Intel Ethernet common library Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 04/12] iavf: kill "legacy-rx" for good Alexander Lobakin
2023-12-09  1:15   ` Jakub Kicinski
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 05/12] iavf: drop page splitting and recycling Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 06/12] page_pool: constify some read-only function arguments Alexander Lobakin
2023-12-12 16:08   ` Ilias Apalodimas
2023-12-12 16:14   ` Ilias Apalodimas
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 07/12] page_pool: add DMA-sync-for-CPU inline helper Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 08/12] libie: add Rx buffer management (via Page Pool) Alexander Lobakin
2023-12-08  9:28   ` Yunsheng Lin
2023-12-08 11:28     ` Yunsheng Lin
2023-12-11 10:16     ` Alexander Lobakin
2023-12-11 19:23       ` Jakub Kicinski
2023-12-13 11:23         ` Alexander Lobakin [this message]
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 09/12] iavf: pack iavf_ring more efficiently Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 10/12] iavf: switch to Page Pool Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 11/12] libie: add common queue stats Alexander Lobakin
2023-12-07 17:20 ` [Intel-wired-lan] [PATCH net-next v6 12/12] iavf: switch queue stats to libie Alexander Lobakin
