From: Jakub Kicinski <kuba@kernel.org>
To: Mina Almasry <almasrymina@google.com>
Cc: davem@davemloft.net, netdev@vger.kernel.org, edumazet@google.com,
pabeni@redhat.com, hawk@kernel.org, ilias.apalodimas@linaro.org
Subject: Re: [PATCH net-next 05/15] net: page_pool: record pools per netdev
Date: Wed, 25 Oct 2023 13:17:40 -0700 [thread overview]
Message-ID: <20231025131740.489fdfcf@kernel.org> (raw)
In-Reply-To: <CAHS8izOTzLVxQ_rYt1vyhb=tgs2GAtuSZUWkZ183=7J3wEEzjQ@mail.gmail.com>
On Wed, 25 Oct 2023 12:56:44 -0700 Mina Almasry wrote:
> > +#if IS_ENABLED(CONFIG_PAGE_POOL)
> > + /** @page_pools: page pools created for this netdevice */
> > + struct hlist_head page_pools;
> > +#endif
>
> I wonder if this per netdev field is really necessary. Is it not
> possible to do the same simply looping over the (global) page_pools
> xarray? Or is that too silly of an idea. I guess on some systems you
> may end up with 100s or 1000s of active or orphaned page pools and
> then globally iterating over the whole page_pools xarray can be really
> slow..
I think we want the per-netdev hlist either way: on netdev
unregistration we need to find that netdev's pools to clear the
pointers. At that point we may as well use the same list to dump
the pools. I don't see a strong reason to prefer one approach over
the other.
Note that other objects like napi and queues (WIP patches) also walk
netdevs and dump sub-objects from them.
> > @@ -48,6 +49,7 @@ struct pp_alloc_cache {
> > * @pool_size: size of the ptr_ring
> >  * @nid: NUMA node id to allocate pages from
> > * @dev: device, for DMA pre-mapping purposes
> > + * @netdev: netdev this pool will serve (leave as NULL if none or multiple)
>
> Is this an existing use case (page_pools that serve null or multiple
> netdevs), or a future use case? My understanding is that currently
> page_pools serve at most 1 rx-queue. Spot checking a few drivers that
> seems to be true.
I think I saw one embedded driver for a switch-like device whose
queues service all ports, and therefore all netdevs.
We'd need some help from people using such devices to figure out what
the right way to represent them is, and what extra bits of
functionality they need.
> I'm guessing 1 is _always_ loopback?
AFAIK, yes. I should probably use LOOPBACK_IFINDEX to make it clearer.
Thread overview: 39+ messages
2023-10-24 16:02 [PATCH net-next 00/15] net: page_pool: add netlink-based introspection Jakub Kicinski
2023-10-24 16:02 ` [PATCH net-next 01/15] net: page_pool: split the page_pool_params into fast and slow Jakub Kicinski
2023-11-09 8:13 ` Ilias Apalodimas
2023-10-24 16:02 ` [PATCH net-next 02/15] net: page_pool: avoid touching slow on the fastpath Jakub Kicinski
2023-11-09 9:00 ` Ilias Apalodimas
2023-10-24 16:02 ` [PATCH net-next 03/15] net: page_pool: factor out uninit Jakub Kicinski
2023-10-25 18:33 ` Mina Almasry
2023-10-24 16:02 ` [PATCH net-next 04/15] net: page_pool: id the page pools Jakub Kicinski
2023-10-25 18:49 ` Mina Almasry
2023-11-09 9:21 ` Ilias Apalodimas
2023-11-09 16:22 ` Jakub Kicinski
2023-11-09 16:48 ` Ilias Apalodimas
2023-10-24 16:02 ` [PATCH net-next 05/15] net: page_pool: record pools per netdev Jakub Kicinski
2023-10-24 17:31 ` David Ahern
2023-10-24 17:49 ` Jakub Kicinski
2023-10-24 19:19 ` David Ahern
2023-10-24 19:45 ` Jakub Kicinski
2023-10-25 19:56 ` Mina Almasry
2023-10-25 20:17 ` Jakub Kicinski [this message]
2023-11-09 3:28 ` Mina Almasry
2023-10-24 16:02 ` [PATCH net-next 06/15] net: page_pool: stash the NAPI ID for easier access Jakub Kicinski
2023-10-24 16:02 ` [PATCH net-next 07/15] eth: link netdev to page_pools in drivers Jakub Kicinski
2023-11-09 9:11 ` Ilias Apalodimas
2023-11-09 16:26 ` Jakub Kicinski
2023-11-09 16:51 ` Ilias Apalodimas
2023-10-24 16:02 ` [PATCH net-next 08/15] net: page_pool: add nlspec for basic access to page pools Jakub Kicinski
2023-10-24 16:02 ` [PATCH net-next 09/15] net: page_pool: implement GET in the netlink API Jakub Kicinski
2023-10-25 10:51 ` kernel test robot
2023-10-25 22:08 ` kernel test robot
2023-10-24 16:02 ` [PATCH net-next 10/15] net: page_pool: add netlink notifications for state changes Jakub Kicinski
2023-10-24 16:02 ` [PATCH net-next 11/15] net: page_pool: report amount of memory held by page pools Jakub Kicinski
2023-10-24 16:02 ` [PATCH net-next 12/15] net: page_pool: report when page pool was destroyed Jakub Kicinski
2023-11-09 17:05 ` Dragos Tatulea
2023-10-24 16:02 ` [PATCH net-next 13/15] net: page_pool: expose page pool stats via netlink Jakub Kicinski
2023-10-25 13:50 ` kernel test robot
2023-10-24 16:02 ` [PATCH net-next 14/15] net: page_pool: mute the periodic warning for visible page pools Jakub Kicinski
2023-10-24 16:02 ` [PATCH net-next 15/15] tools: ynl: add sample for getting page-pool information Jakub Kicinski
2023-11-09 8:11 ` [PATCH net-next 00/15] net: page_pool: add netlink-based introspection Ilias Apalodimas
2023-11-09 16:14 ` Jakub Kicinski