From: "Toke Høiland-Jørgensen" <toke@redhat.com>
To: Mina Almasry <almasrymina@google.com>, Jakub Kicinski <kuba@kernel.org>
Cc: davem@davemloft.net, netdev@vger.kernel.org, edumazet@google.com,
	pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org,
	hawk@kernel.org, ilias.apalodimas@linaro.org,
	asml.silence@gmail.com, kaiyuanz@google.com, willemb@google.com,
	mkarsten@uwaterloo.ca, jdamato@fastly.com
Subject: Re: [PATCH net] net: page_pool: don't try to stash the napi id
Date: Fri, 24 Jan 2025 23:18:08 +0100
Message-ID: <87r04rq2jj.fsf@toke.dk>
In-Reply-To: <CAHS8izNdpe7rDm7K4zn4QU-6VqwMwf-LeOJrvXOXhpaikY+tLg@mail.gmail.com>

Mina Almasry <almasrymina@google.com> writes:

> On Thu, Jan 23, 2025 at 3:16 PM Jakub Kicinski <kuba@kernel.org> wrote:
>>
>> Page ppol tried to cache the NAPI ID in page pool info to avoid
>
> Page pool
>
>> having a dependency on the life cycle of the NAPI instance.
>> Since commit under Fixes the NAPI ID is not populated until
>> napi_enable() and there's a good chance that page pool is
>> created before NAPI gets enabled.
>>
>> Protect the NAPI pointer with the existing page pool mutex,
>> the reading path already holds it. napi_id itself we need
>
> The reading paths in page_pool.c don't hold the lock, no? Only the
> reading paths in page_pool_user.c seem to do.
>
> I could not immediately wrap my head around why pool->p.napi can be
> accessed in page_pool_napi_local with no lock, but needs to be
> protected in the code in page_pool_user.c. It seems
> READ_ONCE/WRITE_ONCE protection is good enough to make sure
> page_pool_napi_local doesn't race with
> page_pool_disable_direct_recycling in a way that can crash (the
> reading code either sees a valid pointer or NULL). Why is that not
> good enough to also synchronize the accesses between
> page_pool_disable_direct_recycling and page_pool_nl_fill? I.e., drop
> the locking?

It actually seems that this is *not* currently the case. See the
discussion here:

https://lore.kernel.org/all/8734h8qgmz.fsf@toke.dk/

IMO (as indicated in the message linked above), we should require users
to destroy the page pool before freeing the NAPI memory, rather than add
additional synchronisation.
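
To make the ordering concrete, here is a minimal sketch of the teardown
path I have in mind, for a made-up driver (the foo_* names are
hypothetical; the page_pool/NAPI calls are the existing ones):

#include <linux/netdevice.h>
#include <linux/slab.h>
#include <net/page_pool/helpers.h>

/* Hypothetical per-queue state, just for illustration. */
struct foo_queue {
	struct napi_struct napi;
	struct page_pool *pool;
};

static void foo_queue_teardown(struct foo_queue *q)
{
	napi_disable(&q->napi);

	/* Destroying the pool disables direct recycling, so nothing can
	 * dereference the NAPI instance through pool->p.napi after this
	 * returns (the pool itself may linger until in-flight pages come
	 * back, but it no longer points at the NAPI struct).
	 */
	page_pool_destroy(q->pool);

	/* Only now is it safe to unhook and free the NAPI memory. */
	netif_napi_del(&q->napi);
	kfree(q);
}

I.e. the driver-side contract would simply be "page_pool_destroy()
strictly before the NAPI instance goes away", with no extra locking in
the core.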

-Toke


Thread overview: 8+ messages
2025-01-23 23:16 [PATCH net] net: page_pool: don't try to stash the napi id Jakub Kicinski
2025-01-24 21:00 ` Mina Almasry
2025-01-24 22:18   ` Toke Høiland-Jørgensen [this message]
2025-01-24 23:49     ` Mina Almasry
2025-01-27 13:31       ` Toke Høiland-Jørgensen
2025-01-27 19:37         ` Jakub Kicinski
2025-01-27 19:41           ` Mina Almasry
2025-01-25  1:31     ` Jakub Kicinski
