From: "Toke Høiland-Jørgensen" <toke@kernel.org>
To: Alexander Lobakin <aleksander.lobakin@intel.com>,
Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Cc: netdev@vger.kernel.org, anthony.l.nguyen@intel.com,
intel-wired-lan@lists.osuosl.org, magnus.karlsson@intel.com,
fred@cloudflare.com
Subject: Re: [Intel-wired-lan] [PATCH iwl-next] ice: allow hot-swapping XDP programs
Date: Wed, 14 Jun 2023 15:47:02 +0200 [thread overview]
Message-ID: <87sfaujgvd.fsf@toke.dk> (raw)
In-Reply-To: <b3f96eb9-25c7-eeaf-3e0d-7c055939168b@intel.com>

Alexander Lobakin <aleksander.lobakin@intel.com> writes:

> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Date: Wed, 14 Jun 2023 14:50:28 +0200
>
>> On Wed, Jun 14, 2023 at 02:40:07PM +0200, Alexander Lobakin wrote:
>>> From: Toke Høiland-Jørgensen <toke@kernel.org>
>>> Date: Tue, 13 Jun 2023 19:59:37 +0200
>
> [...]
>
>>> What if a NAPI polling cycle is being run on one core while at the very
>>> same moment I'm replacing the XDP prog on another core? Not in terms of
>>> pointer tearing -- I see now that this is handled correctly -- but in
>>> terms of refcounts? Can't bpf_prog_put() free the program while the
>>> polling is still active?
>>
>> Hmm, you mean we should do bpf_prog_put() *after* we update bpf_prog on
>> ice_rx_ring? I think this is a fair point, as we don't bump the refcount
>> for each Rx ring that holds a pointer to the bpf_prog; we just rely on
>> the main reference held by the VSI.
>
> Not even after we update it there. I believe we should synchronize NAPI
> cycles with the BPF prog update (e.g. have synchronize_rcu() before the
> put, so that the config path waits until there is no polling and no
> on-stack pointers left -- would that be enough?).
>
> NAPI polling starts
> |<--- XDP prog pointer is placed on the stack and used from there
> |
> | <--- here you do xchg() and bpf_prog_put()
> | <--- here you update XDP progs on the rings
> |
> |<--- polling loop is still using the [now invalid] onstack pointer
> |
> NAPI polling ends
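
To make the scenario above concrete, here is a minimal sketch of the update
ordering being discussed; the struct and function names are illustrative
placeholders, not the actual ice code:

/*
 * Illustrative only: simplified stand-ins for the VSI/ring structures,
 * not the actual ice definitions.
 */
#include <linux/atomic.h>
#include <linux/bpf.h>

struct demo_rx_ring {
	struct bpf_prog *xdp_prog;
};

struct demo_vsi {
	struct bpf_prog *xdp_prog;
	struct demo_rx_ring **rx_rings;
	int num_rxq;
};

static void demo_xdp_swap(struct demo_vsi *vsi, struct bpf_prog *prog)
{
	struct bpf_prog *old_prog;
	int i;

	/* Publish the new program on the VSI first... */
	old_prog = xchg(&vsi->xdp_prog, prog);

	/* ...then on the per-ring pointers the NAPI hot path reads. */
	for (i = 0; i < vsi->num_rxq; i++)
		WRITE_ONCE(vsi->rx_rings[i]->xdp_prog, prog);

	/*
	 * The worry in the diagram above: a poll loop on another CPU might
	 * still be using the old program through an on-stack pointer when
	 * the last reference is dropped here.
	 */
	if (old_prog)
		bpf_prog_put(old_prog);
}
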
No, this is fine; bpf_prog_put() uses call_rcu() to actually free the
program, which guarantees that any ongoing RCU read-side critical sections
have ended before the program is freed. And as explained in that other
series of mine, this includes any ongoing NAPI poll cycles.
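
To spell out why, a rough sketch of the reader side, reusing the placeholder
names from the sketch above (again, not the actual driver code; the
rcu_read_lock()/unlock() pair is written out for clarity, even though a NAPI
poll already runs inside an equivalent read-side critical section):

#include <linux/rcupdate.h>

static int demo_napi_poll(struct demo_rx_ring *ring, int budget)
{
	struct bpf_prog *prog;
	int done = 0;

	rcu_read_lock();

	/* On-stack copy used for the whole poll cycle. */
	prog = READ_ONCE(ring->xdp_prog);

	/* ... run XDP on up to 'budget' frames using 'prog' ... */

	rcu_read_unlock();

	/*
	 * bpf_prog_put() on the config path frees the old program via
	 * call_rcu(), i.e. only after read-side sections like this one have
	 * completed, so the on-stack pointer stays valid for the whole poll
	 * even if the ring pointers were swapped concurrently.
	 */
	return done;
}
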
-Toke