From: Martin KaFai Lau <martin.lau@linux.dev>
To: Alexei Starovoitov <alexei.starovoitov@gmail.com>,
Amery Hung <ameryhung@gmail.com>
Cc: bpf <bpf@vger.kernel.org>,
Network Development <netdev@vger.kernel.org>,
Andrii Nakryiko <andrii@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Kumar Kartikeya Dwivedi <memxor@gmail.com>,
Martin KaFai Lau <martin.lau@kernel.org>,
KP Singh <kpsingh@kernel.org>,
Yonghong Song <yonghong.song@linux.dev>,
Song Liu <song@kernel.org>, Hao Luo <haoluo@google.com>,
Kernel Team <kernel-team@meta.com>
Subject: Re: [RFC PATCH bpf-next v2 06/12] bpf: Change local_storage->lock and b->lock to rqspinlock
Date: Mon, 6 Oct 2025 13:19:43 -0700 [thread overview]
Message-ID: <d88addc3-bd10-4923-9ded-10fd999f39e7@linux.dev> (raw)
In-Reply-To: <CAADnVQ+=F5SkoRA4LU+JE+u87TLFp-mTS4bv+u9MUST2+CX8AA@mail.gmail.com>
On 10/6/25 10:58 AM, Alexei Starovoitov wrote:
> On Fri, Oct 3, 2025 at 3:03 PM Amery Hung <ameryhung@gmail.com> wrote:
>>
>> On Thu, Oct 2, 2025 at 4:37 PM Alexei Starovoitov
>> <alexei.starovoitov@gmail.com> wrote:
>>>
>>> On Thu, Oct 2, 2025 at 3:54 PM Amery Hung <ameryhung@gmail.com> wrote:
>>>>
>>>> bpf_selem_free_list(&old_selem_free_list, false);
>>>> if (alloc_selem) {
>>>> mem_uncharge(smap, owner, smap->elem_size);
>>>> @@ -791,7 +812,7 @@ void bpf_local_storage_destroy(struct bpf_local_storage *local_storage)
>>>> * when unlinking elem from the local_storage->list and
>>>> * the map's bucket->list.
>>>> */
>>>> - raw_spin_lock_irqsave(&local_storage->lock, flags);
>>>> + while (raw_res_spin_lock_irqsave(&local_storage->lock, flags));
>>>
>>> This pattern and other while(foo) doesn't make sense to me.
>>> res_spin_lock will fail only on deadlock or timeout.
>>> We should not spin, since retry will likely produce the same
>>> result. So the above pattern just enters into infinite spin.
>>
>> I only spin in destroy() and map_free(), which cannot deadlock with
>> itself or each other. However, IIUC, a head waiter that detects
>> deadlock will cause other queued waiters to also return -DEADLOCK. I
>> think they should be able to make progress with a retry.
>
> If it's in map_free() path then why are we taking the lock at all?
> There are supposed to be no active users of it.
There are no users from the syscall side or from bpf progs.
There are still kernel users, bpf_cgrp_storage_free() and bpf_sk_storage_free(),
that can race with map_free().
> If there are users and we actually need that lock then the deadlock
> is possible and retrying will deadlock the same way.
> I feel AI explained it better:
> "
> raw_res_spin_lock_irqsave() can return -ETIMEDOUT (after 250ms) or
> -EDEADLK. Both are non-zero, so the while() loop continues. The commit
> message says "it cannot deadlock with itself or
> bpf_local_storage_map_free", but:
>
> 1. If -ETIMEDOUT is returned because the lock holder is taking too long,
> retrying immediately won't help. The timeout means progress isn't
> being made, and spinning in a retry loop without any backoff or
> limit will prevent other work from proceeding.
>
> 2. If -EDEADLK is returned, it means the deadlock detector found a
> cycle. Retrying immediately without any state change won't break the
> deadlock cycle.
> "
>
>> Or better if
>> rqspinlock does not force queued waiters to exit the queue if it is
>> deadlock not timeout.
>
> If a deadlock is detected, it's the same issue for all waiters.
> I don't see the difference between timeout and deadlock.
> Both are in the "do-not-retry" category.
> Both mean that there is a bug somewhere.
bpf_cgrp_storage_free() and map_free() are the only remaining kernel users
of the locks, so no deadlock is expected unless there is a bug. Neither of them
currently uses the busy percpu counter either. In theory, res_spin_lock (like the
current regular spin_lock) should never fail in bpf_cgrp_storage_free() and
map_free(); if res_spin_lock returns an error, there is a bug somewhere.
>
>>>
>>> If it should never fail in practice then pr_warn_once and goto out
>>> leaking memory. Better yet defer to irq_work and cleanup there.
I am not sure how to handle the bug. Yeah, maybe just leak it and then splat.
I think deferring it would still need to take the lock.
>> Hmm, both functions are already called in some deferred callbacks.
>> Even if we defer the cleanup again, they still need to grab locks and
>> still might fail, no?
>
> If it's a map destroy path and we waited for RCU GP, there shouldn't be
> a need to take a lock.
> The css_free_rwork_fn() -> bpf_cgrp_storage_free() path
> is currently implemented as if it were similar to
> bpf_cgrp_storage_delete(), which needs a lock.
> But bpf_cgrp_storage_free() doesn't have to.
> In css_free_rwork_fn() no prog or user space
> should see 'cgrp' pointer, since we're about to kfree(cgrp); it.
> I could be certainly missing something.
2025-10-02 22:53 [RFC PATCH bpf-next v2 00/12] Remove task and cgroup local storage percpu counters Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 01/12] bpf: Select bpf_local_storage_map_bucket based on bpf_local_storage Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 02/12] bpf: Convert bpf_selem_unlink_map to failable Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 03/12] bpf: Convert bpf_selem_link_map " Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 04/12] bpf: Open code bpf_selem_unlink_storage in bpf_selem_unlink Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 05/12] bpf: Convert bpf_selem_unlink to failable Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 06/12] bpf: Change local_storage->lock and b->lock to rqspinlock Amery Hung
2025-10-02 23:37 ` Alexei Starovoitov
2025-10-03 22:03 ` Amery Hung
2025-10-06 17:58 ` Alexei Starovoitov
2025-10-06 20:19 ` Martin KaFai Lau [this message]
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 07/12] bpf: Remove task local storage percpu counter Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 08/12] bpf: Remove cgroup " Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 09/12] bpf: Remove unused percpu counter from bpf_local_storage_map_free Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 10/12] selftests/bpf: Update task_local_storage/recursion test Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 11/12] selftests/bpf: Remove test_task_storage_map_stress_lookup Amery Hung
2025-10-02 22:53 ` [RFC PATCH bpf-next v2 12/12] selftests/bpf: Choose another percpu variable in bpf for btf_dump test Amery Hung