BPF List
From: Song Liu <songliubraving@meta.com>
To: Song Liu <songliubraving@meta.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	Andrii Nakryiko <andrii.nakryiko@gmail.com>,
	Ilya Leoshkevich <iii@linux.ibm.com>, Song Liu <song@kernel.org>,
	bpf <bpf@vger.kernel.org>, Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andrii@kernel.org>,
	Martin KaFai Lau <martin.lau@kernel.org>,
	Kernel Team <kernel-team@meta.com>, Tejun Heo <tj@kernel.org>
Subject: Re: [PATCH v2 bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket
Date: Thu, 5 Oct 2023 01:50:45 +0000	[thread overview]
Message-ID: <A53BABCE-A22D-40B0-91BA-009B54AB8F09@fb.com> (raw)
In-Reply-To: <FCAD3D3A-B230-40D8-A422-DED507B95C89@fb.com>



> On Oct 4, 2023, at 5:11 PM, Song Liu <songliubraving@meta.com> wrote:
> 
> 
> 
>> On Oct 4, 2023, at 9:18 AM, Song Liu <songliubraving@meta.com> wrote:
>> 
>> 
>> 
>>> On Oct 3, 2023, at 8:33 PM, Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
>>> 
>>> On Tue, Oct 3, 2023 at 8:08 PM Andrii Nakryiko
>>> <andrii.nakryiko@gmail.com> wrote:
>>>> 
>>>> On Tue, Oct 3, 2023 at 5:45 PM Song Liu <song@kernel.org> wrote:
>>>>> 
>>>>> htab_lock_bucket uses the following logic to avoid recursion:
>>>>> 
>>>>> 1. preempt_disable();
>>>>> 2. check percpu counter htab->map_locked[hash] for recursion;
>>>>> 2.1. if map_locked[hash] is already taken, return -EBUSY;
>>>>> 3. raw_spin_lock_irqsave();
>>>>> 
>>>>> However, if an IRQ hits between 2 and 3, BPF programs attached to the IRQ
>>>>> logic will not be able to access the same hash bucket of the hashtab and will get -EBUSY.
>>>>> This -EBUSY is not really necessary. Fix it by disabling IRQ before
>>>>> checking map_locked:
>>>>> 
>>>>> 1. preempt_disable();
>>>>> 2. local_irq_save();
>>>>> 3. check percpu counter htab->map_locked[hash] for recursion;
>>>>> 3.1. if map_locked[hash] is already taken, return -EBUSY;
>>>>> 4. raw_spin_lock().
>>>>> 
>>>>> Similarly, use raw_spin_unlock() and local_irq_restore() in
>>>>> htab_unlock_bucket().
>>>>> 
>>>>> Suggested-by: Tejun Heo <tj@kernel.org>
>>>>> Signed-off-by: Song Liu <song@kernel.org>
>>>>> 
>>>>> ---
>>>>> Changes in v2:
>>>>> 1. Use raw_spin_unlock() and local_irq_restore() in htab_unlock_bucket().
>>>>> (Andrii)
>>>>> ---
>>>>> kernel/bpf/hashtab.c | 7 +++++--
>>>>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>>>> 
>>>> 
>>>> Now it's more symmetrical and seems correct to me, thanks!
>>>> 
>>>> Acked-by: Andrii Nakryiko <andrii@kernel.org>
>>>> 
>>>>> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
>>>>> index a8c7e1c5abfa..fd8d4b0addfc 100644
>>>>> --- a/kernel/bpf/hashtab.c
>>>>> +++ b/kernel/bpf/hashtab.c
>>>>> @@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
>>>>>      hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
>>>>> 
>>>>>      preempt_disable();
>>>>> +       local_irq_save(flags);
>>>>>      if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
>>>>>              __this_cpu_dec(*(htab->map_locked[hash]));
>>>>> +               local_irq_restore(flags);
>>>>>              preempt_enable();
>>>>>              return -EBUSY;
>>>>>      }
>>>>> 
>>>>> -       raw_spin_lock_irqsave(&b->raw_lock, flags);
>>>>> +       raw_spin_lock(&b->raw_lock);
>>> 
>>> Song,
>>> 
>>> take a look at s390 crash in BPF CI.
>>> I suspect this patch is causing it.
>> 
>> It indeed looks like it is triggered by this patch, but I haven't figured
>> out why it happens yet. v1 seems OK in the same tests.
> 
> I think I finally figured out this (supposedly simple) bug. If I got it
> right, we need:
> 
> diff --git c/kernel/bpf/hashtab.c w/kernel/bpf/hashtab.c
> index fd8d4b0addfc..1cfa2329a53a 100644
> --- c/kernel/bpf/hashtab.c
> +++ w/kernel/bpf/hashtab.c
> @@ -160,6 +160,7 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
>                __this_cpu_dec(*(htab->map_locked[hash]));
>                local_irq_restore(flags);
>                preempt_enable();
> +               *pflags = flags;
>                return -EBUSY;
>        }

No... I was totally wrong. This is not needed. 

Trying something different..

Thanks,
Song

Thread overview: 12+ messages
2023-10-04  0:43 [PATCH v2 bpf-next] bpf: Avoid unnecessary -EBUSY from htab_lock_bucket Song Liu
2023-10-04  3:08 ` Andrii Nakryiko
2023-10-04  3:33   ` Alexei Starovoitov
2023-10-04 16:18     ` Song Liu
2023-10-05  0:11       ` Song Liu
2023-10-05  1:50         ` Song Liu [this message]
2023-10-06  1:15           ` Song Liu
2023-10-09 14:58             ` Ilya Leoshkevich
2023-10-10 20:48             ` Ilya Leoshkevich
2023-10-10 22:13               ` Song Liu
2023-10-12  0:42                 ` Andrii Nakryiko
2023-10-12  6:01                   ` Song Liu
