From: Puranjay Mohan <puranjay@kernel.org>
To: Aaron Esau <aaron1esau@gmail.com>, bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
	Kumar Kartikeya Dwivedi <memxor@gmail.com>
Subject: Re: [BUG] bpf: use-after-free in hashtab BPF_F_LOCK in-place update path
Date: Thu, 26 Mar 2026 15:02:06 +0000	[thread overview]
Message-ID: <m2o6ka7hxt.fsf@kernel.org> (raw)
In-Reply-To: <m2se9mg16x.fsf@kernel.org>

Puranjay Mohan <puranjay@kernel.org> writes:

> Aaron Esau <aaron1esau@gmail.com> writes:
>
>> Reported-by: Aaron Esau <aaron1esau@gmail.com>
>>
>> htab_map_update_elem() has a use-after-free when BPF_F_LOCK is used
>> for in-place updates.
>>
>> The BPF_F_LOCK path calls lookup_nulls_elem_raw() without holding the
>> bucket lock, then dereferences the element via copy_map_value_locked().
>> A concurrent htab_map_delete_elem() can delete and free the element
>> between these steps.
>>
>> free_htab_elem() uses bpf_mem_cache_free(), which immediately returns
>> the object to the per-CPU free list (not RCU-deferred). The memory may
>> be reallocated before copy_map_value_locked() executes, leading to
>> writes into a different element.
>>
>> When lookup succeeds (l_old != NULL), the in-place update path returns
>> early, so the “full lookup under lock” path is not taken.
>>
>> Race:
>>
>>   CPU 0: htab_map_update_elem (BPF_F_LOCK)
>>          lookup_nulls_elem_raw() → E (no bucket lock)
>>          ...
>>   CPU 1: htab_map_delete_elem()
>>          htab_lock_bucket → hlist_nulls_del_rcu → htab_unlock_bucket
>>          free_htab_elem → bpf_mem_cache_free (immediate free)
>>   CPU 1: htab_map_update_elem (new key)
>>          alloc_htab_elem → reuses E
>>   CPU 0: copy_map_value_locked(E, ...) → writes into reused object
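The interleaving above can be modeled deterministically in user space. The sketch below is illustrative only: `struct elem`, `model_race()`, and the constants are stand-ins of ours, not kernel code, and a plain malloc/free pair models bpf_mem_cache_free()'s immediate reuse of the chunk.

```c
#include <stdlib.h>

/* Userspace model of the race above.  malloc/free stand in for
 * bpf_mem_cache_free()'s immediate (non-RCU-deferred) reuse. */
struct elem {
	unsigned long key;
	unsigned long value;
};

unsigned long model_race(void)
{
	/* CPU 0: lockless lookup returns a pointer to element E (key 1). */
	struct elem *e = malloc(sizeof(*e));
	e->key = 1;
	e->value = 0xAAAAUL;
	struct elem *stale = e;

	/* CPU 1: delete; the chunk goes straight back to the allocator. */
	free(e);

	/* CPU 1: an update for a new key may get the same chunk back
	 * (typical for size-class allocators; not guaranteed by glibc). */
	struct elem *reused = malloc(sizeof(*reused));
	reused->key = 2;
	reused->value = 0xBBBBUL;

	unsigned long observed = reused->value;
	if (reused == stale) {
		/* CPU 0: models copy_map_value_locked() writing through a
		 * deliberately stale pointer; key 2's value is silently
		 * overwritten. */
		stale->value = 0xCCCCUL;
		observed = reused->value;
	}
	free(reused);
	return observed;
}
```

If the allocator does not hand back the same chunk, `model_race()` simply returns the untouched 0xBBBB pattern, so the model degrades gracefully instead of corrupting unrelated memory.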
>>
>> Reproduction:
>>
>>   1. Create BPF_MAP_TYPE_HASH with a value containing bpf_spin_lock
>>      (max_entries=64, 7 u64 fields + lock).
>>   2. Threads A: BPF_MAP_UPDATE_ELEM with BPF_F_LOCK (pattern 0xAAAA...)
>>   3. Threads B: DELETE + UPDATE (pattern 0xBBBB...) on same keys
>>   4. Threads C: same as A (pattern 0xCCCC...)
>>   5. Verifier threads: LOOKUP loop, detect mixed-pattern values
>>   6. Run 60s on >=4 CPUs
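The verifier threads' torn-value check (step 5) can be sketched as follows, assuming each writer stores the same 64-bit pattern into all seven fields; any mix of patterns within one value then indicates a non-atomic copy. `value_is_torn` and `NFIELDS` are our names, not part of the reproducer.

```c
#include <stdbool.h>
#include <stdint.h>

#define NFIELDS 7	/* seven u64 fields beside the bpf_spin_lock */

/* A value read back by a verifier thread is "torn" if its fields do
 * not all carry a single writer's pattern.  Assumes each writer fills
 * every field with the same pattern (0xAAAA..., 0xBBBB..., 0xCCCC...),
 * as in the reproduction steps above. */
bool value_is_torn(const uint64_t v[NFIELDS])
{
	for (int i = 1; i < NFIELDS; i++)
		if (v[i] != v[0])
			return true;
	return false;
}
```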
>>
>> A PoC is attached. On 6.19.9 (4 vCPU QEMU, CONFIG_PREEMPT=y),
>> I observed ~645 torn values in 2.5M checks (~0.026%).
>>
>> Fixes: 96049f3afd50 ("bpf: introduce BPF_F_LOCK flag")
>
> Although this is a real issue, your reproducer is not accurate: it will
> see torn writes even without the UAF issue, because the verifier thread

After reading the code and discussing with Kumar, I believe this is
expected behaviour. BPF_F_LOCK performs a lockless lookup and takes only
the element's embedded spin_lock for in-place value updates. It does not
synchronize against concurrent deletes; preallocated hash maps have had
the same semantics since BPF_F_LOCK was introduced. Programs that need
both parallel deletes and updates must handle that coordination
themselves.

Thanks,
Puranjay


Thread overview: 14+ messages
2026-03-26  8:49 [BUG] bpf: use-after-free in hashtab BPF_F_LOCK in-place update path Aaron Esau
2026-03-26 13:39 ` Puranjay Mohan
2026-03-26 14:58   ` Kumar Kartikeya Dwivedi
2026-03-26 15:02   ` Puranjay Mohan [this message]
2026-03-26 15:26   ` Mykyta Yatsenko
2026-03-26 15:33     ` Puranjay Mohan
2026-03-26 15:43       ` Mykyta Yatsenko
2026-03-26 15:47         ` Mykyta Yatsenko
2026-03-26 15:57           ` Puranjay Mohan
2026-03-27  2:44             ` Aaron Esau
2026-03-27  3:21               ` Kumar Kartikeya Dwivedi
2026-03-27 16:09                 ` Mykyta Yatsenko
2026-03-31  0:55                   ` Kumar Kartikeya Dwivedi
2026-03-31 11:39                     ` Mykyta Yatsenko
