From: Mykyta Yatsenko <mykyta.yatsenko5@gmail.com>
To: sashiko@lists.linux.dev
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH bpf-next v3 04/10] bpf: Implement batch ops and iterators for resizable hashtab
Date: Mon, 27 Apr 2026 14:36:59 +0100 [thread overview]
Message-ID: <f884c8f3-8516-4852-9460-57fbf806fc86@gmail.com> (raw)
In-Reply-To: <dd0057c1-e0b2-4559-afe9-2bf19760f68b@gmail.com>
On 4/25/26 10:24 PM, Mykyta Yatsenko wrote:
>
>
> On 4/24/26 9:28 PM, sashiko-bot@kernel.org wrote:
>> Sashiko AI review found 4 potential issue(s):
>> - [High] Cursor invalidation in batch and seq_file iterators causes O(N^2) complexity and infinite loops.
>> - [High] Unbounded RCU read lock duration in batch operations allows user-triggered RCU stalls.
>> - [High] Map iterators and seq_file dumps spuriously abort during normal background resizes.
>> - [High] Deadlock risk in `bpf_each_rhash_elem` due to acquiring `spin_lock` from atomic contexts.
>> --
>>
>> commit 9f7ce51619eb27e8a4a65df12f862e12e34bc285
>> Author: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> bpf: Implement batch ops and iterators for resizable hashtab
>>
>> This commit adds batch operations for resizable hashtab maps, using
>> rhashtable_walk_enter_from() to resume iteration. It also wires up
>> seq_file BPF iterators and implements rhtab_map_mem_usage().
>>
>>> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
>>> index d37f3d548d3658..000caa2c7f4cd9 100644
>>> --- a/kernel/bpf/hashtab.c
>>> +++ b/kernel/bpf/hashtab.c
>>
>> [ ... ]
>>
>>> @@ -3066,64 +3066,330 @@ static int rhtab_map_get_next_key(struct bpf_map *map, void *key, void *next_key
>>> +static long bpf_each_rhash_elem(struct bpf_map *map, bpf_callback_t callback_fn,
>>> + void *callback_ctx, u64 flags)
>>> +{
>>
>> [ ... ]
>>
>>> + /*
>>> + * The rhashtable walk API uses spin_lock in rhashtable_walk_start/stop,
>>> + * which is not safe in NMI or soft/hard IRQ context.
>>> + */
>>> + if (in_nmi() || in_hardirq() || in_softirq())
>>> + return -EOPNOTSUPP;
>>> +
>>> + rhashtable_walk_enter(&rhtab->ht, &iter);
>>
>> Does this adequately protect against deadlocks when called from kprobes or
>> tracepoints in process context with preemption disabled?
>>
>> If a thread currently holding ht->lock (e.g., during map insertion) is
>> interrupted by a kprobe that runs this BPF program, attempting to acquire
>> the same spin_lock inside rhashtable_walk_enter() might cause a deadlock.
>>
>>> + rhashtable_walk_start(&iter);
>>> +
>>> + while ((elem = rhashtable_walk_next(&iter))) {
>>> + /* rhashtable_walk_next returns -EAGAIN on resize, abort */
>>> + if (IS_ERR(elem)) {
>>> + num_elems = -EBUSY;
>>> + break;
>>> + }
>>
>> Is it expected to abort the entire iteration if a background map resize
>> occurs?
>>
>> rhashtable_walk_next() returns -EAGAIN to indicate the table was resized and
>> the iterator was rewound. Since resizable hashtabs automatically resize
>> during normal workloads, returning -EBUSY here could cause
>> bpf_for_each_map_elem() and bpftool map dumps to randomly fail.
>>
>> Should this handle -EAGAIN by retrying or continuing the walk instead?
>
> No, resize is not handled automatically.
>
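If we ever did want iteration to survive resizes, the loop could treat -EAGAIN as "iterator rewound, keep going" rather than a hard failure. A rough userspace model of that shape (all names below are mock stand-ins, not the kernel API; a rewind can revisit elements, so duplicates are possible):

```c
#include <assert.h>
#include <stddef.h>

/* Mock of rhashtable_walk_next(): returns an element, NULL at the
 * end, or an -EAGAIN error pointer when the table was resized and
 * the iterator was rewound to bucket 0.
 */
#define EAGAIN_PTR ((void *)-11L)

static int table[] = { 1, 2, 3 };
static int pos;
static int injected_eagain;

static void *mock_walk_next(void)
{
	if (!injected_eagain && pos == 1) {
		injected_eagain = 1;
		pos = 0;            /* iterator rewound to bucket 0 */
		return EAGAIN_PTR;  /* resize happened mid-walk */
	}
	if (pos >= 3)
		return NULL;
	return &table[pos++];
}

/* Walk that continues past -EAGAIN instead of aborting with -EBUSY.
 * After a rewind, already-seen elements may be returned again.
 */
static int walk_all(int *visited)
{
	void *elem;
	int n = 0;

	while ((elem = mock_walk_next())) {
		if (elem == EAGAIN_PTR)
			continue;   /* tolerate resize; may see duplicates */
		visited[n++] = *(int *)elem;
	}
	return n;
}
```

The trade-off is visible in the model: element 1 is visited twice after the rewind, which is why aborting with -EBUSY was chosen as the by-design behavior here.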
>>
>> [ ... ]
>>
>>> +static int __rhtab_map_lookup_and_delete_batch(struct bpf_map *map,
>>> + const union bpf_attr *attr,
>>> + union bpf_attr __user *uattr,
>>> + bool do_delete)
>>> +{
>>
>> [ ... ]
>>
>>> + /*
>>> + * Use the last key from the previous batch as cursor.
>>> + * enter_from positions at that key's bucket, walk_next
>>> + * returns the successor in O(1).
>>> + * First call (ubatch == NULL): starts from bucket 0.
>>> + */
>>> + if (ubatch) {
>>> + buf = kmalloc(key_size, GFP_USER | __GFP_NOWARN);
>>> + if (!buf) {
>>> + ret = -ENOMEM;
>>> + goto free;
>>> + }
>>> + if (copy_from_user(buf, ubatch, key_size)) {
>>> + ret = -EFAULT;
>>> + goto free;
>>> + }
>>> + }
>>> +
>>> + scoped_guard(rcu) {
>>> + rhashtable_walk_enter_from(&rhtab->ht, &iter, buf, rhtab->params);
>>> + rhashtable_walk_start(&iter);
>>> + }
>>
>> If the key stored in buf was already deleted, will this iteration restart
>> from the beginning of the map?
>>
>
>
> Yes, if the cursor is deleted, iteration restarts from the beginning.
>
>> Looking at rhashtable_walk_enter_from(), if it fails to find the key, it
>> leaves iter->slot at 0. Then rhashtable_walk_start() and walk_next() will
>> start from bucket 0 instead of where the previous batch left off.
>>
>> Since lookup_and_delete_batch deletes the elements returned in the previous
>> batch, the cursor key will always be deleted. Could this cause the iterator
>> to repeatedly scan over all emptied buckets, resulting in an expensive
>> traversal?
>>
>
> If too many elements are deleted, the table will be resized, so the
> number of empty buckets traversed should not be very big. But this is a
> good point, we may not need to store the cursor at all.
Probably a safer fix would be to just use the next (not-yet-returned) element as the cursor.
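Roughly this shape, as a userspace sketch (a sorted array stands in for bucket order and a flag array for deletions; all names are illustrative, not the kernel API): after filling a batch, advance to the successor and hand *that* key back as the cursor, so deleting the returned batch cannot invalidate it.

```c
#include <assert.h>

/* Toy model: keys[] stands in for the table's bucket order;
 * deleted[i] marks keys removed by lookup_and_delete_batch.
 */
static int keys[] = { 10, 20, 30, 40, 50 };
static int deleted[5];

static int next_live(int from_idx)
{
	for (int i = from_idx; i < 5; i++)
		if (!deleted[i])
			return i;
	return -1;
}

/* Return up to max keys starting at *cursor; set *cursor to the
 * position of the NEXT element (not the last one returned), so the
 * resume point stays valid even after the whole batch is deleted.
 */
static int batch(int *out, int max, int *cursor)
{
	int n = 0, i = *cursor;

	while (n < max && (i = next_live(i)) >= 0) {
		out[n++] = keys[i];
		i++;
	}
	*cursor = i < 0 ? 5 : i;
	return n;
}
```

With a last-returned-key cursor, deleting each batch forces the next call to rescan from bucket 0; with the successor cursor above, each call resumes exactly where the previous one stopped.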
>
>> Similarly, for lookup_batch and seq_file dumps, if the cursor is
>> concurrently deleted, could this trap userspace in an infinite loop of
>> duplicate elements?
>>
>
> This should not cause an infinite loop: if the cursor is deleted every
> time, we'll eventually run out of elements. A table resize exposes any
> iterator to the risk of duplicate elements; it's up to the user to make
> sure the table is in a steady state when iterating.
>
>>> +
>>> + dst_key = keys;
>>> + dst_val = values;
>>> + total = 0;
>>> +
>>> + while (total < max_count) {
>>> + elem = rhtab_iter_next(&iter);
>>> + if (!elem)
>>> + break;
>>
>> Can this loop cause RCU stalls or soft lockups if max_count is very large?
>>
>> The max_count parameter is controlled by userspace. Since
>> rhashtable_walk_start() acquires the RCU read-side critical section, this
>> loop might iterate, copy memory, and process millions of elements without
>> ever dropping the RCU lock or calling cond_resched().
>>
>
> The existing htab does handle this situation; maybe splitting a big
> batch into smaller chunks, with
> ```
> rcu_read_unlock();
> bpf_enable_instrumentation();
> ```
> in between, will help.
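The chunking would look roughly like this (userspace model; the lock-drop helper stands in for rhashtable_walk_stop()/rcu_read_unlock()/bpf_enable_instrumentation()/cond_resched() and re-entry, and the chunk size is arbitrary):

```c
#include <assert.h>

#define CHUNK 64

static int resched_points;  /* counts lock-drop windows, for the model */

/* Stand-in for dropping the RCU read lock, re-enabling
 * instrumentation, rescheduling if needed, and re-entering the walk.
 */
static void drop_locks_and_breathe(void)
{
	resched_points++;
}

/* Process up to max_count elements, but never hold the (modeled) RCU
 * read-side critical section across more than CHUNK of them, so a
 * huge userspace-supplied max_count cannot stall RCU.
 */
static int process_batch(int max_count)
{
	int done = 0;

	while (done < max_count) {
		int this_chunk = max_count - done;

		if (this_chunk > CHUNK)
			this_chunk = CHUNK;
		/* ...copy this_chunk elements under the lock... */
		done += this_chunk;
		if (done < max_count)
			drop_locks_and_breathe();
	}
	return done;
}
```

Since the walker must be stopped and restarted around each window, each chunk boundary is also a point where a concurrent resize can rewind the cursor, so this interacts with the cursor discussion above.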
>
>> [ ... ]
>>
>>> +static void *bpf_rhash_map_seq_start(struct seq_file *seq, loff_t *pos)
>>> + __acquires(RCU)
>>> +{
>>
>> [ ... ]
>>
>>> + elem = rhashtable_walk_next(&info->iter);
>>> + /* rhashtable_walk_next returns -EAGAIN on resize, abort */
>>> + if (IS_ERR(elem))
>>> + return ERR_PTR(-EBUSY);
>>
>> Similar to bpf_each_rhash_elem, will returning -EBUSY here on normal hash
>> table resizes cause userspace map dumps to fail spuriously?
>>
>
> This is by design.
>
>> [ ... ]
>>
>>> +static void *bpf_rhash_map_seq_next(struct seq_file *seq, void *v, loff_t *pos)
>>> +{
>>
>> [ ... ]
>>
>>> + elem = rhashtable_walk_next(&info->iter);
>>> + /* rhashtable_walk_next returns -EAGAIN on resize, abort */
>>> + if (IS_ERR(elem))
>>> + return ERR_PTR(-EBUSY);
>>> +
>>> + return elem;
>>> +}
>>
>> Again, aborting on -EAGAIN here might interrupt the map dump when a resize
>> happens in the background. Is there a way to handle -EAGAIN without failing
>> the entire sequence?
>>
>
2026-04-24 19:50 [PATCH bpf-next v3 00/10] bpf: Introduce resizable hash map Mykyta Yatsenko
2026-04-24 19:50 ` [PATCH bpf-next v3 01/10] bpf: Implement resizable hashmap basic functions Mykyta Yatsenko
2026-04-24 20:40 ` sashiko-bot
2026-04-25 20:41 ` Mykyta Yatsenko
2026-04-24 20:45 ` bot+bpf-ci
2026-04-25 20:50 ` Mykyta Yatsenko
2026-04-24 19:50 ` [PATCH bpf-next v3 02/10] rhashtable: Add rhashtable_walk_enter_from() Mykyta Yatsenko
2026-04-24 20:15 ` sashiko-bot
2026-04-24 20:45 ` bot+bpf-ci
2026-04-28 10:35 ` Herbert Xu
2026-04-24 19:50 ` [PATCH bpf-next v3 03/10] bpf: Implement get_next_key() resizable hashtab Mykyta Yatsenko
2026-04-28 10:33 ` Herbert Xu
2026-04-28 13:20 ` Mykyta Yatsenko
2026-04-24 19:50 ` [PATCH bpf-next v3 04/10] bpf: Implement batch ops and iterators for " Mykyta Yatsenko
2026-04-24 20:28 ` sashiko-bot
2026-04-25 21:24 ` Mykyta Yatsenko
2026-04-27 13:36 ` Mykyta Yatsenko [this message]
2026-04-24 19:50 ` [PATCH bpf-next v3 05/10] bpf: Allow timers, workqueues and task_work in " Mykyta Yatsenko
2026-04-24 21:05 ` sashiko-bot
2026-04-25 21:29 ` Mykyta Yatsenko
2026-04-24 19:50 ` [PATCH bpf-next v3 06/10] libbpf: Support resizable hashtable Mykyta Yatsenko
2026-04-24 19:50 ` [PATCH bpf-next v3 07/10] selftests/bpf: Add basic tests for resizable hash map Mykyta Yatsenko
2026-04-24 20:02 ` sashiko-bot
2026-04-24 20:32 ` bot+bpf-ci
2026-04-24 19:50 ` [PATCH bpf-next v3 08/10] selftests/bpf: Add BPF iterator " Mykyta Yatsenko
2026-04-24 19:50 ` [PATCH bpf-next v3 09/10] bpftool: Add rhash map documentation Mykyta Yatsenko
2026-04-24 19:50 ` [PATCH bpf-next v3 10/10] selftests/bpf: Add resizable hashmap to benchmarks Mykyta Yatsenko