* [PATCH bpf-next v2 0/2] bpf: Migrate bpf_task_work and file dynptr to kmalloc_nolock
@ 2026-03-30 22:27 Mykyta Yatsenko
From: Mykyta Yatsenko @ 2026-03-30 22:27 UTC
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
Now that kmalloc can be used from NMI context via kmalloc_nolock(),
migrate BPF internal allocations away from bpf_mem_alloc to use the
standard slab allocator.
Use kfree_rcu() for deferred freeing, which waits for a regular RCU
grace period before the memory is reclaimed. Sleepable BPF programs
hold rcu_read_lock_trace but not regular rcu_read_lock, so patch 1
adds explicit rcu_read_lock/unlock around the pointer-to-refcount
window to prevent kfree_rcu from freeing memory while a sleepable
program is still between reading the pointer and acquiring a
reference.
Patch 1 migrates bpf_task_work_ctx from bpf_mem_alloc/bpf_mem_free to
kmalloc_nolock/kfree_rcu.
Patch 2 migrates bpf_dynptr_file_impl from bpf_mem_alloc/bpf_mem_free
to kmalloc_nolock/kfree.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
Changes in v2:
- Switch to scoped_guard in patch 1 (Kumar)
- Remove rcu gp wait in patch 2 (Kumar)
- Defer to irq_work when irqs disabled in patch 1
- Use bpf_map_kmalloc_nolock() for bpf_task_work
- Use kmalloc_nolock() for file dynptr
- Link to v1: https://lore.kernel.org/all/20260325-kmalloc_special-v1-0-269666afb1ea@meta.com/
---
Mykyta Yatsenko (2):
bpf: Migrate bpf_task_work to kmalloc_nolock
bpf: Migrate dynptr file to kmalloc_nolock
kernel/bpf/helpers.c | 60 ++++++++++++++++++++++++++++++++++------------------
1 file changed, 39 insertions(+), 21 deletions(-)
---
base-commit: 9f7d8fa6817e2709846fc7f5c9f60254e536d138
change-id: 20260223-kmalloc_special-933ec4c543d7
Best regards,
--
Mykyta Yatsenko <yatsenko@meta.com>
* [PATCH bpf-next v2 1/2] bpf: Migrate bpf_task_work to kmalloc_nolock
From: Mykyta Yatsenko @ 2026-03-30 22:27 UTC
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko

From: Mykyta Yatsenko <yatsenko@meta.com>

Replace bpf_mem_alloc/bpf_mem_free with kmalloc_nolock/kfree_rcu for
bpf_task_work_ctx.

Replace guard(rcu_tasks_trace)() with guard(rcu)() in
bpf_task_work_irq(). The function only accesses ctx struct members
(not map values), so tasks trace protection is not needed - regular
RCU is sufficient since ctx is freed via kfree_rcu. The guard in
bpf_task_work_callback() remains as tasks trace since it accesses map
values from process context.

Sleepable BPF programs hold rcu_read_lock_trace but not regular
rcu_read_lock. Since kfree_rcu waits for a regular RCU grace period,
the ctx memory can be freed while a sleepable program is still
running. Add scoped_guard(rcu) around the pointer read and refcount
tryget in bpf_task_work_acquire_ctx to close this race window.

Since kfree_rcu uses call_rcu internally which is not safe from NMI
context, defer destruction via irq_work when irqs are disabled.

For the lost-cmpxchg path the ctx was never published, so
kfree_nolock is safe.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
 kernel/bpf/helpers.c | 56 ++++++++++++++++++++++++++++++++++------------------
 1 file changed, 37 insertions(+), 19 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index cb6d242bd093..4c3011ef631f 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -4165,17 +4165,25 @@ static bool bpf_task_work_ctx_tryget(struct bpf_task_work_ctx *ctx)
         return refcount_inc_not_zero(&ctx->refcnt);
 }
 
+static void bpf_task_work_destroy(struct irq_work *irq_work)
+{
+        struct bpf_task_work_ctx *ctx = container_of(irq_work, struct bpf_task_work_ctx, irq_work);
+
+        bpf_task_work_ctx_reset(ctx);
+        kfree_rcu(ctx, rcu);
+}
+
 static void bpf_task_work_ctx_put(struct bpf_task_work_ctx *ctx)
 {
         if (!refcount_dec_and_test(&ctx->refcnt))
                 return;
 
-        bpf_task_work_ctx_reset(ctx);
-
-        /* bpf_mem_free expects migration to be disabled */
-        migrate_disable();
-        bpf_mem_free(&bpf_global_ma, ctx);
-        migrate_enable();
+        if (irqs_disabled()) {
+                ctx->irq_work = IRQ_WORK_INIT(bpf_task_work_destroy);
+                irq_work_queue(&ctx->irq_work);
+        } else {
+                bpf_task_work_destroy(&ctx->irq_work);
+        }
 }
 
 static void bpf_task_work_cancel(struct bpf_task_work_ctx *ctx)
@@ -4229,7 +4237,7 @@ static void bpf_task_work_irq(struct irq_work *irq_work)
         enum bpf_task_work_state state;
         int err;
 
-        guard(rcu_tasks_trace)();
+        guard(rcu)();
 
         if (cmpxchg(&ctx->state, BPF_TW_PENDING, BPF_TW_SCHEDULING) != BPF_TW_PENDING) {
                 bpf_task_work_ctx_put(ctx);
@@ -4251,9 +4259,9 @@ static void bpf_task_work_irq(struct irq_work *irq_work)
         /*
          * It's technically possible for just scheduled task_work callback to
          * complete running by now, going SCHEDULING -> RUNNING and then
-         * dropping its ctx refcount. Instead of capturing extra ref just to
-         * protected below ctx->state access, we rely on RCU protection to
-         * perform below SCHEDULING -> SCHEDULED attempt.
+         * dropping its ctx refcount. Instead of capturing an extra ref just
+         * to protect below ctx->state access, we rely on rcu_read_lock
+         * above to prevent kfree_rcu from freeing ctx before we return.
          */
         state = cmpxchg(&ctx->state, BPF_TW_SCHEDULING, BPF_TW_SCHEDULED);
         if (state == BPF_TW_FREED)
@@ -4270,7 +4278,7 @@ static struct bpf_task_work_ctx *bpf_task_work_fetch_ctx(struct bpf_task_work *t
         if (ctx)
                 return ctx;
 
-        ctx = bpf_mem_alloc(&bpf_global_ma, sizeof(struct bpf_task_work_ctx));
+        ctx = bpf_map_kmalloc_nolock(map, sizeof(*ctx), 0, NUMA_NO_NODE);
         if (!ctx)
                 return ERR_PTR(-ENOMEM);
 
@@ -4284,7 +4292,7 @@ static struct bpf_task_work_ctx *bpf_task_work_fetch_ctx(struct bpf_task_work *t
          * tw->ctx is set by concurrent BPF program, release allocated
          * memory and try to reuse already set context.
          */
-        bpf_mem_free(&bpf_global_ma, ctx);
+        kfree_nolock(ctx);
 
         return old_ctx;
 }
@@ -4296,13 +4304,23 @@
 {
         struct bpf_task_work_ctx *ctx;
 
-        ctx = bpf_task_work_fetch_ctx(tw, map);
-        if (IS_ERR(ctx))
-                return ctx;
-
-        /* try to get ref for task_work callback to hold */
-        if (!bpf_task_work_ctx_tryget(ctx))
-                return ERR_PTR(-EBUSY);
+        /*
+         * Sleepable BPF programs hold rcu_read_lock_trace but not
+         * regular rcu_read_lock. Since kfree_rcu waits for regular
+         * RCU GP, the ctx can be freed while we're between reading
+         * the pointer and incrementing the refcount. Take regular
+         * rcu_read_lock to prevent kfree_rcu from freeing the ctx
+         * before we can tryget it.
+         */
+        scoped_guard(rcu) {
+                ctx = bpf_task_work_fetch_ctx(tw, map);
+                if (IS_ERR(ctx))
+                        return ctx;
+
+                /* try to get ref for task_work callback to hold */
+                if (!bpf_task_work_ctx_tryget(ctx))
+                        return ERR_PTR(-EBUSY);
+        }
 
         if (cmpxchg(&ctx->state, BPF_TW_STANDBY, BPF_TW_PENDING) != BPF_TW_STANDBY) {
                 /* lost acquiring race or map_release_uref() stole it from us, put ref and bail */
-- 
2.52.0
* Re: [PATCH bpf-next v2 1/2] bpf: Migrate bpf_task_work to kmalloc_nolock
From: Andrii Nakryiko @ 2026-03-31 0:00 UTC
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor,
	Mykyta Yatsenko

On Mon, Mar 30, 2026 at 3:28 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Replace bpf_mem_alloc/bpf_mem_free with
> kmalloc_nolock/kfree_rcu for bpf_task_work_ctx.
>
> Replace guard(rcu_tasks_trace)() with guard(rcu)() in
> bpf_task_work_irq(). The function only accesses ctx struct members
> (not map values), so tasks trace protection is not needed - regular
> RCU is sufficient since ctx is freed via kfree_rcu. The guard in
> bpf_task_work_callback() remains as tasks trace since it accesses map
> values from process context.

I didn't quite get if this change was necessary for correctness or
it's just an optimization?

>
> Sleepable BPF programs hold rcu_read_lock_trace but not
> regular rcu_read_lock. Since kfree_rcu
> waits for a regular RCU grace period, the ctx memory can be freed
> while a sleepable program is still running. Add scoped_guard(rcu)
> around the pointer read and refcount tryget in
> bpf_task_work_acquire_ctx to close this race window.
>
> Since kfree_rcu uses call_rcu internally which is not safe from
> NMI context, defer destruction via irq_work when irqs are disabled.
>
> For the lost-cmpxchg path the ctx was never published, so
> kfree_nolock is safe.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> kernel/bpf/helpers.c | 56 ++++++++++++++++++++++++++++++++++------------------
> 1 file changed, 37 insertions(+), 19 deletions(-)
>

[...]
* Re: [PATCH bpf-next v2 1/2] bpf: Migrate bpf_task_work to kmalloc_nolock
From: Mykyta Yatsenko @ 2026-03-31 10:29 UTC
To: Andrii Nakryiko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor,
	Mykyta Yatsenko

On 3/31/26 1:00 AM, Andrii Nakryiko wrote:
> On Mon, Mar 30, 2026 at 3:28 PM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
>>
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Replace bpf_mem_alloc/bpf_mem_free with
>> kmalloc_nolock/kfree_rcu for bpf_task_work_ctx.
>>
>> Replace guard(rcu_tasks_trace)() with guard(rcu)() in
>> bpf_task_work_irq(). The function only accesses ctx struct members
>> (not map values), so tasks trace protection is not needed - regular
>> RCU is sufficient since ctx is freed via kfree_rcu. The guard in
>> bpf_task_work_callback() remains as tasks trace since it accesses map
>> values from process context.
>
> I didn't quite get if this change was necessary for correctness or
> it's just an optimization?
>
Correctness - ctx is freed via kfree_rcu(), so we need to hold rcu
read lock when we pass refcnt to the task_work_add() callback.
It worked before on tasks trace rcu because bpf_mem_alloc() used it
(with normal rcu chaining) before freeing the ctx.

>>
>> Sleepable BPF programs hold rcu_read_lock_trace but not
>> regular rcu_read_lock. Since kfree_rcu
>> waits for a regular RCU grace period, the ctx memory can be freed
>> while a sleepable program is still running. Add scoped_guard(rcu)
>> around the pointer read and refcount tryget in
>> bpf_task_work_acquire_ctx to close this race window.
>>
>> Since kfree_rcu uses call_rcu internally which is not safe from
>> NMI context, defer destruction via irq_work when irqs are disabled.
>>
>> For the lost-cmpxchg path the ctx was never published, so
>> kfree_nolock is safe.
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> kernel/bpf/helpers.c | 56 ++++++++++++++++++++++++++++++++++------------------
>> 1 file changed, 37 insertions(+), 19 deletions(-)
>>
>
> [...]
* Re: [PATCH bpf-next v2 1/2] bpf: Migrate bpf_task_work to kmalloc_nolock
From: Kumar Kartikeya Dwivedi @ 2026-03-31 0:58 UTC
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, Mykyta Yatsenko

On Tue, 31 Mar 2026 at 00:28, Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Replace bpf_mem_alloc/bpf_mem_free with
> kmalloc_nolock/kfree_rcu for bpf_task_work_ctx.
>
> Replace guard(rcu_tasks_trace)() with guard(rcu)() in
> bpf_task_work_irq(). The function only accesses ctx struct members
> (not map values), so tasks trace protection is not needed - regular
> RCU is sufficient since ctx is freed via kfree_rcu. The guard in
> bpf_task_work_callback() remains as tasks trace since it accesses map
> values from process context.
>
> Sleepable BPF programs hold rcu_read_lock_trace but not
> regular rcu_read_lock. Since kfree_rcu
> waits for a regular RCU grace period, the ctx memory can be freed
> while a sleepable program is still running. Add scoped_guard(rcu)
> around the pointer read and refcount tryget in
> bpf_task_work_acquire_ctx to close this race window.
>
> Since kfree_rcu uses call_rcu internally which is not safe from
> NMI context, defer destruction via irq_work when irqs are disabled.
>
> For the lost-cmpxchg path the ctx was never published, so
> kfree_nolock is safe.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---

Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>

> [...]
* [PATCH bpf-next v2 2/2] bpf: Migrate dynptr file to kmalloc_nolock
From: Mykyta Yatsenko @ 2026-03-30 22:27 UTC
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko

From: Mykyta Yatsenko <yatsenko@meta.com>

Replace bpf_mem_alloc/bpf_mem_free with kmalloc_nolock/kfree_nolock for
bpf_dynptr_file_impl, continuing the migration away from bpf_mem_alloc
now that kmalloc can be used from NMI context.

freader_cleanup() runs before kfree_nolock() while the dynptr still
holds exclusive access, so plain kfree_nolock() is safe — no concurrent
readers can access the object.

Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
 kernel/bpf/helpers.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 4c3011ef631f..7bb8b1339e2f 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -4435,7 +4435,7 @@ static int make_file_dynptr(struct file *file, u32 flags, bool may_sleep,
                 return -EINVAL;
         }
 
-        state = bpf_mem_alloc(&bpf_global_ma, sizeof(struct bpf_dynptr_file_impl));
+        state = kmalloc_nolock(sizeof(*state), 0, NUMA_NO_NODE);
         if (!state) {
                 bpf_dynptr_set_null(ptr);
                 return -ENOMEM;
@@ -4467,7 +4467,7 @@ __bpf_kfunc int bpf_dynptr_file_discard(struct bpf_dynptr *dynptr)
                 return 0;
 
         freader_cleanup(&df->freader);
-        bpf_mem_free(&bpf_global_ma, df);
+        kfree_nolock(df);
         bpf_dynptr_set_null(ptr);
         return 0;
 }
-- 
2.52.0
* Re: [PATCH bpf-next v2 2/2] bpf: Migrate dynptr file to kmalloc_nolock
From: Andrii Nakryiko @ 2026-03-31 0:01 UTC
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor,
	Mykyta Yatsenko

On Mon, Mar 30, 2026 at 3:28 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Replace bpf_mem_alloc/bpf_mem_free with kmalloc_nolock/kfree_nolock for
> bpf_dynptr_file_impl, continuing the migration away from bpf_mem_alloc
> now that kmalloc can be used from NMI context.
>
> freader_cleanup() runs before kfree_nolock() while the dynptr still
> holds exclusive access, so plain kfree_nolock() is safe — no concurrent
> readers can access the object.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> kernel/bpf/helpers.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>

LGTM

Acked-by: Andrii Nakryiko <andrii@kernel.org>

> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 4c3011ef631f..7bb8b1339e2f 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -4435,7 +4435,7 @@ static int make_file_dynptr(struct file *file, u32 flags, bool may_sleep,
>                 return -EINVAL;
>         }
>
> -        state = bpf_mem_alloc(&bpf_global_ma, sizeof(struct bpf_dynptr_file_impl));
> +        state = kmalloc_nolock(sizeof(*state), 0, NUMA_NO_NODE);
>         if (!state) {
>                 bpf_dynptr_set_null(ptr);
>                 return -ENOMEM;
> @@ -4467,7 +4467,7 @@ __bpf_kfunc int bpf_dynptr_file_discard(struct bpf_dynptr *dynptr)
>                 return 0;
>
>         freader_cleanup(&df->freader);
> -        bpf_mem_free(&bpf_global_ma, df);
> +        kfree_nolock(df);
>         bpf_dynptr_set_null(ptr);
>         return 0;
> }
>
> --
> 2.52.0
>
* Re: [PATCH bpf-next v2 2/2] bpf: Migrate dynptr file to kmalloc_nolock
From: Kumar Kartikeya Dwivedi @ 2026-03-31 0:58 UTC
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, Mykyta Yatsenko

On Tue, 31 Mar 2026 at 00:28, Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Replace bpf_mem_alloc/bpf_mem_free with kmalloc_nolock/kfree_nolock for
> bpf_dynptr_file_impl, continuing the migration away from bpf_mem_alloc
> now that kmalloc can be used from NMI context.
>
> freader_cleanup() runs before kfree_nolock() while the dynptr still
> holds exclusive access, so plain kfree_nolock() is safe — no concurrent
> readers can access the object.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---

Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>

> [...]
* Re: [PATCH bpf-next v2 0/2] bpf: Migrate bpf_task_work and file dynptr to kmalloc_nolock
From: patchwork-bot+netdevbpf @ 2026-04-02 16:40 UTC
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor, yatsenko

Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Mon, 30 Mar 2026 15:27:55 -0700 you wrote:
> Now that kmalloc can be used from NMI context via kmalloc_nolock(),
> migrate BPF internal allocations away from bpf_mem_alloc to use the
> standard slab allocator.
>
> Use kfree_rcu() for deferred freeing, which waits for a regular RCU
> grace period before the memory is reclaimed. Sleepable BPF programs
> hold rcu_read_lock_trace but not regular rcu_read_lock, so patch 1
> adds explicit rcu_read_lock/unlock around the pointer-to-refcount
> window to prevent kfree_rcu from freeing memory while a sleepable
> program is still between reading the pointer and acquiring a
> reference.
>
> [...]

Here is the summary with links:
  - [bpf-next,v2,1/2] bpf: Migrate bpf_task_work to kmalloc_nolock
    https://git.kernel.org/bpf/bpf-next/c/90f51ebff242
  - [bpf-next,v2,2/2] bpf: Migrate dynptr file to kmalloc_nolock
    https://git.kernel.org/bpf/bpf-next/c/cc878b414450

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html