* [PATCH 0/2] bpf: Migrate bpf_task_work and file dynptr to kmalloc_nolock
@ 2026-03-25 21:11 Mykyta Yatsenko
2026-03-25 21:11 ` [PATCH 1/2] bpf: Migrate bpf_task_work " Mykyta Yatsenko
2026-03-25 21:11 ` [PATCH 2/2] bpf: Migrate dynptr file " Mykyta Yatsenko
0 siblings, 2 replies; 7+ messages in thread
From: Mykyta Yatsenko @ 2026-03-25 21:11 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
Now that kmalloc can be used from NMI context via kmalloc_nolock(),
migrate BPF internal allocations away from bpf_mem_alloc to use the
standard slab allocator.
Use kfree_rcu() for deferred freeing, which waits for a regular RCU
grace period before the memory is reclaimed. Sleepable BPF programs
hold rcu_read_lock_trace but not regular rcu_read_lock, so patch 1 adds
an explicit rcu_read_lock/unlock section around the window between
reading the ctx pointer and acquiring a reference, preventing
kfree_rcu() from freeing the memory while a sleepable program is still
inside that window.
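For illustration, without the regular RCU read lock the interleaving
can look roughly like this (simplified sketch, not the exact code;
helper names as in patch 1, see bpf_task_work_acquire_ctx()):

    CPU0 (sleepable prog, tasks-trace RCU)    CPU1
    ctx = READ_ONCE(tw->ctx);  /* valid */
                                              kfree_rcu(ctx, rcu);
                                              /* regular RCU GP elapses,
                                               * ctx memory reclaimed */
    bpf_task_work_ctx_tryget(ctx); /* use-after-free */

With rcu_read_lock() taken before the pointer read and dropped after
the tryget, kfree_rcu() cannot reclaim ctx inside that window.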
Patch 1 migrates bpf_task_work_ctx from bpf_mem_alloc/bpf_mem_free to
kmalloc_nolock/kfree_rcu.
Patch 2 migrates bpf_dynptr_file_impl from bpf_mem_alloc/bpf_mem_free
to kmalloc_nolock/kfree_rcu.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
Mykyta Yatsenko (2):
bpf: Migrate bpf_task_work to kmalloc_nolock
bpf: Migrate dynptr file to kmalloc_nolock
kernel/bpf/helpers.c | 45 +++++++++++++++++++++++++++++----------------
1 file changed, 29 insertions(+), 16 deletions(-)
---
base-commit: 9f7d8fa6817e2709846fc7f5c9f60254e536d138
change-id: 20260223-kmalloc_special-933ec4c543d7
Best regards,
--
Mykyta Yatsenko <yatsenko@meta.com>
* [PATCH 1/2] bpf: Migrate bpf_task_work to kmalloc_nolock
2026-03-25 21:11 [PATCH 0/2] bpf: Migrate bpf_task_work and file dynptr to kmalloc_nolock Mykyta Yatsenko
@ 2026-03-25 21:11 ` Mykyta Yatsenko
2026-03-27 3:41 ` Kumar Kartikeya Dwivedi
2026-03-25 21:11 ` [PATCH 2/2] bpf: Migrate dynptr file " Mykyta Yatsenko
1 sibling, 1 reply; 7+ messages in thread
From: Mykyta Yatsenko @ 2026-03-25 21:11 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Replace bpf_mem_alloc/bpf_mem_free with
kmalloc_nolock/kfree_rcu for bpf_task_work_ctx.
Replace guard(rcu_tasks_trace)() with guard(rcu)() in
bpf_task_work_irq(). The function only accesses ctx struct members
(not map values), so tasks trace protection is not needed - regular
RCU is sufficient since ctx is freed via kfree_rcu. The guard in
bpf_task_work_callback() remains as tasks trace since it accesses map
values from process context.
Sleepable BPF programs (e.g. BPF_PROG_TYPE_SYSCALL) hold
rcu_read_lock_trace but not regular rcu_read_lock. Since kfree_rcu
waits for a regular RCU grace period, the ctx memory can be freed
while a sleepable program is still running. Add explicit
rcu_read_lock/unlock around the pointer read and refcount tryget in
bpf_task_work_acquire_ctx to close this race window.
For the lost-cmpxchg path the ctx was never published, so plain kfree
is safe.
---
kernel/bpf/helpers.c | 36 +++++++++++++++++++++++-------------
1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index cb6d242bd093..b197b6978f1a 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -4171,11 +4171,7 @@ static void bpf_task_work_ctx_put(struct bpf_task_work_ctx *ctx)
return;
bpf_task_work_ctx_reset(ctx);
-
- /* bpf_mem_free expects migration to be disabled */
- migrate_disable();
- bpf_mem_free(&bpf_global_ma, ctx);
- migrate_enable();
+ kfree_rcu(ctx, rcu);
}
static void bpf_task_work_cancel(struct bpf_task_work_ctx *ctx)
@@ -4229,7 +4225,7 @@ static void bpf_task_work_irq(struct irq_work *irq_work)
enum bpf_task_work_state state;
int err;
- guard(rcu_tasks_trace)();
+ guard(rcu)();
if (cmpxchg(&ctx->state, BPF_TW_PENDING, BPF_TW_SCHEDULING) != BPF_TW_PENDING) {
bpf_task_work_ctx_put(ctx);
@@ -4251,9 +4247,9 @@ static void bpf_task_work_irq(struct irq_work *irq_work)
/*
* It's technically possible for just scheduled task_work callback to
* complete running by now, going SCHEDULING -> RUNNING and then
- * dropping its ctx refcount. Instead of capturing extra ref just to
- * protected below ctx->state access, we rely on RCU protection to
- * perform below SCHEDULING -> SCHEDULED attempt.
+ * dropping its ctx refcount. Instead of capturing an extra ref just
+ * to protect below ctx->state access, we rely on rcu_read_lock
+ * above to prevent kfree_rcu from freeing ctx before we return.
*/
state = cmpxchg(&ctx->state, BPF_TW_SCHEDULING, BPF_TW_SCHEDULED);
if (state == BPF_TW_FREED)
@@ -4270,7 +4266,7 @@ static struct bpf_task_work_ctx *bpf_task_work_fetch_ctx(struct bpf_task_work *t
if (ctx)
return ctx;
- ctx = bpf_mem_alloc(&bpf_global_ma, sizeof(struct bpf_task_work_ctx));
+ ctx = kmalloc_nolock(sizeof(*ctx), 0, NUMA_NO_NODE);
if (!ctx)
return ERR_PTR(-ENOMEM);
@@ -4284,7 +4280,7 @@ static struct bpf_task_work_ctx *bpf_task_work_fetch_ctx(struct bpf_task_work *t
* tw->ctx is set by concurrent BPF program, release allocated
* memory and try to reuse already set context.
*/
- bpf_mem_free(&bpf_global_ma, ctx);
+ kfree(ctx);
return old_ctx;
}
@@ -4296,13 +4292,27 @@ static struct bpf_task_work_ctx *bpf_task_work_acquire_ctx(struct bpf_task_work
{
struct bpf_task_work_ctx *ctx;
+ /*
+ * Sleepable BPF programs hold rcu_read_lock_trace but not
+ * regular rcu_read_lock. Since kfree_rcu waits for regular
+ * RCU GP, the ctx can be freed while we're between reading
+ * the pointer and incrementing the refcount. Take regular
+ * rcu_read_lock to prevent kfree_rcu from freeing the ctx
+ * before we can tryget it.
+ */
+ rcu_read_lock();
ctx = bpf_task_work_fetch_ctx(tw, map);
- if (IS_ERR(ctx))
+ if (IS_ERR(ctx)) {
+ rcu_read_unlock();
return ctx;
+ }
/* try to get ref for task_work callback to hold */
- if (!bpf_task_work_ctx_tryget(ctx))
+ if (!bpf_task_work_ctx_tryget(ctx)) {
+ rcu_read_unlock();
return ERR_PTR(-EBUSY);
+ }
+ rcu_read_unlock();
if (cmpxchg(&ctx->state, BPF_TW_STANDBY, BPF_TW_PENDING) != BPF_TW_STANDBY) {
/* lost acquiring race or map_release_uref() stole it from us, put ref and bail */
--
2.52.0
* [PATCH 2/2] bpf: Migrate dynptr file to kmalloc_nolock
2026-03-25 21:11 [PATCH 0/2] bpf: Migrate bpf_task_work and file dynptr to kmalloc_nolock Mykyta Yatsenko
2026-03-25 21:11 ` [PATCH 1/2] bpf: Migrate bpf_task_work " Mykyta Yatsenko
@ 2026-03-25 21:11 ` Mykyta Yatsenko
2026-03-27 3:37 ` Kumar Kartikeya Dwivedi
1 sibling, 1 reply; 7+ messages in thread
From: Mykyta Yatsenko @ 2026-03-25 21:11 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Replace bpf_mem_alloc/bpf_mem_free with kmalloc_nolock/kfree_rcu for
bpf_dynptr_file_impl, continuing the migration away from bpf_mem_alloc
now that kmalloc can be used from NMI context.
freader_cleanup() runs before kfree_rcu() while the dynptr still holds
exclusive access. kfree_rcu() then defers the actual free until after
a grace period.
Add struct rcu_head to bpf_dynptr_file_impl for kfree_rcu().
---
kernel/bpf/helpers.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index b197b6978f1a..b349c8a34e50 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1736,7 +1736,10 @@ static const struct bpf_func_proto bpf_kptr_xchg_proto = {
};
struct bpf_dynptr_file_impl {
- struct freader freader;
+ union {
+ struct freader freader;
+ struct rcu_head rcu;
+ };
/* 64 bit offset and size overriding 32 bit ones in bpf_dynptr_kern */
u64 offset;
u64 size;
@@ -4427,7 +4430,7 @@ static int make_file_dynptr(struct file *file, u32 flags, bool may_sleep,
return -EINVAL;
}
- state = bpf_mem_alloc(&bpf_global_ma, sizeof(struct bpf_dynptr_file_impl));
+ state = kmalloc_nolock(sizeof(*state), 0, NUMA_NO_NODE);
if (!state) {
bpf_dynptr_set_null(ptr);
return -ENOMEM;
@@ -4459,7 +4462,7 @@ __bpf_kfunc int bpf_dynptr_file_discard(struct bpf_dynptr *dynptr)
return 0;
freader_cleanup(&df->freader);
- bpf_mem_free(&bpf_global_ma, df);
+ kfree_rcu(df, rcu);
bpf_dynptr_set_null(ptr);
return 0;
}
--
2.52.0
* Re: [PATCH 2/2] bpf: Migrate dynptr file to kmalloc_nolock
2026-03-25 21:11 ` [PATCH 2/2] bpf: Migrate dynptr file " Mykyta Yatsenko
@ 2026-03-27 3:37 ` Kumar Kartikeya Dwivedi
2026-03-27 14:46 ` Mykyta Yatsenko
0 siblings, 1 reply; 7+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2026-03-27 3:37 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
Mykyta Yatsenko
On Wed, 25 Mar 2026 at 22:12, Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Replace bpf_mem_alloc/bpf_mem_free with kmalloc_nolock/kfree_rcu for
> bpf_dynptr_file_impl, continuing the migration away from bpf_mem_alloc
> now that kmalloc can be used from NMI context.
>
> freader_cleanup() runs before kfree_rcu() while the dynptr still holds
> exclusive access. kfree_rcu() then defers the actual free until after
> a grace period.
>
> Add struct rcu_head to bpf_dynptr_file_impl for kfree_rcu().
> ---
> kernel/bpf/helpers.c | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index b197b6978f1a..b349c8a34e50 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -1736,7 +1736,10 @@ static const struct bpf_func_proto bpf_kptr_xchg_proto = {
> };
>
> struct bpf_dynptr_file_impl {
> - struct freader freader;
> + union {
> + struct freader freader;
> + struct rcu_head rcu;
> + };
Sorry, this is confusing to me. Why do we need RCU gp wait before freeing here?
bpf_mem_free() didn't do any RCU gp before.
> /* 64 bit offset and size overriding 32 bit ones in bpf_dynptr_kern */
> u64 offset;
> u64 size;
> @@ -4427,7 +4430,7 @@ static int make_file_dynptr(struct file *file, u32 flags, bool may_sleep,
> return -EINVAL;
> }
>
> - state = bpf_mem_alloc(&bpf_global_ma, sizeof(struct bpf_dynptr_file_impl));
> + state = kmalloc_nolock(sizeof(*state), 0, NUMA_NO_NODE);
> if (!state) {
> bpf_dynptr_set_null(ptr);
> return -ENOMEM;
> @@ -4459,7 +4462,7 @@ __bpf_kfunc int bpf_dynptr_file_discard(struct bpf_dynptr *dynptr)
> return 0;
>
> freader_cleanup(&df->freader);
> - bpf_mem_free(&bpf_global_ma, df);
> + kfree_rcu(df, rcu);
> bpf_dynptr_set_null(ptr);
> return 0;
> }
>
> --
> 2.52.0
>
* Re: [PATCH 1/2] bpf: Migrate bpf_task_work to kmalloc_nolock
2026-03-25 21:11 ` [PATCH 1/2] bpf: Migrate bpf_task_work " Mykyta Yatsenko
@ 2026-03-27 3:41 ` Kumar Kartikeya Dwivedi
2026-03-27 14:36 ` Mykyta Yatsenko
0 siblings, 1 reply; 7+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2026-03-27 3:41 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
Mykyta Yatsenko
nit: I think you need to target bpf-next in the next respin; the patch
subject is incorrect.
On Wed, 25 Mar 2026 at 22:12, Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Replace bpf_mem_alloc/bpf_mem_free with
> kmalloc_nolock/kfree_rcu for bpf_task_work_ctx.
>
> Replace guard(rcu_tasks_trace)() with guard(rcu)() in
> bpf_task_work_irq(). The function only accesses ctx struct members
> (not map values), so tasks trace protection is not needed - regular
> RCU is sufficient since ctx is freed via kfree_rcu. The guard in
> bpf_task_work_callback() remains as tasks trace since it accesses map
> values from process context.
>
I think a comment in both places would be useful. Also, this bit can
(should?) probably be a separate patch preceding the conversion.
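Something like this, just as a sketch:

	/* Only ctx members are accessed here, no map values, and ctx is
	 * freed via kfree_rcu(), so regular RCU protection is enough.
	 */
	guard(rcu)();

in bpf_task_work_irq(), plus a matching note on why
bpf_task_work_callback() keeps guard(rcu_tasks_trace)() (it touches map
values).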
> Sleepable BPF programs (e.g. BPF_PROG_TYPE_SYSCALL) hold
> rcu_read_lock_trace but not regular rcu_read_lock. Since kfree_rcu
> waits for a regular RCU grace period, the ctx memory can be freed
> while a sleepable program is still running. Add explicit
> rcu_read_lock/unlock around the pointer read and refcount tryget in
> bpf_task_work_acquire_ctx to close this race window.
>
> For the lost-cmpxchg path the ctx was never published, so plain kfree
> is safe.
> ---
> [...]
>
> @@ -4296,13 +4292,27 @@ static struct bpf_task_work_ctx *bpf_task_work_acquire_ctx(struct bpf_task_work
> {
> struct bpf_task_work_ctx *ctx;
>
> + /*
> + * Sleepable BPF programs hold rcu_read_lock_trace but not
> + * regular rcu_read_lock. Since kfree_rcu waits for regular
> + * RCU GP, the ctx can be freed while we're between reading
> + * the pointer and incrementing the refcount. Take regular
> + * rcu_read_lock to prevent kfree_rcu from freeing the ctx
> + * before we can tryget it.
> + */
> + rcu_read_lock();
> ctx = bpf_task_work_fetch_ctx(tw, map);
> - if (IS_ERR(ctx))
> + if (IS_ERR(ctx)) {
> + rcu_read_unlock();
> return ctx;
> + }
>
> /* try to get ref for task_work callback to hold */
> - if (!bpf_task_work_ctx_tryget(ctx))
> + if (!bpf_task_work_ctx_tryget(ctx)) {
> + rcu_read_unlock();
> return ERR_PTR(-EBUSY);
> + }
> + rcu_read_unlock();
nit: This might look cleaner with explicit block {} and guard(rcu)() inside?
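Untested, but e.g.:

	{
		guard(rcu)();

		ctx = bpf_task_work_fetch_ctx(tw, map);
		if (IS_ERR(ctx))
			return ctx;

		/* try to get ref for task_work callback to hold */
		if (!bpf_task_work_ctx_tryget(ctx))
			return ERR_PTR(-EBUSY);
	}

guard() releases the read lock on every exit path, including the early
returns.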
>
> if (cmpxchg(&ctx->state, BPF_TW_STANDBY, BPF_TW_PENDING) != BPF_TW_STANDBY) {
> /* lost acquiring race or map_release_uref() stole it from us, put ref and bail */
>
> --
> 2.52.0
>
* Re: [PATCH 1/2] bpf: Migrate bpf_task_work to kmalloc_nolock
2026-03-27 3:41 ` Kumar Kartikeya Dwivedi
@ 2026-03-27 14:36 ` Mykyta Yatsenko
0 siblings, 0 replies; 7+ messages in thread
From: Mykyta Yatsenko @ 2026-03-27 14:36 UTC (permalink / raw)
To: Kumar Kartikeya Dwivedi
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
Mykyta Yatsenko
Kumar Kartikeya Dwivedi <memxor@gmail.com> writes:
> nit: I think you need to target bpf-next in the next respin; the patch
> subject is incorrect.
Thanks, I forgot to apply the bpf-next prefix.
>
> On Wed, 25 Mar 2026 at 22:12, Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
>>
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Replace bpf_mem_alloc/bpf_mem_free with
>> kmalloc_nolock/kfree_rcu for bpf_task_work_ctx.
>>
>> Replace guard(rcu_tasks_trace)() with guard(rcu)() in
>> bpf_task_work_irq(). The function only accesses ctx struct members
>> (not map values), so tasks trace protection is not needed - regular
>> RCU is sufficient since ctx is freed via kfree_rcu. The guard in
>> bpf_task_work_callback() remains as tasks trace since it accesses map
>> values from process context.
>>
>
> I think a comment in both places would be useful. Also, this bit can
> (should?) probably be a separate patch preceding the conversion.
Do you mean a separate patch to migrate from tasks-trace RCU to regular
RCU? I'm not sure it's worth it; it's not a problem right now, because
bpf_mem_free() actually frees memory only after both the tasks-trace and
regular RCU grace periods. But because we are moving to kfree_rcu(), it
has to be paired with the regular rcu guard, because the free no longer
waits for tasks-trace RCU.
>
>> Sleepable BPF programs (e.g. BPF_PROG_TYPE_SYSCALL) hold
>> rcu_read_lock_trace but not regular rcu_read_lock. Since kfree_rcu
>> waits for a regular RCU grace period, the ctx memory can be freed
>> while a sleepable program is still running. Add explicit
>> rcu_read_lock/unlock around the pointer read and refcount tryget in
>> bpf_task_work_acquire_ctx to close this race window.
>>
>> For the lost-cmpxchg path the ctx was never published, so plain kfree
>> is safe.
>> ---
>> [...]
>>
>> @@ -4296,13 +4292,27 @@ static struct bpf_task_work_ctx *bpf_task_work_acquire_ctx(struct bpf_task_work
>> {
>> struct bpf_task_work_ctx *ctx;
>>
>> + /*
>> + * Sleepable BPF programs hold rcu_read_lock_trace but not
>> + * regular rcu_read_lock. Since kfree_rcu waits for regular
>> + * RCU GP, the ctx can be freed while we're between reading
>> + * the pointer and incrementing the refcount. Take regular
>> + * rcu_read_lock to prevent kfree_rcu from freeing the ctx
>> + * before we can tryget it.
>> + */
>> + rcu_read_lock();
>> ctx = bpf_task_work_fetch_ctx(tw, map);
>> - if (IS_ERR(ctx))
>> + if (IS_ERR(ctx)) {
>> + rcu_read_unlock();
>> return ctx;
>> + }
>>
>> /* try to get ref for task_work callback to hold */
>> - if (!bpf_task_work_ctx_tryget(ctx))
>> + if (!bpf_task_work_ctx_tryget(ctx)) {
>> + rcu_read_unlock();
>> return ERR_PTR(-EBUSY);
>> + }
>> + rcu_read_unlock();
>
> nit: This might look cleaner with explicit block {} and guard(rcu)() inside?
>
yeah, I think you are right.
>>
>> if (cmpxchg(&ctx->state, BPF_TW_STANDBY, BPF_TW_PENDING) != BPF_TW_STANDBY) {
>> /* lost acquiring race or map_release_uref() stole it from us, put ref and bail */
>>
>> --
>> 2.52.0
>>
* Re: [PATCH 2/2] bpf: Migrate dynptr file to kmalloc_nolock
2026-03-27 3:37 ` Kumar Kartikeya Dwivedi
@ 2026-03-27 14:46 ` Mykyta Yatsenko
0 siblings, 0 replies; 7+ messages in thread
From: Mykyta Yatsenko @ 2026-03-27 14:46 UTC (permalink / raw)
To: Kumar Kartikeya Dwivedi
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
Mykyta Yatsenko
Kumar Kartikeya Dwivedi <memxor@gmail.com> writes:
> On Wed, 25 Mar 2026 at 22:12, Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
>>
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Replace bpf_mem_alloc/bpf_mem_free with kmalloc_nolock/kfree_rcu for
>> bpf_dynptr_file_impl, continuing the migration away from bpf_mem_alloc
>> now that kmalloc can be used from NMI context.
>>
>> freader_cleanup() runs before kfree_rcu() while the dynptr still holds
>> exclusive access. kfree_rcu() then defers the actual free until after
>> a grace period.
>>
>> Add struct rcu_head to bpf_dynptr_file_impl for kfree_rcu().
>> ---
>> kernel/bpf/helpers.c | 9 ++++++---
>> 1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
>> index b197b6978f1a..b349c8a34e50 100644
>> --- a/kernel/bpf/helpers.c
>> +++ b/kernel/bpf/helpers.c
>> @@ -1736,7 +1736,10 @@ static const struct bpf_func_proto bpf_kptr_xchg_proto = {
>> };
>>
>> struct bpf_dynptr_file_impl {
>> - struct freader freader;
>> + union {
>> + struct freader freader;
>> + struct rcu_head rcu;
>> + };
>
> Sorry, this is confusing to me. Why do we need RCU gp wait before freeing here?
> bpf_mem_free() didn't do any RCU gp before.
>
Double checked, I think you are right: there's no point in doing
kfree_rcu(), plain kfree() should do, as concurrent access to the dynptr
should not be possible and after discard() nothing should be able to
access it anyway. Thanks.
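So for v2 the discard path would go back to a plain free, roughly:

	freader_cleanup(&df->freader);
	kfree(df);
	bpf_dynptr_set_null(ptr);

and the rcu_head union member added in this patch can be dropped again.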
>> /* 64 bit offset and size overriding 32 bit ones in bpf_dynptr_kern */
>> u64 offset;
>> u64 size;
>> @@ -4427,7 +4430,7 @@ static int make_file_dynptr(struct file *file, u32 flags, bool may_sleep,
>> return -EINVAL;
>> }
>>
>> - state = bpf_mem_alloc(&bpf_global_ma, sizeof(struct bpf_dynptr_file_impl));
>> + state = kmalloc_nolock(sizeof(*state), 0, NUMA_NO_NODE);
>> if (!state) {
>> bpf_dynptr_set_null(ptr);
>> return -ENOMEM;
>> @@ -4459,7 +4462,7 @@ __bpf_kfunc int bpf_dynptr_file_discard(struct bpf_dynptr *dynptr)
>> return 0;
>>
>> freader_cleanup(&df->freader);
>> - bpf_mem_free(&bpf_global_ma, df);
>> + kfree_rcu(df, rcu);
>> bpf_dynptr_set_null(ptr);
>> return 0;
>> }
>>
>> --
>> 2.52.0
>>