* [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period
@ 2026-03-24 21:35 Jann Horn
2026-03-25 3:00 ` David Rientjes
From: Jann Horn @ 2026-03-24 21:35 UTC (permalink / raw)
To: Vlastimil Babka, Harry Yoo, Andrew Morton
Cc: Hao Li, Christoph Lameter, David Rientjes, Roman Gushchin,
Paul E. McKenney, Joel Fernandes, Josh Triplett, Boqun Feng,
Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers,
Lai Jiangshan, Zqiang, Dmitry Vyukov, rcu, linux-mm, linux-kernel,
Jann Horn
Disable CONFIG_KVFREE_RCU_BATCHED in CONFIG_RCU_STRICT_GRACE_PERIOD builds
so that kernel fuzzers have an easier time finding use-after-free involving
kfree_rcu().
The intent behind CONFIG_RCU_STRICT_GRACE_PERIOD is that RCU should invoke
callbacks and free objects as soon as possible (at a large performance
cost) so that kernel fuzzers and such have an easier time detecting
use-after-free bugs in objects with RCU lifetime.
CONFIG_KVFREE_RCU_BATCHED is a performance optimization that queues
RCU-freed objects in ways that CONFIG_RCU_STRICT_GRACE_PERIOD can't
expedite; for example, the following testcase doesn't trigger a KASAN splat
when CONFIG_KVFREE_RCU_BATCHED is enabled:
```
struct foo_struct {
	struct rcu_head rcu;
	int a;
};

struct foo_struct *foo = kmalloc(sizeof(*foo),
				 GFP_KERNEL | __GFP_NOFAIL | __GFP_ZERO);

pr_info("%s: calling kfree_rcu()\n", __func__);
kfree_rcu(foo, rcu);
msleep(10);
pr_info("%s: start UAF access\n", __func__);
READ_ONCE(foo->a);
pr_info("%s: end UAF access\n", __func__);
```
Signed-off-by: Jann Horn <jannh@google.com>
---
mm/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/Kconfig b/mm/Kconfig
index ebd8ea353687..67a72fe89186 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -172,6 +172,7 @@ config SLUB
 config KVFREE_RCU_BATCHED
 	def_bool y
 	depends on !SLUB_TINY && !TINY_RCU
+	depends on !RCU_STRICT_GRACE_PERIOD
 
 config SLUB_TINY
 	bool "Configure for minimal memory footprint"
---
base-commit: b29fb8829bff243512bb8c8908fd39406f9fd4c3
change-id: 20260324-kasan-kfree-rcu-4e7f490237ef
--
Jann Horn <jannh@google.com>
* Re: [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period
2026-03-24 21:35 [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period Jann Horn
@ 2026-03-25 3:00 ` David Rientjes
2026-03-25 3:02 ` Joel Fernandes
From: David Rientjes @ 2026-03-25 3:00 UTC (permalink / raw)
To: Jann Horn
Cc: Vlastimil Babka, Harry Yoo, Andrew Morton, Hao Li,
Christoph Lameter, Roman Gushchin, Paul E. McKenney,
Joel Fernandes, Josh Triplett, Boqun Feng, Uladzislau Rezki,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
Dmitry Vyukov, rcu, linux-mm, linux-kernel
On Tue, 24 Mar 2026, Jann Horn wrote:
> Disable CONFIG_KVFREE_RCU_BATCHED in CONFIG_RCU_STRICT_GRACE_PERIOD builds
> so that kernel fuzzers have an easier time finding use-after-free involving
> kfree_rcu().
>
> The intent behind CONFIG_RCU_STRICT_GRACE_PERIOD is that RCU should invoke
> callbacks and free objects as soon as possible (at a large performance
> cost) so that kernel fuzzers and such have an easier time detecting
> use-after-free bugs in objects with RCU lifetime.
>
> CONFIG_KVFREE_RCU_BATCHED is a performance optimization that queues
> RCU-freed objects in ways that CONFIG_RCU_STRICT_GRACE_PERIOD can't
> expedite; for example, the following testcase doesn't trigger a KASAN splat
> when CONFIG_KVFREE_RCU_BATCHED is enabled:
> ```
> struct foo_struct {
> struct rcu_head rcu;
> int a;
> };
> struct foo_struct *foo = kmalloc(sizeof(*foo),
> GFP_KERNEL | __GFP_NOFAIL | __GFP_ZERO);
>
> pr_info("%s: calling kfree_rcu()\n", __func__);
> kfree_rcu(foo, rcu);
> msleep(10);
> pr_info("%s: start UAF access\n", __func__);
> READ_ONCE(foo->a);
> pr_info("%s: end UAF access\n", __func__);
> ```
>
> Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: David Rientjes <rientjes@google.com>
* Re: [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period
2026-03-24 21:35 [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period Jann Horn
2026-03-25 3:00 ` David Rientjes
@ 2026-03-25 3:02 ` Joel Fernandes
2026-03-25 5:54 ` Harry Yoo (Oracle)
2026-03-25 7:50 ` Vlastimil Babka (SUSE)
From: Joel Fernandes @ 2026-03-25 3:02 UTC (permalink / raw)
To: Jann Horn, Vlastimil Babka, Harry Yoo, Andrew Morton
Cc: Hao Li, Christoph Lameter, David Rientjes, Roman Gushchin,
Paul E. McKenney, Josh Triplett, Boqun Feng, Uladzislau Rezki,
Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang,
Dmitry Vyukov, rcu, linux-mm, linux-kernel
On 3/24/2026 5:35 PM, Jann Horn wrote:
> Disable CONFIG_KVFREE_RCU_BATCHED in CONFIG_RCU_STRICT_GRACE_PERIOD builds
> so that kernel fuzzers have an easier time finding use-after-free involving
> kfree_rcu().
>
> The intent behind CONFIG_RCU_STRICT_GRACE_PERIOD is that RCU should invoke
> callbacks and free objects as soon as possible (at a large performance
> cost) so that kernel fuzzers and such have an easier time detecting
> use-after-free bugs in objects with RCU lifetime.
>
> CONFIG_KVFREE_RCU_BATCHED is a performance optimization that queues
> RCU-freed objects in ways that CONFIG_RCU_STRICT_GRACE_PERIOD can't
> expedite; for example, the following testcase doesn't trigger a KASAN splat
> when CONFIG_KVFREE_RCU_BATCHED is enabled:
> ```
> struct foo_struct {
> struct rcu_head rcu;
> int a;
> };
> struct foo_struct *foo = kmalloc(sizeof(*foo),
> GFP_KERNEL | __GFP_NOFAIL | __GFP_ZERO);
>
> pr_info("%s: calling kfree_rcu()\n", __func__);
> kfree_rcu(foo, rcu);
> msleep(10);
> pr_info("%s: start UAF access\n", __func__);
> READ_ONCE(foo->a);
> pr_info("%s: end UAF access\n", __func__);
> ```
>
> Signed-off-by: Jann Horn <jannh@google.com>
Reviewed-by: Joel Fernandes <joelagnelf@nvidia.com>
* Re: [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period
2026-03-24 21:35 [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period Jann Horn
2026-03-25 3:00 ` David Rientjes
2026-03-25 3:02 ` Joel Fernandes
@ 2026-03-25 5:54 ` Harry Yoo (Oracle)
2026-03-25 7:50 ` Vlastimil Babka (SUSE)
From: Harry Yoo (Oracle) @ 2026-03-25 5:54 UTC (permalink / raw)
To: Jann Horn
Cc: Vlastimil Babka, Harry Yoo, Andrew Morton, Hao Li,
Christoph Lameter, David Rientjes, Roman Gushchin,
Paul E. McKenney, Joel Fernandes, Josh Triplett, Boqun Feng,
Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers,
Lai Jiangshan, Zqiang, Dmitry Vyukov, rcu, linux-mm, linux-kernel
On Tue, Mar 24, 2026 at 10:35:12PM +0100, Jann Horn wrote:
> Disable CONFIG_KVFREE_RCU_BATCHED in CONFIG_RCU_STRICT_GRACE_PERIOD builds
> so that kernel fuzzers have an easier time finding use-after-free involving
> kfree_rcu().
>
> The intent behind CONFIG_RCU_STRICT_GRACE_PERIOD is that RCU should invoke
> callbacks and free objects as soon as possible (at a large performance
> cost) so that kernel fuzzers and such have an easier time detecting
> use-after-free bugs in objects with RCU lifetime.
>
> CONFIG_KVFREE_RCU_BATCHED is a performance optimization that queues
> RCU-freed objects in ways that CONFIG_RCU_STRICT_GRACE_PERIOD can't
> expedite; for example, the following testcase doesn't trigger a KASAN splat
> when CONFIG_KVFREE_RCU_BATCHED is enabled:
> ```
> struct foo_struct {
> struct rcu_head rcu;
> int a;
> };
> struct foo_struct *foo = kmalloc(sizeof(*foo),
> GFP_KERNEL | __GFP_NOFAIL | __GFP_ZERO);
>
> pr_info("%s: calling kfree_rcu()\n", __func__);
> kfree_rcu(foo, rcu);
> msleep(10);
> pr_info("%s: start UAF access\n", __func__);
> READ_ONCE(foo->a);
> pr_info("%s: end UAF access\n", __func__);
> ```
>
> Signed-off-by: Jann Horn <jannh@google.com>
> ---
Acked-by: Harry Yoo (Oracle) <harry@kernel.org>
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period
2026-03-24 21:35 [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period Jann Horn
2026-03-25 5:54 ` Harry Yoo (Oracle)
@ 2026-03-25 7:50 ` Vlastimil Babka (SUSE)
2026-03-25 8:21 ` Harry Yoo (Oracle)
From: Vlastimil Babka (SUSE) @ 2026-03-25 7:50 UTC (permalink / raw)
To: Jann Horn, Harry Yoo, Andrew Morton
Cc: Hao Li, Christoph Lameter, David Rientjes, Roman Gushchin,
Paul E. McKenney, Joel Fernandes, Josh Triplett, Boqun Feng,
Uladzislau Rezki, Steven Rostedt, Mathieu Desnoyers,
Lai Jiangshan, Zqiang, Dmitry Vyukov, rcu, linux-mm, linux-kernel
On 3/24/26 22:35, Jann Horn wrote:
> Disable CONFIG_KVFREE_RCU_BATCHED in CONFIG_RCU_STRICT_GRACE_PERIOD builds
> so that kernel fuzzers have an easier time finding use-after-free involving
> kfree_rcu().
>
> The intent behind CONFIG_RCU_STRICT_GRACE_PERIOD is that RCU should invoke
> callbacks and free objects as soon as possible (at a large performance
> cost) so that kernel fuzzers and such have an easier time detecting
> use-after-free bugs in objects with RCU lifetime.
>
> CONFIG_KVFREE_RCU_BATCHED is a performance optimization that queues
> RCU-freed objects in ways that CONFIG_RCU_STRICT_GRACE_PERIOD can't
> expedite; for example, the following testcase doesn't trigger a KASAN splat
> when CONFIG_KVFREE_RCU_BATCHED is enabled:
> ```
> struct foo_struct {
> struct rcu_head rcu;
> int a;
> };
> struct foo_struct *foo = kmalloc(sizeof(*foo),
> GFP_KERNEL | __GFP_NOFAIL | __GFP_ZERO);
>
> pr_info("%s: calling kfree_rcu()\n", __func__);
> kfree_rcu(foo, rcu);
> msleep(10);
> pr_info("%s: start UAF access\n", __func__);
> READ_ONCE(foo->a);
> pr_info("%s: end UAF access\n", __func__);
> ```
>
> Signed-off-by: Jann Horn <jannh@google.com>
Hm but with 7.0 we have sheaves everywhere including kmalloc caches, and
there's a percpu rcu_free sheaf collecting kfree_rcu'd objects. Only when
it's full it's submitted to call_rcu() where the callback rcu_free_sheaf()
runs slab_free_hook() including kasan hooks etc. If there's nothing filling
the rcu_free sheaf, the objects can sit there possibly indefinitely.
That means CONFIG_KVFREE_RCU_BATCHED now handles only the rare cases where
kfree_rcu() to the rcu_free sheaf fails (and I still owe it to Ulad to do
something about this).
So to complete the intent of this patch, we should perhaps also skip the
rcu_free sheaf with RCU_STRICT_GRACE_PERIOD? (or with !KVFREE_RCU_BATCHED
perhaps as it's also a form of batching).
But then I wonder if the testcase in the changelog appeared to be fixed with
this patch on a 7.0-rcX kernel (base-commit: below is rc3+) because by my
understanding it shouldn't have been. (unless there happened to be enough
kfree_rcu() activity on that cpu+kmalloc cache combination, so that the
rcu_free sheaf got submitted within that msleep(10)).
> ---
> mm/Kconfig | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/Kconfig b/mm/Kconfig
> index ebd8ea353687..67a72fe89186 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -172,6 +172,7 @@ config SLUB
> config KVFREE_RCU_BATCHED
> def_bool y
> depends on !SLUB_TINY && !TINY_RCU
> + depends on !RCU_STRICT_GRACE_PERIOD
>
> config SLUB_TINY
> bool "Configure for minimal memory footprint"
>
> ---
> base-commit: b29fb8829bff243512bb8c8908fd39406f9fd4c3
> change-id: 20260324-kasan-kfree-rcu-4e7f490237ef
>
> --
> Jann Horn <jannh@google.com>
>
* Re: [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period
2026-03-25 7:50 ` Vlastimil Babka (SUSE)
@ 2026-03-25 8:21 ` Harry Yoo (Oracle)
2026-03-25 8:34 ` Vlastimil Babka (SUSE)
From: Harry Yoo (Oracle) @ 2026-03-25 8:21 UTC (permalink / raw)
To: Vlastimil Babka (SUSE)
Cc: Jann Horn, Andrew Morton, Hao Li, Christoph Lameter,
David Rientjes, Roman Gushchin, Paul E. McKenney, Joel Fernandes,
Josh Triplett, Boqun Feng, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang, Dmitry Vyukov, rcu,
linux-mm, linux-kernel
On Wed, Mar 25, 2026 at 08:50:07AM +0100, Vlastimil Babka (SUSE) wrote:
> On 3/24/26 22:35, Jann Horn wrote:
> > Disable CONFIG_KVFREE_RCU_BATCHED in CONFIG_RCU_STRICT_GRACE_PERIOD builds
> > so that kernel fuzzers have an easier time finding use-after-free involving
> > kfree_rcu().
> >
> > The intent behind CONFIG_RCU_STRICT_GRACE_PERIOD is that RCU should invoke
> > callbacks and free objects as soon as possible (at a large performance
> > cost) so that kernel fuzzers and such have an easier time detecting
> > use-after-free bugs in objects with RCU lifetime.
> >
> > CONFIG_KVFREE_RCU_BATCHED is a performance optimization that queues
> > RCU-freed objects in ways that CONFIG_RCU_STRICT_GRACE_PERIOD can't
> > expedite; for example, the following testcase doesn't trigger a KASAN splat
> > when CONFIG_KVFREE_RCU_BATCHED is enabled:
> > ```
> > struct foo_struct {
> > struct rcu_head rcu;
> > int a;
> > };
> > struct foo_struct *foo = kmalloc(sizeof(*foo),
> > GFP_KERNEL | __GFP_NOFAIL | __GFP_ZERO);
> >
> > pr_info("%s: calling kfree_rcu()\n", __func__);
> > kfree_rcu(foo, rcu);
> > msleep(10);
> > pr_info("%s: start UAF access\n", __func__);
> > READ_ONCE(foo->a);
> > pr_info("%s: end UAF access\n", __func__);
> > ```
> >
> > Signed-off-by: Jann Horn <jannh@google.com>
>
> Hm but with 7.0 we have sheaves everywhere including kmalloc caches, and
> there's a percpu rcu_free sheaf collecting kfree_rcu'd objects.
Right, but only when CONFIG_KVFREE_RCU_BATCHED=y
> Only when
> it's full it's submitted to call_rcu() where the callback rcu_free_sheaf()
> runs slab_free_hook() including kasan hooks etc. If there's nothing filling
> the rcu_free sheaf, the objects can sit there possibly indefinitely.
Right.
> That means CONFIG_KVFREE_RCU_BATCHED now handles only the rare cases where
> kfree_rcu() to the rcu_free sheaf fails (and I still owe it to Ulad to do
> something about this).
Right.
> So to complete the intent of this patch, we should perhaps also skip the
> rcu_free sheaf with RCU_STRICT_GRACE_PERIOD? (or with !KVFREE_RCU_BATCHED
> perhaps as it's also a form of batching).
Maybe I'm missing something, but...
by making KVFREE_RCU_BATCHED depend on !RCU_STRICT_GRACE_PERIOD,
selecting RCU_STRICT_GRACE_PERIOD disables all uses of rcu_free sheaves?
kvfree_call_rcu() implementation on !KVFREE_RCU_BATCHED does not call
kfree_rcu_sheaf().
> But then I wonder if the testcase in the changelog appeared to be fixed with
> this patch on a 7.0-rcX kernel (base-commit: below is rc3+) because by my
> understanding it shouldn't have been. (unless there happened to be enough
> kfree_rcu() activity on that cpu+kmalloc cache combination, so that the
> rcu_free sheaf got submitted within that msleep(10)).
>
> > ---
> > mm/Kconfig | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index ebd8ea353687..67a72fe89186 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -172,6 +172,7 @@ config SLUB
> > config KVFREE_RCU_BATCHED
> > def_bool y
> > depends on !SLUB_TINY && !TINY_RCU
> > + depends on !RCU_STRICT_GRACE_PERIOD
> >
> > config SLUB_TINY
> > bool "Configure for minimal memory footprint"
> >
> > ---
> > base-commit: b29fb8829bff243512bb8c8908fd39406f9fd4c3
> > change-id: 20260324-kasan-kfree-rcu-4e7f490237ef
> >
> > --
> > Jann Horn <jannh@google.com>
> >
>
>
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period
2026-03-25 8:21 ` Harry Yoo (Oracle)
@ 2026-03-25 8:34 ` Vlastimil Babka (SUSE)
2026-03-25 8:41 ` Harry Yoo (Oracle)
From: Vlastimil Babka (SUSE) @ 2026-03-25 8:34 UTC (permalink / raw)
To: Harry Yoo (Oracle)
Cc: Jann Horn, Andrew Morton, Hao Li, Christoph Lameter,
David Rientjes, Roman Gushchin, Paul E. McKenney, Joel Fernandes,
Josh Triplett, Boqun Feng, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang, Dmitry Vyukov, rcu,
linux-mm, linux-kernel
On 3/25/26 09:21, Harry Yoo (Oracle) wrote:
> On Wed, Mar 25, 2026 at 08:50:07AM +0100, Vlastimil Babka (SUSE) wrote:
>> On 3/24/26 22:35, Jann Horn wrote:
>> > Disable CONFIG_KVFREE_RCU_BATCHED in CONFIG_RCU_STRICT_GRACE_PERIOD builds
>> > so that kernel fuzzers have an easier time finding use-after-free involving
>> > kfree_rcu().
>> >
>> > The intent behind CONFIG_RCU_STRICT_GRACE_PERIOD is that RCU should invoke
>> > callbacks and free objects as soon as possible (at a large performance
>> > cost) so that kernel fuzzers and such have an easier time detecting
>> > use-after-free bugs in objects with RCU lifetime.
>> >
>> > CONFIG_KVFREE_RCU_BATCHED is a performance optimization that queues
>> > RCU-freed objects in ways that CONFIG_RCU_STRICT_GRACE_PERIOD can't
>> > expedite; for example, the following testcase doesn't trigger a KASAN splat
>> > when CONFIG_KVFREE_RCU_BATCHED is enabled:
>> > ```
>> > struct foo_struct {
>> > struct rcu_head rcu;
>> > int a;
>> > };
>> > struct foo_struct *foo = kmalloc(sizeof(*foo),
>> > GFP_KERNEL | __GFP_NOFAIL | __GFP_ZERO);
>> >
>> > pr_info("%s: calling kfree_rcu()\n", __func__);
>> > kfree_rcu(foo, rcu);
>> > msleep(10);
>> > pr_info("%s: start UAF access\n", __func__);
>> > READ_ONCE(foo->a);
>> > pr_info("%s: end UAF access\n", __func__);
>> > ```
>> >
>> > Signed-off-by: Jann Horn <jannh@google.com>
>>
>> Hm but with 7.0 we have sheaves everywhere including kmalloc caches, and
>> there's a percpu rcu_free sheaf collecting kfree_rcu'd objects.
>
> Right, but only when CONFIG_KVFREE_RCU_BATCHED=y
>
>> Only when
>> it's full it's submitted to call_rcu() where the callback rcu_free_sheaf()
>> runs slab_free_hook() including kasan hooks etc. If there's nothing filling
>> the rcu_free sheaf, the objects can sit there possibly indefinitely.
>
> Right.
>
>> That means CONFIG_KVFREE_RCU_BATCHED now handles only the rare cases where
>> kfree_rcu() to the rcu_free sheaf fails (and I still owe it to Ulad to do
>> something about this).
>
> Right.
>
>> So to complete the intent of this patch, we should perhaps also skip the
>> rcu_free sheaf with RCU_STRICT_GRACE_PERIOD? (or with !KVFREE_RCU_BATCHED
>> perhaps as it's also a form of batching).
>
> Maybe I'm missing something, but...
>
> by making KVFREE_RCU_BATCHED depend on !RCU_STRICT_GRACE_PERIOD,
> selecting RCU_STRICT_GRACE_PERIOD disables all uses of rcu_free sheaves?
>
> kvfree_call_rcu() implementation on !KVFREE_RCU_BATCHED does not call
> kfree_rcu_sheaf().
Ah yeah, I missed that there are two kvfree_call_rcu() implementations and
kfree_rcu_sheaf() is only used in the batched one. Sorry for the noise.
Will queue the patch.
>> But then I wonder if the testcase in the changelog appeared to be fixed with
>> this patch on a 7.0-rcX kernel (base-commit: below is rc3+) because by my
>> understanding it shouldn't have been. (unless there happened to be enough
>> kfree_rcu() activity on that cpu+kmalloc cache combination, so that the
>> rcu_free sheaf got submitted within that msleep(10)).
>>
>> > ---
>> > mm/Kconfig | 1 +
>> > 1 file changed, 1 insertion(+)
>> >
>> > diff --git a/mm/Kconfig b/mm/Kconfig
>> > index ebd8ea353687..67a72fe89186 100644
>> > --- a/mm/Kconfig
>> > +++ b/mm/Kconfig
>> > @@ -172,6 +172,7 @@ config SLUB
>> > config KVFREE_RCU_BATCHED
>> > def_bool y
>> > depends on !SLUB_TINY && !TINY_RCU
>> > + depends on !RCU_STRICT_GRACE_PERIOD
>> >
>> > config SLUB_TINY
>> > bool "Configure for minimal memory footprint"
>> >
>> > ---
>> > base-commit: b29fb8829bff243512bb8c8908fd39406f9fd4c3
>> > change-id: 20260324-kasan-kfree-rcu-4e7f490237ef
>> >
>> > --
>> > Jann Horn <jannh@google.com>
>> >
>>
>>
>
* Re: [PATCH] slab,rcu: disable KVFREE_RCU_BATCHED for strict grace period
2026-03-25 8:34 ` Vlastimil Babka (SUSE)
@ 2026-03-25 8:41 ` Harry Yoo (Oracle)
From: Harry Yoo (Oracle) @ 2026-03-25 8:41 UTC (permalink / raw)
To: Vlastimil Babka (SUSE)
Cc: Jann Horn, Andrew Morton, Hao Li, Christoph Lameter,
David Rientjes, Roman Gushchin, Paul E. McKenney, Joel Fernandes,
Josh Triplett, Boqun Feng, Uladzislau Rezki, Steven Rostedt,
Mathieu Desnoyers, Lai Jiangshan, Zqiang, Dmitry Vyukov, rcu,
linux-mm, linux-kernel
On Wed, Mar 25, 2026 at 09:34:40AM +0100, Vlastimil Babka (SUSE) wrote:
> On 3/25/26 09:21, Harry Yoo (Oracle) wrote:
> > On Wed, Mar 25, 2026 at 08:50:07AM +0100, Vlastimil Babka (SUSE) wrote:
> >> On 3/24/26 22:35, Jann Horn wrote:
> >> > Disable CONFIG_KVFREE_RCU_BATCHED in CONFIG_RCU_STRICT_GRACE_PERIOD builds
> >> > so that kernel fuzzers have an easier time finding use-after-free involving
> >> > kfree_rcu().
> >> >
> >> > The intent behind CONFIG_RCU_STRICT_GRACE_PERIOD is that RCU should invoke
> >> > callbacks and free objects as soon as possible (at a large performance
> >> > cost) so that kernel fuzzers and such have an easier time detecting
> >> > use-after-free bugs in objects with RCU lifetime.
> >> >
> >> > CONFIG_KVFREE_RCU_BATCHED is a performance optimization that queues
> >> > RCU-freed objects in ways that CONFIG_RCU_STRICT_GRACE_PERIOD can't
> >> > expedite; for example, the following testcase doesn't trigger a KASAN splat
> >> > when CONFIG_KVFREE_RCU_BATCHED is enabled:
> >> > ```
> >> > struct foo_struct {
> >> > struct rcu_head rcu;
> >> > int a;
> >> > };
> >> > struct foo_struct *foo = kmalloc(sizeof(*foo),
> >> > GFP_KERNEL | __GFP_NOFAIL | __GFP_ZERO);
> >> >
> >> > pr_info("%s: calling kfree_rcu()\n", __func__);
> >> > kfree_rcu(foo, rcu);
> >> > msleep(10);
> >> > pr_info("%s: start UAF access\n", __func__);
> >> > READ_ONCE(foo->a);
> >> > pr_info("%s: end UAF access\n", __func__);
> >> > ```
> >> >
> >> > Signed-off-by: Jann Horn <jannh@google.com>
> >>
> >> Hm but with 7.0 we have sheaves everywhere including kmalloc caches, and
> >> there's a percpu rcu_free sheaf collecting kfree_rcu'd objects.
> >
> > Right, but only when CONFIG_KVFREE_RCU_BATCHED=y
> >
> >> Only when
> >> it's full it's submitted to call_rcu() where the callback rcu_free_sheaf()
> >> runs slab_free_hook() including kasan hooks etc. If there's nothing filling
> >> the rcu_free sheaf, the objects can sit there possibly indefinitely.
> >
> > Right.
> >
> >> That means CONFIG_KVFREE_RCU_BATCHED now handles only the rare cases where
> >> kfree_rcu() to the rcu_free sheaf fails (and I still owe it to Ulad to do
> >> something about this).
> >
> > Right.
> >
> >> So to complete the intent of this patch, we should perhaps also skip the
> >> rcu_free sheaf with RCU_STRICT_GRACE_PERIOD? (or with !KVFREE_RCU_BATCHED
> >> perhaps as it's also a form of batching).
> >
> > Maybe I'm missing something, but...
> >
> > by making KVFREE_RCU_BATCHED depend on !RCU_STRICT_GRACE_PERIOD,
> > selecting RCU_STRICT_GRACE_PERIOD disables all uses of rcu_free sheaves?
> >
> > kvfree_call_rcu() implementation on !KVFREE_RCU_BATCHED does not call
> > kfree_rcu_sheaf().
>
> Ah yeah, I missed that there are two kvfree_call_rcu() implementations and
> kfree_rcu_sheaf() is only used in the batched one. Sorry for the noise.
It's confusing indeed. I was trapped by this yesterday, thinking...
"Oh, why doesn't kvfree_rcu_barrier() on !KVFREE_RCU_BATCHED flush
rcu sheaves? It's broken!"
and then realized that I was confused :)
> Will queue the patch
Thanks!
--
Cheers,
Harry / Hyeonggon