public inbox for stable@vger.kernel.org
* [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc()
       [not found] <CANP3RGeuRW53vukDy7WDO3FiVgu34-xVJYkfpm08oLO3odYFrA@mail.gmail.com>
@ 2026-01-13 19:15 ` Andrey Ryabinin
  2026-01-14 12:17   ` Maciej Wieczor-Retman
  2026-01-15  3:56   ` Andrey Konovalov
  0 siblings, 2 replies; 8+ messages in thread
From: Andrey Ryabinin @ 2026-01-13 19:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Maciej Żenczykowski, Maciej Wieczor-Retman,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, kasan-dev, Uladzislau Rezki, linux-kernel,
	linux-mm, Andrey Ryabinin, joonki.min, stable

A KASAN warning can be triggered when vrealloc() changes the requested
size to a value that is not aligned to KASAN_GRANULE_SIZE.

    ------------[ cut here ]------------
    WARNING: CPU: 2 PID: 1 at mm/kasan/shadow.c:174 kasan_unpoison+0x40/0x48
    ...
    pc : kasan_unpoison+0x40/0x48
    lr : __kasan_unpoison_vmalloc+0x40/0x68
    Call trace:
     kasan_unpoison+0x40/0x48 (P)
     vrealloc_node_align_noprof+0x200/0x320
     bpf_patch_insn_data+0x90/0x2f0
     convert_ctx_accesses+0x8c0/0x1158
     bpf_check+0x1488/0x1900
     bpf_prog_load+0xd20/0x1258
     __sys_bpf+0x96c/0xdf0
     __arm64_sys_bpf+0x50/0xa0
     invoke_syscall+0x90/0x160

Introduce a dedicated kasan_vrealloc() helper that centralizes
KASAN handling for vmalloc reallocations. The helper accounts for KASAN
granule alignment when growing or shrinking an allocation and ensures
that partial granules are handled correctly.

Use this helper from vrealloc_node_align_noprof() to fix poisoning
logic.

Reported-by: Maciej Żenczykowski <maze@google.com>
Reported-by: <joonki.min@samsung-slsi.corp-partner.google.com>
Closes: https://lkml.kernel.org/r/CANP3RGeuRW53vukDy7WDO3FiVgu34-xVJYkfpm08oLO3odYFrA@mail.gmail.com
Fixes: d699440f58ce ("mm: fix vrealloc()'s KASAN poisoning logic")
Cc: stable@vger.kernel.org
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
---
 include/linux/kasan.h |  6 ++++++
 mm/kasan/shadow.c     | 24 ++++++++++++++++++++++++
 mm/vmalloc.c          |  7 ++-----
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 9c6ac4b62eb9..ff27712dd3c8 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -641,6 +641,9 @@ kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
 		__kasan_unpoison_vmap_areas(vms, nr_vms, flags);
 }
 
+void kasan_vrealloc(const void *start, unsigned long old_size,
+		unsigned long new_size);
+
 #else /* CONFIG_KASAN_VMALLOC */
 
 static inline void kasan_populate_early_vm_area_shadow(void *start,
@@ -670,6 +673,9 @@ kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
 			  kasan_vmalloc_flags_t flags)
 { }
 
+static inline void kasan_vrealloc(const void *start, unsigned long old_size,
+				unsigned long new_size) { }
+
 #endif /* CONFIG_KASAN_VMALLOC */
 
 #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 32fbdf759ea2..e9b6b2d8e651 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -651,6 +651,30 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
 	kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
 }
 
+void kasan_vrealloc(const void *addr, unsigned long old_size,
+		unsigned long new_size)
+{
+	if (!kasan_enabled())
+		return;
+
+	if (new_size < old_size) {
+		kasan_poison_last_granule(addr, new_size);
+
+		new_size = round_up(new_size, KASAN_GRANULE_SIZE);
+		old_size = round_up(old_size, KASAN_GRANULE_SIZE);
+		if (new_size < old_size)
+			__kasan_poison_vmalloc(addr + new_size,
+					old_size - new_size);
+	} else if (new_size > old_size) {
+		old_size = round_down(old_size, KASAN_GRANULE_SIZE);
+		__kasan_unpoison_vmalloc(addr + old_size,
+					new_size - old_size,
+					KASAN_VMALLOC_PROT_NORMAL |
+					KASAN_VMALLOC_VM_ALLOC |
+					KASAN_VMALLOC_KEEP_TAG);
+	}
+}
+
 #else /* CONFIG_KASAN_VMALLOC */
 
 int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 41dd01e8430c..2536d34df058 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4322,7 +4322,7 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
 		if (want_init_on_free() || want_init_on_alloc(flags))
 			memset((void *)p + size, 0, old_size - size);
 		vm->requested_size = size;
-		kasan_poison_vmalloc(p + size, old_size - size);
+		kasan_vrealloc(p, old_size, size);
 		return (void *)p;
 	}
 
@@ -4330,16 +4330,13 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
 	 * We already have the bytes available in the allocation; use them.
 	 */
 	if (size <= alloced_size) {
-		kasan_unpoison_vmalloc(p + old_size, size - old_size,
-				       KASAN_VMALLOC_PROT_NORMAL |
-				       KASAN_VMALLOC_VM_ALLOC |
-				       KASAN_VMALLOC_KEEP_TAG);
 		/*
 		 * No need to zero memory here, as unused memory will have
 		 * already been zeroed at initial allocation time or during
 		 * realloc shrink time.
 		 */
 		vm->requested_size = size;
+		kasan_vrealloc(p, old_size, size);
 		return (void *)p;
 	}
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc()
  2026-01-13 19:15 ` [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc() Andrey Ryabinin
@ 2026-01-14 12:17   ` Maciej Wieczor-Retman
  2026-01-15  3:56   ` Andrey Konovalov
  1 sibling, 0 replies; 8+ messages in thread
From: Maciej Wieczor-Retman @ 2026-01-14 12:17 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Maciej Żenczykowski, Alexander Potapenko,
	Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Uladzislau Rezki, linux-kernel, linux-mm, joonki.min, stable

Tested in generic and sw_tags modes. Compiles and runs fine on x86 with and
without my KASAN sw-tags patches. KUnit tests also pass.

Tested-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>


-- 
Kind regards
Maciej Wieczór-Retman



* Re: [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc()
  2026-01-13 19:15 ` [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc() Andrey Ryabinin
  2026-01-14 12:17   ` Maciej Wieczor-Retman
@ 2026-01-15  3:56   ` Andrey Konovalov
  2026-01-16 13:26     ` Andrey Ryabinin
  1 sibling, 1 reply; 8+ messages in thread
From: Andrey Konovalov @ 2026-01-15  3:56 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Maciej Żenczykowski, Maciej Wieczor-Retman,
	Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Uladzislau Rezki, linux-kernel, linux-mm, joonki.min, stable

On Tue, Jan 13, 2026 at 8:16 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>
> A KASAN warning can be triggered when vrealloc() changes the requested
> size to a value that is not aligned to KASAN_GRANULE_SIZE.
>
>     ------------[ cut here ]------------
>     WARNING: CPU: 2 PID: 1 at mm/kasan/shadow.c:174 kasan_unpoison+0x40/0x48
>     ...
>     pc : kasan_unpoison+0x40/0x48
>     lr : __kasan_unpoison_vmalloc+0x40/0x68
>     Call trace:
>      kasan_unpoison+0x40/0x48 (P)
>      vrealloc_node_align_noprof+0x200/0x320
>      bpf_patch_insn_data+0x90/0x2f0
>      convert_ctx_accesses+0x8c0/0x1158
>      bpf_check+0x1488/0x1900
>      bpf_prog_load+0xd20/0x1258
>      __sys_bpf+0x96c/0xdf0
>      __arm64_sys_bpf+0x50/0xa0
>      invoke_syscall+0x90/0x160
>
> Introduce a dedicated kasan_vrealloc() helper that centralizes
> KASAN handling for vmalloc reallocations. The helper accounts for KASAN
> granule alignment when growing or shrinking an allocation and ensures
> that partial granules are handled correctly.
>
> Use this helper from vrealloc_node_align_noprof() to fix poisoning
> logic.
>
> Reported-by: Maciej Żenczykowski <maze@google.com>
> Reported-by: <joonki.min@samsung-slsi.corp-partner.google.com>
> Closes: https://lkml.kernel.org/r/CANP3RGeuRW53vukDy7WDO3FiVgu34-xVJYkfpm08oLO3odYFrA@mail.gmail.com
> Fixes: d699440f58ce ("mm: fix vrealloc()'s KASAN poisoning logic")
> Cc: stable@vger.kernel.org
> Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> ---
>  include/linux/kasan.h |  6 ++++++
>  mm/kasan/shadow.c     | 24 ++++++++++++++++++++++++
>  mm/vmalloc.c          |  7 ++-----
>  3 files changed, 32 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index 9c6ac4b62eb9..ff27712dd3c8 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -641,6 +641,9 @@ kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
>                 __kasan_unpoison_vmap_areas(vms, nr_vms, flags);
>  }
>
> +void kasan_vrealloc(const void *start, unsigned long old_size,
> +               unsigned long new_size);
> +
>  #else /* CONFIG_KASAN_VMALLOC */
>
>  static inline void kasan_populate_early_vm_area_shadow(void *start,
> @@ -670,6 +673,9 @@ kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
>                           kasan_vmalloc_flags_t flags)
>  { }
>
> +static inline void kasan_vrealloc(const void *start, unsigned long old_size,
> +                               unsigned long new_size) { }
> +
>  #endif /* CONFIG_KASAN_VMALLOC */
>
>  #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index 32fbdf759ea2..e9b6b2d8e651 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -651,6 +651,30 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
>         kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
>  }
>
> +void kasan_vrealloc(const void *addr, unsigned long old_size,
> +               unsigned long new_size)
> +{
> +       if (!kasan_enabled())
> +               return;

Please move this check to include/linux/kasan.h and add
__kasan_vrealloc, similar to other hooks.

Otherwise, these kasan_enabled() checks eventually start creeping into
lower-level KASAN functions, and this makes the logic hard to follow.
We recently cleaned up most of these checks.

> +
> +       if (new_size < old_size) {
> +               kasan_poison_last_granule(addr, new_size);
> +
> +               new_size = round_up(new_size, KASAN_GRANULE_SIZE);
> +               old_size = round_up(old_size, KASAN_GRANULE_SIZE);
> +               if (new_size < old_size)
> +                       __kasan_poison_vmalloc(addr + new_size,
> +                                       old_size - new_size);
> +       } else if (new_size > old_size) {
> +               old_size = round_down(old_size, KASAN_GRANULE_SIZE);
> +               __kasan_unpoison_vmalloc(addr + old_size,
> +                                       new_size - old_size,
> +                                       KASAN_VMALLOC_PROT_NORMAL |
> +                                       KASAN_VMALLOC_VM_ALLOC |
> +                                       KASAN_VMALLOC_KEEP_TAG);
> +       }
> +}
> +
>  #else /* CONFIG_KASAN_VMALLOC */
>
>  int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask)
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 41dd01e8430c..2536d34df058 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4322,7 +4322,7 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
>                 if (want_init_on_free() || want_init_on_alloc(flags))
>                         memset((void *)p + size, 0, old_size - size);
>                 vm->requested_size = size;
> -               kasan_poison_vmalloc(p + size, old_size - size);
> +               kasan_vrealloc(p, old_size, size);
>                 return (void *)p;
>         }
>
> @@ -4330,16 +4330,13 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
>          * We already have the bytes available in the allocation; use them.
>          */
>         if (size <= alloced_size) {
> -               kasan_unpoison_vmalloc(p + old_size, size - old_size,
> -                                      KASAN_VMALLOC_PROT_NORMAL |
> -                                      KASAN_VMALLOC_VM_ALLOC |
> -                                      KASAN_VMALLOC_KEEP_TAG);
>                 /*
>                  * No need to zero memory here, as unused memory will have
>                  * already been zeroed at initial allocation time or during
>                  * realloc shrink time.
>                  */
>                 vm->requested_size = size;
> +               kasan_vrealloc(p, old_size, size);
>                 return (void *)p;
>         }
>
> --
> 2.52.0
>

With the change mentioned above:

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>

Thank you!


* Re: [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc()
  2026-01-15  3:56   ` Andrey Konovalov
@ 2026-01-16 13:26     ` Andrey Ryabinin
  2026-01-17  1:16       ` Andrey Konovalov
  0 siblings, 1 reply; 8+ messages in thread
From: Andrey Ryabinin @ 2026-01-16 13:26 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrew Morton, Maciej Żenczykowski, Maciej Wieczor-Retman,
	Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Uladzislau Rezki, linux-kernel, linux-mm, joonki.min, stable

On 1/15/26 4:56 AM, Andrey Konovalov wrote:
> On Tue, Jan 13, 2026 at 8:16 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:

>> ---
>>  include/linux/kasan.h |  6 ++++++
>>  mm/kasan/shadow.c     | 24 ++++++++++++++++++++++++
>>  mm/vmalloc.c          |  7 ++-----
>>  3 files changed, 32 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
>> index 9c6ac4b62eb9..ff27712dd3c8 100644
>> --- a/include/linux/kasan.h
>> +++ b/include/linux/kasan.h
>> @@ -641,6 +641,9 @@ kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
>>                 __kasan_unpoison_vmap_areas(vms, nr_vms, flags);
>>  }
>>
>> +void kasan_vrealloc(const void *start, unsigned long old_size,
>> +               unsigned long new_size);
>> +
>>  #else /* CONFIG_KASAN_VMALLOC */
>>
>>  static inline void kasan_populate_early_vm_area_shadow(void *start,
>> @@ -670,6 +673,9 @@ kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
>>                           kasan_vmalloc_flags_t flags)
>>  { }
>>
>> +static inline void kasan_vrealloc(const void *start, unsigned long old_size,
>> +                               unsigned long new_size) { }
>> +
>>  #endif /* CONFIG_KASAN_VMALLOC */
>>
>>  #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
>> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
>> index 32fbdf759ea2..e9b6b2d8e651 100644
>> --- a/mm/kasan/shadow.c
>> +++ b/mm/kasan/shadow.c
>> @@ -651,6 +651,30 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
>>         kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
>>  }
>>
>> +void kasan_vrealloc(const void *addr, unsigned long old_size,
>> +               unsigned long new_size)
>> +{
>> +       if (!kasan_enabled())
>> +               return;
> 
> Please move this check to include/linux/kasan.h and add
> __kasan_vrealloc, similar to other hooks.
> 
> Otherwise, these kasan_enabled() checks eventually start creeping into
> lower-level KASAN functions, and this makes the logic hard to follow.
> We recently cleaned up most of these checks.
> 

So something like below, I guess.
I think this would actually have the opposite effect and make the code harder to follow.
Introducing an extra wrapper adds another layer of indirection and more boilerplate, which
makes the control flow less obvious and the code harder to navigate and grep.

And what's the benefit here? I don't clearly see it.

---
 include/linux/kasan.h | 10 +++++++++-
 mm/kasan/shadow.c     |  5 +----
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index ff27712dd3c8..338a1921a50a 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -641,9 +641,17 @@ kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
 		__kasan_unpoison_vmap_areas(vms, nr_vms, flags);
 }
 
-void kasan_vrealloc(const void *start, unsigned long old_size,
+void __kasan_vrealloc(const void *start, unsigned long old_size,
 		unsigned long new_size);
 
+static __always_inline void kasan_vrealloc(const void *start,
+					unsigned long old_size,
+					unsigned long new_size)
+{
+	if (kasan_enabled())
+		__kasan_vrealloc(start, old_size, new_size);
+}
+
 #else /* CONFIG_KASAN_VMALLOC */
 
 static inline void kasan_populate_early_vm_area_shadow(void *start,
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index e9b6b2d8e651..29b0d0d38b40 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -651,12 +651,9 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
 	kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
 }
 
-void kasan_vrealloc(const void *addr, unsigned long old_size,
+void __kasan_vrealloc(const void *addr, unsigned long old_size,
 		unsigned long new_size)
 {
-	if (!kasan_enabled())
-		return;
-
 	if (new_size < old_size) {
 		kasan_poison_last_granule(addr, new_size);
 
-- 
2.52.0




* Re: [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc()
  2026-01-16 13:26     ` Andrey Ryabinin
@ 2026-01-17  1:16       ` Andrey Konovalov
  2026-01-17 17:08         ` Andrey Konovalov
  0 siblings, 1 reply; 8+ messages in thread
From: Andrey Konovalov @ 2026-01-17  1:16 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Maciej Żenczykowski, Maciej Wieczor-Retman,
	Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Uladzislau Rezki, linux-kernel, linux-mm, joonki.min, stable

On Fri, Jan 16, 2026 at 2:26 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>
> So something like below, I guess.

Yeah, looks good.

> I think this would actually have the opposite effect and make the code harder to follow.
> Introducing an extra wrapper adds another layer of indirection and more boilerplate, which
> makes the control flow less obvious and the code harder to navigate and grep.
>
> And what's the benefit here? I don't clearly see it.

One functional benefit is that when HW_TAGS mode is enabled in .config but
disabled via the command line, we avoid a function call into the KASAN
runtime.

From the readability perspective, what we had before the recent
clean-up was an assortment of kasan_enabled/kasan_arch_ready checks in
lower-level KASAN functions, which made it hard to figure out what
actually happens when KASAN is not enabled. And these high-level
checks make it more clear. At least in my opinion.


>
> ---
>  include/linux/kasan.h | 10 +++++++++-
>  mm/kasan/shadow.c     |  5 +----
>  2 files changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index ff27712dd3c8..338a1921a50a 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -641,9 +641,17 @@ kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
>                 __kasan_unpoison_vmap_areas(vms, nr_vms, flags);
>  }
>
> -void kasan_vrealloc(const void *start, unsigned long old_size,
> +void __kasan_vrealloc(const void *start, unsigned long old_size,
>                 unsigned long new_size);
>
> +static __always_inline void kasan_vrealloc(const void *start,
> +                                       unsigned long old_size,
> +                                       unsigned long new_size)
> +{
> +       if (kasan_enabled())
> +               __kasan_vrealloc(start, old_size, new_size);
> +}
> +
>  #else /* CONFIG_KASAN_VMALLOC */
>
>  static inline void kasan_populate_early_vm_area_shadow(void *start,
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index e9b6b2d8e651..29b0d0d38b40 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -651,12 +651,9 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
>         kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
>  }
>
> -void kasan_vrealloc(const void *addr, unsigned long old_size,
> +void __kasan_vrealloc(const void *addr, unsigned long old_size,
>                 unsigned long new_size)
>  {
> -       if (!kasan_enabled())
> -               return;
> -
>         if (new_size < old_size) {
>                 kasan_poison_last_granule(addr, new_size);
>
> --
> 2.52.0
>
>


* Re: [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc()
  2026-01-17  1:16       ` Andrey Konovalov
@ 2026-01-17 17:08         ` Andrey Konovalov
  2026-01-19  0:48           ` Andrew Morton
  0 siblings, 1 reply; 8+ messages in thread
From: Andrey Konovalov @ 2026-01-17 17:08 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Andrew Morton, Maciej Żenczykowski, Maciej Wieczor-Retman,
	Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Uladzislau Rezki, linux-kernel, linux-mm, joonki.min, stable

On Sat, Jan 17, 2026 at 2:16 AM Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Fri, Jan 16, 2026 at 2:26 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
> >
> > So something like below, I guess.
>
> Yeah, looks good.
>
> > I think this would actually have the opposite effect and make the code harder to follow.
> > Introducing an extra wrapper adds another layer of indirection and more boilerplate, which
> > makes the control flow less obvious and the code harder to navigate and grep.
> >
> > And what's the benefit here? I don't clearly see it.
>
> One functional benefit is when HW_TAGS mode enabled in .config but
> disabled via command-line, we avoid a function call into KASAN
> runtime.

Ah, and I just realized that kasan_vrealloc should go into common.c -
we also need it for HW_TAGS.


>
> From the readability perspective, what we had before the recent
> clean-up was an assortment of kasan_enabled/kasan_arch_ready checks in
> lower-level KASAN functions, which made it hard to figure out what
> actually happens when KASAN is not enabled. And these high-level
> checks make it more clear. At least in my opinion.


* Re: [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc()
  2026-01-17 17:08         ` Andrey Konovalov
@ 2026-01-19  0:48           ` Andrew Morton
  2026-01-19 14:43             ` Andrey Ryabinin
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2026-01-19  0:48 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: Andrey Ryabinin, Maciej Żenczykowski, Maciej Wieczor-Retman,
	Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Uladzislau Rezki, linux-kernel, linux-mm, joonki.min, stable

On Sat, 17 Jan 2026 18:08:36 +0100 Andrey Konovalov <andreyknvl@gmail.com> wrote:

> On Sat, Jan 17, 2026 at 2:16 AM Andrey Konovalov <andreyknvl@gmail.com> wrote:
> >
> > On Fri, Jan 16, 2026 at 2:26 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
> > >
> > > So something like below, I guess.
> >
> > Yeah, looks good.
> >
> > > I think this would actually have the opposite effect and make the code harder to follow.
> > > Introducing an extra wrapper adds another layer of indirection and more boilerplate, which
> > > makes the control flow less obvious and the code harder to navigate and grep.
> > >
> > > And what's the benefit here? I don't clearly see it.
> >
> > One functional benefit is when HW_TAGS mode enabled in .config but
> > disabled via command-line, we avoid a function call into KASAN
> > runtime.
> 
> Ah, and I just realized that kasan_vrealloc should go into common.c -
> we also need it for HW_TAGS.

I think I'll send this cc:stable bugfix upstream as-is.

Can people please add these nice-to-have code-motion cleanup items to
their todo lists, to be attended to in the usual fashion?



* Re: [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc()
  2026-01-19  0:48           ` Andrew Morton
@ 2026-01-19 14:43             ` Andrey Ryabinin
  0 siblings, 0 replies; 8+ messages in thread
From: Andrey Ryabinin @ 2026-01-19 14:43 UTC (permalink / raw)
  To: Andrew Morton, Andrey Konovalov
  Cc: Maciej Żenczykowski, Maciej Wieczor-Retman,
	Alexander Potapenko, Dmitry Vyukov, Vincenzo Frascino, kasan-dev,
	Uladzislau Rezki, linux-kernel, linux-mm, joonki.min, stable



On 1/19/26 1:48 AM, Andrew Morton wrote:
> On Sat, 17 Jan 2026 18:08:36 +0100 Andrey Konovalov <andreyknvl@gmail.com> wrote:
> 
>> On Sat, Jan 17, 2026 at 2:16 AM Andrey Konovalov <andreyknvl@gmail.com> wrote:
>>>
>>> On Fri, Jan 16, 2026 at 2:26 PM Andrey Ryabinin <ryabinin.a.a@gmail.com> wrote:
>>>>
>>>> So something like below, I guess.
>>>
>>> Yeah, looks good.
>>>
>>>> I think this would actually have the opposite effect and make the code harder to follow.
>>>> Introducing an extra wrapper adds another layer of indirection and more boilerplate, which
>>>> makes the control flow less obvious and the code harder to navigate and grep.
>>>>
>>>> And what's the benefit here? I don't clearly see it.
>>>
>>> One functional benefit is when HW_TAGS mode enabled in .config but
>>> disabled via command-line, we avoid a function call into KASAN
>>> runtime.
>>
>> Ah, and I just realized that kasan_vrealloc should go into common.c -
>> we also need it for HW_TAGS.
> 
> I think I'll send this cc:stable bugfix upstream as-is.
> 

Please include the follow-up fix before sending. We have to move kasan_vrealloc()
to common.c, since shadow.c is not compiled for CONFIG_KASAN_HW_TAGS=y.
Without that fixup, CONFIG_KASAN_HW_TAGS=y builds will be broken.


end of thread, other threads:[~2026-01-19 14:44 UTC | newest]

Thread overview: 8+ messages:
     [not found] <CANP3RGeuRW53vukDy7WDO3FiVgu34-xVJYkfpm08oLO3odYFrA@mail.gmail.com>
2026-01-13 19:15 ` [PATCH 1/2] mm/kasan: Fix KASAN poisoning in vrealloc() Andrey Ryabinin
2026-01-14 12:17   ` Maciej Wieczor-Retman
2026-01-15  3:56   ` Andrey Konovalov
2026-01-16 13:26     ` Andrey Ryabinin
2026-01-17  1:16       ` Andrey Konovalov
2026-01-17 17:08         ` Andrey Konovalov
2026-01-19  0:48           ` Andrew Morton
2026-01-19 14:43             ` Andrey Ryabinin
