* [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align
  [not found] <20260105080230.13171-1-harry.yoo@oracle.com>
@ 2026-01-05  8:02 ` Harry Yoo
  2026-01-07 11:43   ` Vlastimil Babka
  2026-01-08 11:39   ` Alexander Potapenko
  0 siblings, 2 replies; 7+ messages in thread

From: Harry Yoo @ 2026-01-05  8:02 UTC (permalink / raw)
To: akpm, vbabka
Cc: andreyknvl, cl, dvyukov, glider, hannes, linux-mm, mhocko,
    muchun.song, rientjes, roman.gushchin, ryabinin.a.a, shakeel.butt,
    surenb, vincenzo.frascino, yeoreum.yun, harry.yoo, tytso,
    adilger.kernel, linux-ext4, linux-kernel, cgroups, hao.li, stable

When both KASAN and SLAB_STORE_USER are enabled, accesses to
struct kasan_alloc_meta fields can be misaligned on 64-bit architectures.
This occurs because orig_size is currently defined as unsigned int,
which only guarantees 4-byte alignment. When struct kasan_alloc_meta is
placed after orig_size, it may end up at a 4-byte boundary rather than
the required 8-byte boundary on 64-bit systems.

Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
are assumed to require 64-bit accesses to be 64-bit aligned.
See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
"ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.

Change orig_size from unsigned int to unsigned long to ensure proper
alignment for any subsequent metadata. This should not waste additional
memory because kmalloc objects are already aligned to at least
ARCH_KMALLOC_MINALIGN.
Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: stable@vger.kernel.org
Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
 mm/slub.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index ad71f01571f0..1c747435a6ab 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -857,7 +857,7 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
  * request size in the meta data area, for better debug and sanity check.
  */
 static inline void set_orig_size(struct kmem_cache *s,
-				void *object, unsigned int orig_size)
+				void *object, unsigned long orig_size)
 {
 	void *p = kasan_reset_tag(object);
 
@@ -867,10 +867,10 @@ static inline void set_orig_size(struct kmem_cache *s,
 	p += get_info_end(s);
 	p += sizeof(struct track) * 2;
 
-	*(unsigned int *)p = orig_size;
+	*(unsigned long *)p = orig_size;
 }
 
-static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
+static inline unsigned long get_orig_size(struct kmem_cache *s, void *object)
 {
 	void *p = kasan_reset_tag(object);
 
@@ -883,7 +883,7 @@ static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
 	p += get_info_end(s);
 	p += sizeof(struct track) * 2;
 
-	return *(unsigned int *)p;
+	return *(unsigned long *)p;
 }
 
 #ifdef CONFIG_SLUB_DEBUG
@@ -1198,7 +1198,7 @@ static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p)
 	off += 2 * sizeof(struct track);
 
 	if (slub_debug_orig_size(s))
-		off += sizeof(unsigned int);
+		off += sizeof(unsigned long);
 
 	off += kasan_metadata_size(s, false);
 
@@ -1394,7 +1394,7 @@ static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
 		off += 2 * sizeof(struct track);
 
 		if (s->flags & SLAB_KMALLOC)
-			off += sizeof(unsigned int);
+			off += sizeof(unsigned long);
 	}
 
 	off += kasan_metadata_size(s, false);
@@ -7949,7 +7949,7 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)
 		/* Save the original kmalloc request size */
 		if (flags & SLAB_KMALLOC)
-			size += sizeof(unsigned int);
+			size += sizeof(unsigned long);
 	}
 #endif
-- 
2.43.0

^ permalink raw reply related	[flat|nested] 7+ messages in thread
* Re: [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align
  2026-01-05  8:02 ` [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align Harry Yoo
@ 2026-01-07 11:43   ` Vlastimil Babka
  2026-01-08  7:12     ` Harry Yoo
  2026-01-08 11:39   ` Alexander Potapenko
  1 sibling, 1 reply; 7+ messages in thread

From: Vlastimil Babka @ 2026-01-07 11:43 UTC (permalink / raw)
To: Harry Yoo, akpm
Cc: andreyknvl, cl, dvyukov, glider, hannes, linux-mm, mhocko,
    muchun.song, rientjes, roman.gushchin, ryabinin.a.a, shakeel.butt,
    surenb, vincenzo.frascino, yeoreum.yun, tytso, adilger.kernel,
    linux-ext4, linux-kernel, cgroups, hao.li, stable

On 1/5/26 09:02, Harry Yoo wrote:
> When both KASAN and SLAB_STORE_USER are enabled, accesses to
> struct kasan_alloc_meta fields can be misaligned on 64-bit architectures.
> This occurs because orig_size is currently defined as unsigned int,
> which only guarantees 4-byte alignment. When struct kasan_alloc_meta is
> placed after orig_size, it may end up at a 4-byte boundary rather than
> the required 8-byte boundary on 64-bit systems.

Oops.

> Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
> are assumed to require 64-bit accesses to be 64-bit aligned.
> See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
> "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
>
> Change orig_size from unsigned int to unsigned long to ensure proper
> alignment for any subsequent metadata. This should not waste additional
> memory because kmalloc objects are already aligned to at least
> ARCH_KMALLOC_MINALIGN.

I'll add:

Closes: https://lore.kernel.org/all/aPrLF0OUK651M4dk@hyeyoo/

since that's useful context and discussion.

> Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> Cc: stable@vger.kernel.org
> Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>

As the problem was introduced in 6.1, it doesn't seem urgent to push as a
6.19 rc fix, so keeping it as part of the series (where it's a necessary
prerequisite per the Closes: link above) and stable backporting later
seems indeed sufficient. Thanks.

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align
  2026-01-07 11:43 ` Vlastimil Babka
@ 2026-01-08  7:12   ` Harry Yoo
  0 siblings, 0 replies; 7+ messages in thread

From: Harry Yoo @ 2026-01-08  7:12 UTC (permalink / raw)
To: Vlastimil Babka
Cc: akpm, andreyknvl, cl, dvyukov, glider, hannes, linux-mm, mhocko,
    muchun.song, rientjes, roman.gushchin, ryabinin.a.a, shakeel.butt,
    surenb, vincenzo.frascino, yeoreum.yun, tytso, adilger.kernel,
    linux-ext4, linux-kernel, cgroups, hao.li, stable

On Wed, Jan 07, 2026 at 12:43:17PM +0100, Vlastimil Babka wrote:
> On 1/5/26 09:02, Harry Yoo wrote:
> > When both KASAN and SLAB_STORE_USER are enabled, accesses to
> > struct kasan_alloc_meta fields can be misaligned on 64-bit architectures.
> > This occurs because orig_size is currently defined as unsigned int,
> > which only guarantees 4-byte alignment. When struct kasan_alloc_meta is
> > placed after orig_size, it may end up at a 4-byte boundary rather than
> > the required 8-byte boundary on 64-bit systems.
>
> Oops.
>
> > Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
> > are assumed to require 64-bit accesses to be 64-bit aligned.
> > See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
> > "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
> >
> > Change orig_size from unsigned int to unsigned long to ensure proper
> > alignment for any subsequent metadata. This should not waste additional
> > memory because kmalloc objects are already aligned to at least
> > ARCH_KMALLOC_MINALIGN.
>
> I'll add:
>
> Closes: https://lore.kernel.org/all/aPrLF0OUK651M4dk@hyeyoo/
>
> since that's useful context and discussion.

Looks good to me.

> > Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> > Cc: stable@vger.kernel.org
> > Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
> > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
>
> As the problem was introduced in 6.1, doesn't seem urgent to push as 6.19 rc
> fix, so keeping it as part of the series

Yeah, no need to hurry.

> (where it's a necessary prerequisite per the Closes: link above)

Technically it's not a necessary prerequisite anymore because the series
doesn't unpoison slabobj_ext anymore, but later patches depend on it
because of the change in object layout.

> and stable backporting later seems indeed sufficient. Thanks.

backporting later sounds reasonable. Thanks!

-- 
Cheers,
Harry / Hyeonggon

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align
  2026-01-05  8:02 ` [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align Harry Yoo
  2026-01-07 11:43   ` Vlastimil Babka
@ 2026-01-08 11:39   ` Alexander Potapenko
  2026-01-09  1:52     ` Harry Yoo
  1 sibling, 1 reply; 7+ messages in thread

From: Alexander Potapenko @ 2026-01-08 11:39 UTC (permalink / raw)
To: Harry Yoo
Cc: akpm, vbabka, andreyknvl, cl, dvyukov, hannes, linux-mm, mhocko,
    muchun.song, rientjes, roman.gushchin, ryabinin.a.a, shakeel.butt,
    surenb, vincenzo.frascino, yeoreum.yun, tytso, adilger.kernel,
    linux-ext4, linux-kernel, cgroups, hao.li, stable

On Mon, Jan 5, 2026 at 9:02 AM Harry Yoo <harry.yoo@oracle.com> wrote:
>
> When both KASAN and SLAB_STORE_USER are enabled, accesses to
> struct kasan_alloc_meta fields can be misaligned on 64-bit architectures.
> This occurs because orig_size is currently defined as unsigned int,
> which only guarantees 4-byte alignment. When struct kasan_alloc_meta is
> placed after orig_size, it may end up at a 4-byte boundary rather than
> the required 8-byte boundary on 64-bit systems.
>
> Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
> are assumed to require 64-bit accesses to be 64-bit aligned.
> See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
> "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
>
> Change orig_size from unsigned int to unsigned long to ensure proper
> alignment for any subsequent metadata. This should not waste additional
> memory because kmalloc objects are already aligned to at least
> ARCH_KMALLOC_MINALIGN.
>
> Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> Cc: stable@vger.kernel.org
> Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> ---
>  mm/slub.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index ad71f01571f0..1c747435a6ab 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -857,7 +857,7 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
>   * request size in the meta data area, for better debug and sanity check.
>   */
>  static inline void set_orig_size(struct kmem_cache *s,
> -				void *object, unsigned int orig_size)
> +				void *object, unsigned long orig_size)
>  {
>  	void *p = kasan_reset_tag(object);
>
> @@ -867,10 +867,10 @@ static inline void set_orig_size(struct kmem_cache *s,
>  	p += get_info_end(s);
>  	p += sizeof(struct track) * 2;
>
> -	*(unsigned int *)p = orig_size;
> +	*(unsigned long *)p = orig_size;

Instead of calculating the offset of the original size in several
places, should we maybe introduce a function that returns a pointer to
it?

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align
  2026-01-08 11:39 ` Alexander Potapenko
@ 2026-01-09  1:52   ` Harry Yoo
  2026-01-09  9:30     ` Alexander Potapenko
  0 siblings, 1 reply; 7+ messages in thread

From: Harry Yoo @ 2026-01-09  1:52 UTC (permalink / raw)
To: Alexander Potapenko
Cc: akpm, vbabka, andreyknvl, cl, dvyukov, hannes, linux-mm, mhocko,
    muchun.song, rientjes, roman.gushchin, ryabinin.a.a, shakeel.butt,
    surenb, vincenzo.frascino, yeoreum.yun, tytso, adilger.kernel,
    linux-ext4, linux-kernel, cgroups, hao.li, stable

On Thu, Jan 08, 2026 at 12:39:22PM +0100, Alexander Potapenko wrote:
> On Mon, Jan 5, 2026 at 9:02 AM Harry Yoo <harry.yoo@oracle.com> wrote:
> >
> > When both KASAN and SLAB_STORE_USER are enabled, accesses to
> > struct kasan_alloc_meta fields can be misaligned on 64-bit architectures.
> > This occurs because orig_size is currently defined as unsigned int,
> > which only guarantees 4-byte alignment. When struct kasan_alloc_meta is
> > placed after orig_size, it may end up at a 4-byte boundary rather than
> > the required 8-byte boundary on 64-bit systems.
> >
> > Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
> > are assumed to require 64-bit accesses to be 64-bit aligned.
> > See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
> > "ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.
> >
> > Change orig_size from unsigned int to unsigned long to ensure proper
> > alignment for any subsequent metadata. This should not waste additional
> > memory because kmalloc objects are already aligned to at least
> > ARCH_KMALLOC_MINALIGN.
> >
> > Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> > Cc: stable@vger.kernel.org
> > Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
> > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> > ---
> >  mm/slub.c | 14 +++++++-------
> >  1 file changed, 7 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index ad71f01571f0..1c747435a6ab 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -857,7 +857,7 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
> >   * request size in the meta data area, for better debug and sanity check.
> >   */
> >  static inline void set_orig_size(struct kmem_cache *s,
> > -				void *object, unsigned int orig_size)
> > +				void *object, unsigned long orig_size)
> >  {
> >  	void *p = kasan_reset_tag(object);
> >
> > @@ -867,10 +867,10 @@ static inline void set_orig_size(struct kmem_cache *s,
> >  	p += get_info_end(s);
> >  	p += sizeof(struct track) * 2;
> >
> > -	*(unsigned int *)p = orig_size;
> > +	*(unsigned long *)p = orig_size;
>
> Instead of calculating the offset of the original size in several
> places, should we maybe introduce a function that returns a pointer to
> it?

Good point.

The calculation of various metadata offsets (including the original size)
is repeated in several places, and perhaps it's worth cleaning up,
something like this:

enum {
	FREE_POINTER_OFFSET,
	ALLOC_TRACK_OFFSET,
	FREE_TRACK_OFFSET,
	ORIG_SIZE_OFFSET,
	KASAN_ALLOC_META_OFFSET,
	OBJ_EXT_OFFSET,
	FINAL_ALIGNMENT_PADDING_OFFSET,
	...
};

orig_size = *(unsigned long *)get_metadata_ptr(p, ORIG_SIZE_OFFSET);
...

of course, perhaps as a follow-up rather than as part of this series.

-- 
Cheers,
Harry / Hyeonggon

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align
  2026-01-09  1:52 ` Harry Yoo
@ 2026-01-09  9:30   ` Alexander Potapenko
  2026-01-12  6:28     ` Harry Yoo
  0 siblings, 1 reply; 7+ messages in thread

From: Alexander Potapenko @ 2026-01-09  9:30 UTC (permalink / raw)
To: Harry Yoo
Cc: akpm, vbabka, andreyknvl, cl, dvyukov, hannes, linux-mm, mhocko,
    muchun.song, rientjes, roman.gushchin, ryabinin.a.a, shakeel.butt,
    surenb, vincenzo.frascino, yeoreum.yun, tytso, adilger.kernel,
    linux-ext4, linux-kernel, cgroups, hao.li, stable

> > Instead of calculating the offset of the original size in several
> > places, should we maybe introduce a function that returns a pointer to
> > it?
>
> Good point.
>
> The calculation of various metadata offsets (including the original size)
> is repeated in several places, and perhaps it's worth cleaning up,
> something like this:
>
> enum {
> 	FREE_POINTER_OFFSET,
> 	ALLOC_TRACK_OFFSET,
> 	FREE_TRACK_OFFSET,
> 	ORIG_SIZE_OFFSET,
> 	KASAN_ALLOC_META_OFFSET,
> 	OBJ_EXT_OFFSET,
> 	FINAL_ALIGNMENT_PADDING_OFFSET,
> 	...
> };
>
> orig_size = *(unsigned long *)get_metadata_ptr(p, ORIG_SIZE_OFFSET);

An alternative would be to declare a struct containing all the
metadata fields and use offsetof() (or simply do a cast and access the
fields via the struct pointer).

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH V5 1/8] mm/slab: use unsigned long for orig_size to ensure proper metadata align
  2026-01-09  9:30 ` Alexander Potapenko
@ 2026-01-12  6:28   ` Harry Yoo
  0 siblings, 0 replies; 7+ messages in thread

From: Harry Yoo @ 2026-01-12  6:28 UTC (permalink / raw)
To: Alexander Potapenko
Cc: akpm, vbabka, andreyknvl, cl, dvyukov, hannes, linux-mm, mhocko,
    muchun.song, rientjes, roman.gushchin, ryabinin.a.a, shakeel.butt,
    surenb, vincenzo.frascino, yeoreum.yun, tytso, adilger.kernel,
    linux-ext4, linux-kernel, cgroups, hao.li, stable

On Fri, Jan 09, 2026 at 10:30:47AM +0100, Alexander Potapenko wrote:
> > > Instead of calculating the offset of the original size in several
> > > places, should we maybe introduce a function that returns a pointer to
> > > it?
> >
> > Good point.
> >
> > The calculation of various metadata offsets (including the original size)
> > is repeated in several places, and perhaps it's worth cleaning up,
> > something like this:
> >
> > enum {
> > 	FREE_POINTER_OFFSET,
> > 	ALLOC_TRACK_OFFSET,
> > 	FREE_TRACK_OFFSET,
> > 	ORIG_SIZE_OFFSET,
> > 	KASAN_ALLOC_META_OFFSET,
> > 	OBJ_EXT_OFFSET,
> > 	FINAL_ALIGNMENT_PADDING_OFFSET,
> > 	...
> > };
> >
> > orig_size = *(unsigned long *)get_metadata_ptr(p, ORIG_SIZE_OFFSET);
>
> An alternative would be to declare a struct containing all the
> metadata fields and use offsetof() (or simply do a cast and access the
> fields via the struct pointer)

But considering that a cache may enable only a subset of those debugging
features, I'm not sure determining that offset for all caches at build
time is possible.

-- 
Cheers,
Harry / Hyeonggon

^ permalink raw reply	[flat|nested] 7+ messages in thread