* [PATCH v4 0/2] string: Add load_unaligned_zeropad() code path to sized_strscpy()

From: Peter Collingbourne
Date: 2025-03-29 0:03 UTC
To: Alexander Viro, Christian Brauner, Jan Kara, Andrew Morton, Kees Cook,
    Andy Shevchenko, Andrey Konovalov, Catalin Marinas, Mark Rutland
Cc: Peter Collingbourne, linux-fsdevel, linux-kernel, linux-hardening,
    linux-arm-kernel

This series fixes an issue where strscpy() would sometimes trigger a
false positive KASAN report with MTE.

v4:
- clarify commit message
- improve comment

v3:
- simplify test case

Peter Collingbourne (1):
  string: Add load_unaligned_zeropad() code path to sized_strscpy()

Vincenzo Frascino (1):
  kasan: Add strscpy() test to trigger tag fault on arm64

 lib/string.c            | 13 ++++++++++---
 mm/kasan/kasan_test_c.c | 16 ++++++++++++++++
 2 files changed, 26 insertions(+), 3 deletions(-)

--
2.49.0.472.ge94155a9ec-goog
* [PATCH v4 1/2] string: Add load_unaligned_zeropad() code path to sized_strscpy()

From: Peter Collingbourne
Date: 2025-03-29 0:03 UTC
To: Alexander Viro, Christian Brauner, Jan Kara, Andrew Morton, Kees Cook,
    Andy Shevchenko, Andrey Konovalov, Catalin Marinas, Mark Rutland
Cc: Peter Collingbourne, linux-fsdevel, linux-kernel, linux-hardening,
    linux-arm-kernel, stable

The call to read_word_at_a_time() in sized_strscpy() is problematic
with MTE because it may trigger a tag check fault when reading across a
tag granule (16 bytes) boundary. To make this code MTE compatible,
let's start using load_unaligned_zeropad() on architectures where it is
available (i.e. architectures that define CONFIG_DCACHE_WORD_ACCESS).
Because load_unaligned_zeropad() takes care of page boundaries as well
as tag granule boundaries, also disable the code preventing crossing
page boundaries when using load_unaligned_zeropad().

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/If4b22e43b5a4ca49726b4bf98ada827fdf755548
Fixes: 94ab5b61ee16 ("kasan, arm64: enable CONFIG_KASAN_HW_TAGS")
Cc: stable@vger.kernel.org
---
v2:
- new approach

 lib/string.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/lib/string.c b/lib/string.c
index eb4486ed40d25..b632c71df1a50 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -119,6 +119,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
 		return -E2BIG;
 
+#ifndef CONFIG_DCACHE_WORD_ACCESS
 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	/*
 	 * If src is unaligned, don't cross a page boundary,
@@ -133,12 +134,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	/* If src or dest is unaligned, don't do word-at-a-time. */
 	if (((long) dest | (long) src) & (sizeof(long) - 1))
 		max = 0;
+#endif
 #endif
 
 	/*
-	 * read_word_at_a_time() below may read uninitialized bytes after the
-	 * trailing zero and use them in comparisons. Disable this optimization
-	 * under KMSAN to prevent false positive reports.
+	 * load_unaligned_zeropad() or read_word_at_a_time() below may read
+	 * uninitialized bytes after the trailing zero and use them in
+	 * comparisons. Disable this optimization under KMSAN to prevent
+	 * false positive reports.
 	 */
 	if (IS_ENABLED(CONFIG_KMSAN))
 		max = 0;
@@ -146,7 +149,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	while (max >= sizeof(unsigned long)) {
 		unsigned long c, data;
 
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		c = load_unaligned_zeropad(src+res);
+#else
 		c = read_word_at_a_time(src+res);
+#endif
 		if (has_zero(c, &data, &constants)) {
 			data = prep_zero_mask(c, data, &constants);
 			data = create_zero_mask(data);

--
2.49.0.472.ge94155a9ec-goog
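[For readers outside the kernel tree: the has_zero()/create_zero_mask() calls in the hunk above are the classic word-at-a-time zero-byte detection trick. Below is a minimal userspace sketch of the idea, assuming a little-endian 64-bit target; the kernel's include/asm-generic/word-at-a-time.h abstracts the constants and the index calculation per architecture, so names like `has_zero_byte` and `first_zero_index` here are illustrative, not kernel APIs.]

```c
#include <stdint.h>
#include <string.h>

#define WORD_ONES  0x0101010101010101ULL
#define WORD_HIGHS 0x8080808080808080ULL

/* Nonzero iff some byte of c is 0x00 (classic bit trick). */
static uint64_t has_zero_byte(uint64_t c)
{
	return (c - WORD_ONES) & ~c & WORD_HIGHS;
}

/* Index (0..7) of the first zero byte, little-endian.
 * Caller must ensure a zero byte exists (has_zero_byte(c) != 0). */
static unsigned first_zero_index(uint64_t c)
{
	uint64_t bits = has_zero_byte(c);
	unsigned i = 0;

	while (!(bits & 0x80)) {
		bits >>= 8;
		i++;
	}
	return i;
}

/* Load 8 bytes from a buffer; memcpy avoids alignment UB in userspace. */
static uint64_t load_word(const char *p)
{
	uint64_t w;

	memcpy(&w, p, sizeof(w));
	return w;
}
```

The kernel's find_zero() maps the mask back to a byte index in much the same way as `first_zero_index` here, which is how sized_strscpy() knows how many bytes precede the terminator in the current word.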
* Re: [PATCH v4 1/2] string: Add load_unaligned_zeropad() code path to sized_strscpy()

From: Catalin Marinas
Date: 2025-04-02 20:10 UTC
To: Peter Collingbourne
Cc: Alexander Viro, Christian Brauner, Jan Kara, Andrew Morton, Kees Cook,
    Andy Shevchenko, Andrey Konovalov, Mark Rutland, linux-fsdevel,
    linux-kernel, linux-hardening, linux-arm-kernel, stable

On Fri, Mar 28, 2025 at 05:03:36PM -0700, Peter Collingbourne wrote:
> diff --git a/lib/string.c b/lib/string.c
> index eb4486ed40d25..b632c71df1a50 100644
> --- a/lib/string.c
> +++ b/lib/string.c
> @@ -119,6 +119,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
>  	if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
>  		return -E2BIG;
>  
> +#ifndef CONFIG_DCACHE_WORD_ACCESS
>  #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>  	/*
>  	 * If src is unaligned, don't cross a page boundary,
> @@ -133,12 +134,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
>  	/* If src or dest is unaligned, don't do word-at-a-time. */
>  	if (((long) dest | (long) src) & (sizeof(long) - 1))
>  		max = 0;
> +#endif
>  #endif
>  
>  	/*
> -	 * read_word_at_a_time() below may read uninitialized bytes after the
> -	 * trailing zero and use them in comparisons. Disable this optimization
> -	 * under KMSAN to prevent false positive reports.
> +	 * load_unaligned_zeropad() or read_word_at_a_time() below may read
> +	 * uninitialized bytes after the trailing zero and use them in
> +	 * comparisons. Disable this optimization under KMSAN to prevent
> +	 * false positive reports.
>  	 */
>  	if (IS_ENABLED(CONFIG_KMSAN))
>  		max = 0;
> @@ -146,7 +149,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
>  	while (max >= sizeof(unsigned long)) {
>  		unsigned long c, data;
>  
> +#ifdef CONFIG_DCACHE_WORD_ACCESS
> +		c = load_unaligned_zeropad(src+res);
> +#else
>  		c = read_word_at_a_time(src+res);
> +#endif
>  		if (has_zero(c, &data, &constants)) {
>  			data = prep_zero_mask(c, data, &constants);
>  			data = create_zero_mask(data);

Kees mentioned the scenario where this crosses the page boundary and we
pad the source with zeros. It's probably fine but there are 70+ cases
where the strscpy() return value is checked, I only looked at a couple.

Could we at least preserve the behaviour with regards to page boundaries
and keep the existing 'max' limiting logic? If I read the code
correctly, a fall back to reading one byte at a time from an unmapped
page would panic. We also get this behaviour if src[0] is reading from
an invalid address, though for arm64 the panic would be in
ex_handler_load_unaligned_zeropad() when count >= 8.

Reading across tag granule (but not across page boundary) and causing a
tag check fault would result in padding but we can live with this and
only architectures that do MTE-style tag checking would get the new
behaviour.

What I haven't checked is whether a tag check fault in
ex_handler_load_unaligned_zeropad() would confuse the KASAN logic for
MTE (it would be a second tag check fault while processing the first).
At a quick look, it seems ok but it might be worth checking.

--
Catalin
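[The 'max' limiting logic Catalin wants preserved is the clamp that stops an unaligned word-at-a-time reader from crossing into a possibly unmapped page. A userspace sketch of that clamp follows; `PAGE_SIZE` is hardcoded for illustration and `limit_to_page` is a hypothetical helper name, not a kernel function.]

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL	/* assumption; the kernel uses the arch's page size */

/* Clamp max so that word-sized reads starting at src never cross a
 * page boundary when src is not word-aligned. */
static size_t limit_to_page(uintptr_t src, size_t max)
{
	if (src & (sizeof(unsigned long) - 1)) {	/* unaligned source */
		size_t limit = PAGE_SIZE - (src & (PAGE_SIZE - 1));

		if (limit < max)
			max = limit;
	}
	return max;
}
```

With this clamp in place, the word loop stops short of the page edge and the byte-at-a-time tail loop takes over, which is exactly the behaviour the patch disables when load_unaligned_zeropad() is available.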
* Re: [PATCH v4 1/2] string: Add load_unaligned_zeropad() code path to sized_strscpy()

From: Peter Collingbourne
Date: 2025-04-03 0:08 UTC
To: Catalin Marinas
Cc: Alexander Viro, Christian Brauner, Jan Kara, Andrew Morton, Kees Cook,
    Andy Shevchenko, Andrey Konovalov, Mark Rutland, linux-fsdevel,
    linux-kernel, linux-hardening, linux-arm-kernel, stable

On Wed, Apr 2, 2025 at 1:10 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Fri, Mar 28, 2025 at 05:03:36PM -0700, Peter Collingbourne wrote:
> > diff --git a/lib/string.c b/lib/string.c
> > [...]
>
> Kees mentioned the scenario where this crosses the page boundary and we
> pad the source with zeros. It's probably fine but there are 70+ cases
> where the strscpy() return value is checked, I only looked at a couple.

The return value is the same with/without the patch, it's the number
of bytes copied before the null terminator (i.e. not including the
extra nulls now written).

> Could we at least preserve the behaviour with regards to page boundaries
> and keep the existing 'max' limiting logic? If I read the code
> correctly, a fall back to reading one byte at a time from an unmapped
> page would panic. We also get this behaviour if src[0] is reading from
> an invalid address, though for arm64 the panic would be in
> ex_handler_load_unaligned_zeropad() when count >= 8.

So do you think that the code should continue to panic if the source
string is unterminated because of a page boundary? I don't have a
strong opinion but maybe that's something that we should only do if
some error checking option is turned on?

> Reading across tag granule (but not across page boundary) and causing a
> tag check fault would result in padding but we can live with this and
> only architectures that do MTE-style tag checking would get the new
> behaviour.

By "padding" do you mean the extra (up to sizeof(unsigned long)) nulls
now written to the destination? It seems unlikely that code would
deliberately depend on the nulls not being written: the number of
nulls written is not part of the documented interface contract and
will vary right now depending on how close the source string is to a
page boundary. If code is accidentally depending on nulls not being
written, that's almost certainly a bug anyway (because of the page
boundary thing) and we should fix it if discovered by this change.

> What I haven't checked is whether a tag check fault in
> ex_handler_load_unaligned_zeropad() would confuse the KASAN logic for
> MTE (it would be a second tag check fault while processing the first).
> At a quick look, it seems ok but it might be worth checking.

Yes, that works, and I added a test case for that in v5. The stack
trace looks like this:

[   21.969736] Call trace:
[   21.969739]  show_stack+0x18/0x24 (C)
[   21.969756]  __dump_stack+0x28/0x38
[   21.969764]  dump_stack_lvl+0x54/0x6c
[   21.969770]  print_address_description+0x7c/0x274
[   21.969780]  print_report+0x90/0xe8
[   21.969789]  kasan_report+0xf0/0x150
[   21.969799]  __do_kernel_fault+0x5c/0x1cc
[   21.969808]  do_bad_area+0x30/0xec
[   21.969816]  do_tag_check_fault+0x20/0x30
[   21.969824]  do_mem_abort+0x3c/0x8c
[   21.969832]  el1_abort+0x3c/0x5c
[   21.969840]  el1h_64_sync_handler+0x50/0xcc
[   21.969847]  el1h_64_sync+0x6c/0x70
[   21.969854]  fixup_exception+0xb0/0xe4 (P)
[   21.969865]  __do_kernel_fault+0x80/0x1cc
[   21.969873]  do_bad_area+0x30/0xec
[   21.969881]  do_tag_check_fault+0x20/0x30
[   21.969889]  do_mem_abort+0x3c/0x8c
[   21.969896]  el1_abort+0x3c/0x5c
[   21.969905]  el1h_64_sync_handler+0x50/0xcc
[   21.969912]  el1h_64_sync+0x6c/0x70
[   21.969917]  sized_strscpy+0x30/0x114 (P)
[   21.969929]  kunit_try_run_case+0x64/0x160
[   21.969939]  kunit_generic_run_threadfn_adapter+0x28/0x4c
[   21.969950]  kthread+0x1c4/0x208
[   21.969956]  ret_from_fork+0x10/0x20

Peter
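[Peter's point about the return contract can be checked against a plain byte-at-a-time model of strscpy(). This is a simplified sketch for illustration only, not the kernel implementation; `-7` stands in for the kernel's -E2BIG, and `copy_len` is a hypothetical convenience wrapper.]

```c
#include <stddef.h>

/* Byte-at-a-time model of strscpy(): copy until '\0' or until count is
 * exhausted, returning the number of bytes copied before the
 * terminator, or -7 (standing in for -E2BIG) on truncation. */
static ptrdiff_t strscpy_model(char *dest, const char *src, size_t count)
{
	size_t res = 0;

	if (count == 0)
		return -7;
	while (res < count) {
		dest[res] = src[res];
		if (!src[res])
			return res;	/* NUL not counted */
		res++;
	}
	dest[count - 1] = '\0';		/* truncated: terminate, report error */
	return -7;
}

/* Convenience wrapper for quick checks against a scratch buffer. */
static ptrdiff_t copy_len(const char *src, size_t count)
{
	char buf[64];

	return strscpy_model(buf, src, count);
}
```

Note that the return value depends only on where the terminator sits relative to `count`; how many trailing nulls an implementation happens to write past it never enters the calculation, which is why the patch leaves all 70+ return-value checks unaffected.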
* Re: [PATCH v4 1/2] string: Add load_unaligned_zeropad() code path to sized_strscpy()

From: Catalin Marinas
Date: 2025-04-03 9:46 UTC
To: Peter Collingbourne
Cc: Alexander Viro, Christian Brauner, Jan Kara, Andrew Morton, Kees Cook,
    Andy Shevchenko, Andrey Konovalov, Mark Rutland, linux-fsdevel,
    linux-kernel, linux-hardening, linux-arm-kernel, stable

On Wed, Apr 02, 2025 at 05:08:51PM -0700, Peter Collingbourne wrote:
> On Wed, Apr 2, 2025 at 1:10 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Fri, Mar 28, 2025 at 05:03:36PM -0700, Peter Collingbourne wrote:
> > > diff --git a/lib/string.c b/lib/string.c
> > > [...]
> >
> > Kees mentioned the scenario where this crosses the page boundary and we
> > pad the source with zeros. It's probably fine but there are 70+ cases
> > where the strscpy() return value is checked, I only looked at a couple.
>
> The return value is the same with/without the patch, it's the number
> of bytes copied before the null terminator (i.e. not including the
> extra nulls now written).

I was thinking of the -E2BIG return but you are right, the patch
wouldn't change this. If, for example, you read 8 bytes across a page
boundary and it faults, load_unaligned_zeropad() returns fewer
characters copied, implying the source was null-terminated.
read_word_at_a_time(), OTOH, panics in the next byte-at-a-time loop.
But it wouldn't return -E2BIG either, so it doesn't matter for the
caller.

> > Could we at least preserve the behaviour with regards to page boundaries
> > and keep the existing 'max' limiting logic? If I read the code
> > correctly, a fall back to reading one byte at a time from an unmapped
> > page would panic. We also get this behaviour if src[0] is reading from
> > an invalid address, though for arm64 the panic would be in
> > ex_handler_load_unaligned_zeropad() when count >= 8.
>
> So do you think that the code should continue to panic if the source
> string is unterminated because of a page boundary? I don't have a
> strong opinion but maybe that's something that we should only do if
> some error checking option is turned on?

It's mostly about keeping the current behaviour w.r.t. page boundaries.
Not a strong opinion either. The change would be to not read across
page boundaries.

> > Reading across tag granule (but not across page boundary) and causing a
> > tag check fault would result in padding but we can live with this and
> > only architectures that do MTE-style tag checking would get the new
> > behaviour.
>
> By "padding" do you mean the extra (up to sizeof(unsigned long)) nulls
> now written to the destination?

No, I meant the padding of the source when a fault occurs. The write to
the destination would only be a single '\0' byte. It's the destination
safe termination vs. panic above.

> > What I haven't checked is whether a tag check fault in
> > ex_handler_load_unaligned_zeropad() would confuse the KASAN logic for
> > MTE (it would be a second tag check fault while processing the first).
> > At a quick look, it seems ok but it might be worth checking.
>
> Yes, that works, and I added a test case for that in v5. The stack
> trace looks like this:

Thanks for checking.

--
Catalin
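[The "padding of the source" Catalin describes is the defining behaviour of load_unaligned_zeropad(): when a word read extends past the last accessible byte, the fixup handler replaces the inaccessible tail with zeros, so the string appears NUL-terminated at the boundary. A userspace simulation of that observable effect follows; the real primitive performs a full word load and relies on an exception fixup, while here `readable` explicitly stands in for where the page or tag granule ends. Little-endian byte order is assumed.]

```c
#include <stdint.h>
#include <stddef.h>

/* Simulate a word load where only the first 'readable' bytes at p are
 * accessible: accessible bytes are copied in, the rest stay zero,
 * mimicking what the reader of load_unaligned_zeropad() observes after
 * the fixup handler runs. */
static uint64_t load_zeropad_sim(const unsigned char *p, size_t readable)
{
	uint64_t w = 0;
	size_t i;

	for (i = 0; i < sizeof(w) && i < readable; i++)
		w |= (uint64_t)p[i] << (8 * i);
	return w;
}
```

An unterminated source that runs into the boundary therefore looks terminated to the word loop, which is why the destination gets a safe '\0' instead of a panic, and why only the source, not the return value, is "padded".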
* Re: [PATCH v4 1/2] string: Add load_unaligned_zeropad() code path to sized_strscpy()

From: David Laight
Date: 2025-04-03 21:15 UTC
To: Peter Collingbourne
Cc: Catalin Marinas, Alexander Viro, Christian Brauner, Jan Kara,
    Andrew Morton, Kees Cook, Andy Shevchenko, Andrey Konovalov,
    Mark Rutland, linux-fsdevel, linux-kernel, linux-hardening,
    linux-arm-kernel, stable

On Wed, 2 Apr 2025 17:08:51 -0700 Peter Collingbourne <pcc@google.com> wrote:
> On Wed, Apr 2, 2025 at 1:10 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
..
> > Reading across tag granule (but not across page boundary) and causing a
> > tag check fault would result in padding but we can live with this and
> > only architectures that do MTE-style tag checking would get the new
> > behaviour.
>
> By "padding" do you mean the extra (up to sizeof(unsigned long)) nulls
> now written to the destination? It seems unlikely that code would
> deliberately depend on the nulls not being written, the number of
> nulls written is not part of the documented interface contract and
> will vary right now depending on how close the source string is to a
> page boundary. If code is accidentally depending on nulls not being
> written, that's almost certainly a bug anyway (because of the page
> boundary thing) and we should fix it if discovered by this change.

There was an issue with one of the copy routines writing beyond the
expected point in a destination buffer. I can't remember the full
details, but it would match strscpy().

	David
* [PATCH v4 2/2] kasan: Add strscpy() test to trigger tag fault on arm64

From: Peter Collingbourne
Date: 2025-03-29 0:03 UTC
To: Alexander Viro, Christian Brauner, Jan Kara, Andrew Morton, Kees Cook,
    Andy Shevchenko, Andrey Konovalov, Catalin Marinas, Mark Rutland
Cc: Vincenzo Frascino, linux-fsdevel, linux-kernel, linux-hardening,
    linux-arm-kernel, Will Deacon, Peter Collingbourne

From: Vincenzo Frascino <vincenzo.frascino@arm.com>

When we invoke strscpy() with a maximum size of N bytes, it assumes
that:
- It can always read N bytes from the source.
- It always writes N bytes (zero-padded) to the destination.

On aarch64 with Memory Tagging Extension enabled, if we pass an N that
is bigger than the source buffer, it would previously trigger an MTE
fault.

Implement a KASAN KUnit test that triggers the issue with the previous
implementation of read_word_at_a_time() on aarch64 with MTE enabled.

Cc: Will Deacon <will@kernel.org>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Co-developed-by: Peter Collingbourne <pcc@google.com>
Signed-off-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Link: https://linux-review.googlesource.com/id/If88e396b9e7c058c1a4b5a252274120e77b1898a
---
v4:
- clarify commit message
- improve comment

v3:
- simplify test case

v2:
- rebased
- fixed test failure

 mm/kasan/kasan_test_c.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
index 59d673400085f..655356df71fe6 100644
--- a/mm/kasan/kasan_test_c.c
+++ b/mm/kasan/kasan_test_c.c
@@ -1570,6 +1570,7 @@ static void kasan_memcmp(struct kunit *test)
 static void kasan_strings(struct kunit *test)
 {
 	char *ptr;
+	char *src;
 	size_t size = 24;
 
 	/*
@@ -1581,6 +1582,21 @@ static void kasan_strings(struct kunit *test)
 	ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 
+	src = kmalloc(KASAN_GRANULE_SIZE, GFP_KERNEL | __GFP_ZERO);
+	strscpy(src, "f0cacc1a0000000", KASAN_GRANULE_SIZE);
+
+	/*
+	 * Make sure that strscpy() does not trigger KASAN if it overreads into
+	 * poisoned memory.
+	 *
+	 * The expected size does not include the terminator '\0'
+	 * so it is (KASAN_GRANULE_SIZE - 2) ==
+	 * KASAN_GRANULE_SIZE - ("initial removed character" + "\0").
+	 */
+	KUNIT_EXPECT_EQ(test, KASAN_GRANULE_SIZE - 2,
+			strscpy(ptr, src + 1, KASAN_GRANULE_SIZE));
+
+	kfree(src);
 	kfree(ptr);
 
 	/*

--
2.49.0.472.ge94155a9ec-goog
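[The expected value in the KUNIT_EXPECT_EQ() above can be sanity-checked with a byte-at-a-time model of strscpy() in plain userspace. KASAN_GRANULE_SIZE is assumed to be 16, as with arm64 MTE; the tag-check fault itself, which is the real subject of the kernel test, cannot be reproduced outside the kernel, so this sketch only verifies the length arithmetic.]

```c
#include <stddef.h>

#define KASAN_GRANULE_SIZE 16	/* assumption: arm64 MTE granule */

/* Simplified byte-at-a-time strscpy() model (see patch 1 discussion);
 * -7 stands in for -E2BIG. */
static ptrdiff_t strscpy_model(char *dest, const char *src, size_t count)
{
	size_t res;

	if (count == 0)
		return -7;
	for (res = 0; res < count; res++) {
		dest[res] = src[res];
		if (!src[res])
			return res;
	}
	dest[count - 1] = '\0';
	return -7;
}

/* Reproduce the test's arithmetic: a 15-character string plus '\0'
 * exactly fills the 16-byte granule; copying from src + 1 skips one
 * character, leaving 14 bytes before the terminator. */
static ptrdiff_t kasan_strings_expectation(void)
{
	char src[KASAN_GRANULE_SIZE] = {0};
	char dst[KASAN_GRANULE_SIZE];

	strscpy_model(src, "f0cacc1a0000000", KASAN_GRANULE_SIZE);
	return strscpy_model(dst, src + 1, KASAN_GRANULE_SIZE);
}
```

This matches the comment in the diff: the expected return is KASAN_GRANULE_SIZE - ("initial removed character" + "\0") = 14.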