* [PATCH 0/2] riscv: Improve KASAN coverage to fix unit tests
From: Samuel Holland @ 2024-08-01 3:36 UTC (permalink / raw)
To: Palmer Dabbelt, linux-riscv
Cc: Yury Norov, Rasmus Villemoes, linux-kernel, Samuel Holland
This series fixes two areas where uninstrumented assembly routines
caused gaps in KASAN coverage on RISC-V, which were caught by KUnit
tests. The KASAN KUnit test suite passes after applying this series.
This series fixes the following test failures:
# kasan_strings: EXPECTATION FAILED at mm/kasan/kasan_test.c:1520
KASAN failure expected in "kasan_int_result = strcmp(ptr, "2")", but none occurred
# kasan_strings: EXPECTATION FAILED at mm/kasan/kasan_test.c:1524
KASAN failure expected in "kasan_int_result = strlen(ptr)", but none occurred
not ok 60 kasan_strings
# kasan_bitops_generic: EXPECTATION FAILED at mm/kasan/kasan_test.c:1531
KASAN failure expected in "set_bit(nr, addr)", but none occurred
# kasan_bitops_generic: EXPECTATION FAILED at mm/kasan/kasan_test.c:1533
KASAN failure expected in "clear_bit(nr, addr)", but none occurred
# kasan_bitops_generic: EXPECTATION FAILED at mm/kasan/kasan_test.c:1535
KASAN failure expected in "clear_bit_unlock(nr, addr)", but none occurred
# kasan_bitops_generic: EXPECTATION FAILED at mm/kasan/kasan_test.c:1536
KASAN failure expected in "__clear_bit_unlock(nr, addr)", but none occurred
# kasan_bitops_generic: EXPECTATION FAILED at mm/kasan/kasan_test.c:1537
KASAN failure expected in "change_bit(nr, addr)", but none occurred
# kasan_bitops_generic: EXPECTATION FAILED at mm/kasan/kasan_test.c:1543
KASAN failure expected in "test_and_set_bit(nr, addr)", but none occurred
# kasan_bitops_generic: EXPECTATION FAILED at mm/kasan/kasan_test.c:1545
KASAN failure expected in "test_and_set_bit_lock(nr, addr)", but none occurred
# kasan_bitops_generic: EXPECTATION FAILED at mm/kasan/kasan_test.c:1546
KASAN failure expected in "test_and_clear_bit(nr, addr)", but none occurred
# kasan_bitops_generic: EXPECTATION FAILED at mm/kasan/kasan_test.c:1548
KASAN failure expected in "test_and_change_bit(nr, addr)", but none occurred
not ok 61 kasan_bitops_generic
Samuel Holland (2):
riscv: Omit optimized string routines when using KASAN
riscv: Enable bitops instrumentation
arch/riscv/include/asm/bitops.h | 43 ++++++++++++++++++---------------
arch/riscv/include/asm/string.h | 2 ++
arch/riscv/kernel/riscv_ksyms.c | 3 ---
arch/riscv/lib/Makefile | 2 ++
arch/riscv/lib/strcmp.S | 1 +
arch/riscv/lib/strlen.S | 1 +
arch/riscv/lib/strncmp.S | 1 +
arch/riscv/purgatory/Makefile | 2 ++
8 files changed, 32 insertions(+), 23 deletions(-)
--
2.45.1
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
* [PATCH 1/2] riscv: Omit optimized string routines when using KASAN
From: Samuel Holland @ 2024-08-01 3:36 UTC (permalink / raw)
To: Palmer Dabbelt, linux-riscv
Cc: Yury Norov, Rasmus Villemoes, linux-kernel, Samuel Holland

The optimized string routines are implemented in assembly, so they are
not instrumented for use with KASAN. Fall back to the C version of the
routines in order to improve KASAN coverage. This fixes the
kasan_strings() unit test.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
---
 arch/riscv/include/asm/string.h | 2 ++
 arch/riscv/kernel/riscv_ksyms.c | 3 ---
 arch/riscv/lib/Makefile         | 2 ++
 arch/riscv/lib/strcmp.S         | 1 +
 arch/riscv/lib/strlen.S         | 1 +
 arch/riscv/lib/strncmp.S        | 1 +
 arch/riscv/purgatory/Makefile   | 2 ++
 7 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/include/asm/string.h b/arch/riscv/include/asm/string.h
index a96b1fea24fe..5ba77f60bf0b 100644
--- a/arch/riscv/include/asm/string.h
+++ b/arch/riscv/include/asm/string.h
@@ -19,6 +19,7 @@ extern asmlinkage void *__memcpy(void *, const void *, size_t);
 extern asmlinkage void *memmove(void *, const void *, size_t);
 extern asmlinkage void *__memmove(void *, const void *, size_t);
 
+#if !(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
 #define __HAVE_ARCH_STRCMP
 extern asmlinkage int strcmp(const char *cs, const char *ct);
 
@@ -27,6 +28,7 @@ extern asmlinkage __kernel_size_t strlen(const char *);
 
 #define __HAVE_ARCH_STRNCMP
 extern asmlinkage int strncmp(const char *cs, const char *ct, size_t count);
+#endif
 
 /* For those files which don't want to check by kasan. */
 #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
diff --git a/arch/riscv/kernel/riscv_ksyms.c b/arch/riscv/kernel/riscv_ksyms.c
index a72879b4249a..5ab1c7e1a6ed 100644
--- a/arch/riscv/kernel/riscv_ksyms.c
+++ b/arch/riscv/kernel/riscv_ksyms.c
@@ -12,9 +12,6 @@
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
 EXPORT_SYMBOL(memmove);
-EXPORT_SYMBOL(strcmp);
-EXPORT_SYMBOL(strlen);
-EXPORT_SYMBOL(strncmp);
 EXPORT_SYMBOL(__memset);
 EXPORT_SYMBOL(__memcpy);
 EXPORT_SYMBOL(__memmove);
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
index 2b369f51b0a5..8eec6b69a875 100644
--- a/arch/riscv/lib/Makefile
+++ b/arch/riscv/lib/Makefile
@@ -3,9 +3,11 @@ lib-y += delay.o
 lib-y += memcpy.o
 lib-y += memset.o
 lib-y += memmove.o
+ifeq ($(CONFIG_KASAN_GENERIC)$(CONFIG_KASAN_SW_TAGS),)
 lib-y += strcmp.o
 lib-y += strlen.o
 lib-y += strncmp.o
+endif
 lib-y += csum.o
 ifeq ($(CONFIG_MMU), y)
 lib-$(CONFIG_RISCV_ISA_V) += uaccess_vector.o
diff --git a/arch/riscv/lib/strcmp.S b/arch/riscv/lib/strcmp.S
index 687b2bea5c43..542301a67a2f 100644
--- a/arch/riscv/lib/strcmp.S
+++ b/arch/riscv/lib/strcmp.S
@@ -120,3 +120,4 @@ strcmp_zbb:
 .option pop
 #endif
 SYM_FUNC_END(strcmp)
+EXPORT_SYMBOL(strcmp)
diff --git a/arch/riscv/lib/strlen.S b/arch/riscv/lib/strlen.S
index 8ae3064e45ff..962983b73251 100644
--- a/arch/riscv/lib/strlen.S
+++ b/arch/riscv/lib/strlen.S
@@ -131,3 +131,4 @@ strlen_zbb:
 #endif
 SYM_FUNC_END(strlen)
 SYM_FUNC_ALIAS(__pi_strlen, strlen)
+EXPORT_SYMBOL(strlen)
diff --git a/arch/riscv/lib/strncmp.S b/arch/riscv/lib/strncmp.S
index aba5b3148621..0f359ea2f55b 100644
--- a/arch/riscv/lib/strncmp.S
+++ b/arch/riscv/lib/strncmp.S
@@ -136,3 +136,4 @@ strncmp_zbb:
 .option pop
 #endif
 SYM_FUNC_END(strncmp)
+EXPORT_SYMBOL(strncmp)
diff --git a/arch/riscv/purgatory/Makefile b/arch/riscv/purgatory/Makefile
index f11945ee2490..fb9c917c9b45 100644
--- a/arch/riscv/purgatory/Makefile
+++ b/arch/riscv/purgatory/Makefile
@@ -1,7 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0
 
 purgatory-y := purgatory.o sha256.o entry.o string.o ctype.o memcpy.o memset.o
+ifeq ($(CONFIG_KASAN_GENERIC)$(CONFIG_KASAN_SW_TAGS),)
 purgatory-y += strcmp.o strlen.o strncmp.o
+endif
 
 targets += $(purgatory-y)
 PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y))
-- 
2.45.1
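Both halves of the patch use the same selection idea: when either KASAN mode is enabled, the `__HAVE_ARCH_*` macros are not defined and the assembly objects are not built, so the generic C routines in `lib/string.c` — which the compiler instruments — take over. A condensed sketch of the header side (an illustration, not the literal file):

```c
/* Sketch: with KASAN enabled, don't advertise the arch strlen() at all. */
#if !(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
#define __HAVE_ARCH_STRLEN
extern asmlinkage __kernel_size_t strlen(const char *);
#endif
/* When __HAVE_ARCH_STRLEN is not defined, lib/string.c compiles its own
 * C strlen(), and KASAN instruments every byte load inside it. */
```

The Makefile side mirrors this with `ifeq ($(CONFIG_KASAN_GENERIC)$(CONFIG_KASAN_SW_TAGS),)`: each variable expands to `y` when set and to nothing when unset, so the concatenation is empty — and the assembly objects are built — only in non-KASAN configurations.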
* Re: [PATCH 1/2] riscv: Omit optimized string routines when using KASAN
From: Alexandre Ghiti @ 2024-08-01 10:26 UTC (permalink / raw)
To: Samuel Holland, Palmer Dabbelt, linux-riscv
Cc: Yury Norov, Rasmus Villemoes, linux-kernel

Hi Samuel,

On 01/08/2024 05:36, Samuel Holland wrote:
> The optimized string routines are implemented in assembly, so they are
> not instrumented for use with KASAN. Fall back to the C version of the
> routines in order to improve KASAN coverage. This fixes the
> kasan_strings() unit test.
>
> Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
[...]
> --- a/arch/riscv/include/asm/string.h
> +++ b/arch/riscv/include/asm/string.h
> @@ -19,6 +19,7 @@ extern asmlinkage void *__memcpy(void *, const void *, size_t);
>  extern asmlinkage void *memmove(void *, const void *, size_t);
>  extern asmlinkage void *__memmove(void *, const void *, size_t);
>
> +#if !(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))

We do not support KASAN_SW_TAGS, so there is no need for this #ifdef.

[...]

With the removal of KASAN_SW_TAGS, you can add:

Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>

And since I have this testsuite in my CI, I gave it a try and it works, so:

Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com>

Thanks,

Alex
* Re: [PATCH 1/2] riscv: Omit optimized string routines when using KASAN
From: Samuel Holland @ 2024-08-14 9:06 UTC (permalink / raw)
To: Alexandre Ghiti, Palmer Dabbelt, linux-riscv
Cc: Yury Norov, Rasmus Villemoes, linux-kernel

Hi Alex,

On 2024-08-01 5:26 AM, Alexandre Ghiti wrote:
> On 01/08/2024 05:36, Samuel Holland wrote:
>> +#if !(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
>
> We do not support KASAN_SW_TAGS so there is no need for this #ifdef.

I just sent an RFC implementation of KASAN_SW_TAGS[1] (which you
wouldn't have known I was working on at the time :) ). Since these
changes will be needed for both modes, it made sense to me to go ahead
and cover both at once.

Regards,
Samuel

[1]: https://lore.kernel.org/all/20240814085618.968833-1-samuel.holland@sifive.com/
* [PATCH 2/2] riscv: Enable bitops instrumentation
From: Samuel Holland @ 2024-08-01 3:37 UTC (permalink / raw)
To: Palmer Dabbelt, linux-riscv
Cc: Yury Norov, Rasmus Villemoes, linux-kernel, Samuel Holland

Instead of implementing the bitops functions directly in assembly,
provide the arch_-prefixed versions and use the wrappers from
asm-generic to add instrumentation. This improves KASAN coverage and
fixes the kasan_bitops_generic() unit test.

Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
---
 arch/riscv/include/asm/bitops.h | 43 ++++++++++++++++++---------------
 1 file changed, 23 insertions(+), 20 deletions(-)

diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
index 71af9ecfcfcb..fae152ea0508 100644
--- a/arch/riscv/include/asm/bitops.h
+++ b/arch/riscv/include/asm/bitops.h
@@ -222,44 +222,44 @@ static __always_inline int variable_fls(unsigned int x)
 #define __NOT(x) (~(x))
 
 /**
- * test_and_set_bit - Set a bit and return its old value
+ * arch_test_and_set_bit - Set a bit and return its old value
  * @nr: Bit to set
  * @addr: Address to count from
  *
  * This operation may be reordered on other architectures than x86.
  */
-static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+static inline int arch_test_and_set_bit(int nr, volatile unsigned long *addr)
 {
 	return __test_and_op_bit(or, __NOP, nr, addr);
 }
 
 /**
- * test_and_clear_bit - Clear a bit and return its old value
+ * arch_test_and_clear_bit - Clear a bit and return its old value
  * @nr: Bit to clear
  * @addr: Address to count from
  *
  * This operation can be reordered on other architectures other than x86.
  */
-static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
+static inline int arch_test_and_clear_bit(int nr, volatile unsigned long *addr)
 {
 	return __test_and_op_bit(and, __NOT, nr, addr);
 }
 
 /**
- * test_and_change_bit - Change a bit and return its old value
+ * arch_test_and_change_bit - Change a bit and return its old value
  * @nr: Bit to change
  * @addr: Address to count from
  *
  * This operation is atomic and cannot be reordered.
  * It also implies a memory barrier.
  */
-static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
+static inline int arch_test_and_change_bit(int nr, volatile unsigned long *addr)
 {
 	return __test_and_op_bit(xor, __NOP, nr, addr);
 }
 
 /**
- * set_bit - Atomically set a bit in memory
+ * arch_set_bit - Atomically set a bit in memory
  * @nr: the bit to set
  * @addr: the address to start counting from
  *
@@ -270,13 +270,13 @@ static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
  * Note that @nr may be almost arbitrarily large; this function is not
  * restricted to acting on a single-word quantity.
  */
-static inline void set_bit(int nr, volatile unsigned long *addr)
+static inline void arch_set_bit(int nr, volatile unsigned long *addr)
 {
 	__op_bit(or, __NOP, nr, addr);
 }
 
 /**
- * clear_bit - Clears a bit in memory
+ * arch_clear_bit - Clears a bit in memory
  * @nr: Bit to clear
  * @addr: Address to start counting from
  *
@@ -284,13 +284,13 @@ static inline void set_bit(int nr, volatile unsigned long *addr)
  * on non x86 architectures, so if you are writing portable code,
  * make sure not to rely on its reordering guarantees.
  */
-static inline void clear_bit(int nr, volatile unsigned long *addr)
+static inline void arch_clear_bit(int nr, volatile unsigned long *addr)
 {
 	__op_bit(and, __NOT, nr, addr);
 }
 
 /**
- * change_bit - Toggle a bit in memory
+ * arch_change_bit - Toggle a bit in memory
  * @nr: Bit to change
  * @addr: Address to start counting from
  *
@@ -298,40 +298,40 @@ static inline void clear_bit(int nr, volatile unsigned long *addr)
  * Note that @nr may be almost arbitrarily large; this function is not
  * restricted to acting on a single-word quantity.
  */
-static inline void change_bit(int nr, volatile unsigned long *addr)
+static inline void arch_change_bit(int nr, volatile unsigned long *addr)
 {
 	__op_bit(xor, __NOP, nr, addr);
 }
 
 /**
- * test_and_set_bit_lock - Set a bit and return its old value, for lock
+ * arch_test_and_set_bit_lock - Set a bit and return its old value, for lock
  * @nr: Bit to set
  * @addr: Address to count from
  *
  * This operation is atomic and provides acquire barrier semantics.
  * It can be used to implement bit locks.
  */
-static inline int test_and_set_bit_lock(
+static inline int arch_test_and_set_bit_lock(
 	unsigned long nr, volatile unsigned long *addr)
 {
 	return __test_and_op_bit_ord(or, __NOP, nr, addr, .aq);
 }
 
 /**
- * clear_bit_unlock - Clear a bit in memory, for unlock
+ * arch_clear_bit_unlock - Clear a bit in memory, for unlock
  * @nr: the bit to set
  * @addr: the address to start counting from
  *
  * This operation is atomic and provides release barrier semantics.
  */
-static inline void clear_bit_unlock(
+static inline void arch_clear_bit_unlock(
 	unsigned long nr, volatile unsigned long *addr)
 {
 	__op_bit_ord(and, __NOT, nr, addr, .rl);
 }
 
 /**
- * __clear_bit_unlock - Clear a bit in memory, for unlock
+ * arch___clear_bit_unlock - Clear a bit in memory, for unlock
  * @nr: the bit to set
  * @addr: the address to start counting from
  *
@@ -345,13 +345,13 @@ static inline void clear_bit_unlock(
  * non-atomic property here: it's a lot more instructions and we still have to
  * provide release semantics anyway.
  */
-static inline void __clear_bit_unlock(
+static inline void arch___clear_bit_unlock(
 	unsigned long nr, volatile unsigned long *addr)
 {
-	clear_bit_unlock(nr, addr);
+	arch_clear_bit_unlock(nr, addr);
 }
 
-static inline bool xor_unlock_is_negative_byte(unsigned long mask,
+static inline bool arch_xor_unlock_is_negative_byte(unsigned long mask,
 		volatile unsigned long *addr)
 {
 	unsigned long res;
@@ -369,6 +369,9 @@ static inline bool xor_unlock_is_negative_byte(unsigned long mask,
 #undef __NOT
 #undef __AMO
 
+#include <asm-generic/bitops/instrumented-atomic.h>
+#include <asm-generic/bitops/instrumented-lock.h>
+
 #include <asm-generic/bitops/non-atomic.h>
 #include <asm-generic/bitops/le.h>
 #include <asm-generic/bitops/ext2-atomic.h>
-- 
2.45.1
* Re: [PATCH 2/2] riscv: Enable bitops instrumentation
From: Alexandre Ghiti @ 2024-08-01 10:31 UTC (permalink / raw)
To: Samuel Holland, Palmer Dabbelt, linux-riscv
Cc: Yury Norov, Rasmus Villemoes, linux-kernel

On 01/08/2024 05:37, Samuel Holland wrote:
> Instead of implementing the bitops functions directly in assembly,
> provide the arch_-prefixed versions and use the wrappers from
> asm-generic to add instrumentation. This improves KASAN coverage and
> fixes the kasan_bitops_generic() unit test.
>
> Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
[...]

Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>

Tested-by: Alexandre Ghiti <alexghiti@rivosinc.com>

Thanks,

Alex
* Re: [PATCH 0/2] riscv: Improve KASAN coverage to fix unit tests
From: patchwork-bot+linux-riscv @ 2024-09-20 8:20 UTC (permalink / raw)
To: Samuel Holland; +Cc: linux-riscv, palmer, yury.norov, linux, linux-kernel

Hello:

This series was applied to riscv/linux.git (for-next)
by Palmer Dabbelt <palmer@rivosinc.com>:

On Wed, 31 Jul 2024 20:36:58 -0700 you wrote:
> This series fixes two areas where uninstrumented assembly routines
> caused gaps in KASAN coverage on RISC-V, which were caught by KUnit
> tests. The KASAN KUnit test suite passes after applying this series.
> [...]

Here is the summary with links:
  - [1/2] riscv: Omit optimized string routines when using KASAN
    https://git.kernel.org/riscv/c/58ff537109ac
  - [2/2] riscv: Enable bitops instrumentation
    https://git.kernel.org/riscv/c/77514915b72c

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html