* [PATCH 11/82] arm64: atomics: lse: Silence intentional wrapping addition
[not found] <20240122235208.work.748-kees@kernel.org>
@ 2024-01-23 0:26 ` Kees Cook
2024-01-23 9:53 ` Mark Rutland
2024-01-23 0:26 ` [PATCH 24/82] KVM: arm64: vgic: Refactor intentional wrap-around calculation Kees Cook
` (3 subsequent siblings)
4 siblings, 1 reply; 13+ messages in thread
From: Kees Cook @ 2024-01-23 0:26 UTC (permalink / raw)
To: linux-hardening
Cc: Kees Cook, Will Deacon, Peter Zijlstra, Boqun Feng, Mark Rutland,
Catalin Marinas, linux-arm-kernel, Gustavo A. R. Silva,
Bill Wendling, Justin Stitt, linux-kernel
Annotate atomic_add_return() and atomic_sub_return() to avoid signed
overflow instrumentation. They are expected to wrap around.
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
arch/arm64/include/asm/atomic_lse.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
index 87f568a94e55..30572458d702 100644
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -79,13 +79,13 @@ ATOMIC_FETCH_OP_SUB( )
#undef ATOMIC_FETCH_OP_SUB
#define ATOMIC_OP_ADD_SUB_RETURN(name) \
-static __always_inline int \
+static __always_inline __signed_wrap int \
__lse_atomic_add_return##name(int i, atomic_t *v) \
{ \
return __lse_atomic_fetch_add##name(i, v) + i; \
} \
\
-static __always_inline int \
+static __always_inline __signed_wrap int \
__lse_atomic_sub_return##name(int i, atomic_t *v) \
{ \
return __lse_atomic_fetch_sub(i, v) - i; \
@@ -186,13 +186,13 @@ ATOMIC64_FETCH_OP_SUB( )
#undef ATOMIC64_FETCH_OP_SUB
#define ATOMIC64_OP_ADD_SUB_RETURN(name) \
-static __always_inline long \
+static __always_inline __signed_wrap long \
__lse_atomic64_add_return##name(s64 i, atomic64_t *v) \
{ \
return __lse_atomic64_fetch_add##name(i, v) + i; \
} \
\
-static __always_inline long \
+static __always_inline __signed_wrap long \
__lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \
{ \
return __lse_atomic64_fetch_sub##name(i, v) - i; \
--
2.34.1
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply related [flat|nested] 13+ messages in thread
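For readers following along, the wrapping-addition semantics the patch wants to preserve can be sketched in plain C. This is an illustrative stand-in, not the kernel's actual `__signed_wrap` annotation (which is a function attribute, not a helper): doing the arithmetic in the matching unsigned type makes the wrap well defined, and the conversion back to `int` is two's-complement on every platform the kernel supports.

```c
#include <limits.h>

/*
 * Hypothetical sketch of well-defined signed wrap-around addition.
 * Unsigned addition wraps modulo 2^N per the C standard; the cast
 * back to int relies on two's-complement representation.
 */
static inline int wrapping_add_int(int a, int b)
{
	return (int)((unsigned int)a + (unsigned int)b);
}
```

Under a signed-overflow sanitizer, a helper of this shape would not trip instrumentation, whereas a plain `a + b` would.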
* [PATCH 24/82] KVM: arm64: vgic: Refactor intentional wrap-around calculation
[not found] <20240122235208.work.748-kees@kernel.org>
2024-01-23 0:26 ` [PATCH 11/82] arm64: atomics: lse: Silence intentional wrapping addition Kees Cook
@ 2024-01-23 0:26 ` Kees Cook
2024-01-23 10:49 ` Marc Zyngier
2024-01-23 0:27 ` [PATCH 38/82] arm: 3117/1: Refactor intentional wrap-around test Kees Cook
` (2 subsequent siblings)
4 siblings, 1 reply; 13+ messages in thread
From: Kees Cook @ 2024-01-23 0:26 UTC (permalink / raw)
To: linux-hardening
Cc: Kees Cook, Marc Zyngier, Oliver Upton, James Morse,
Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
Reiji Watanabe, Eric Auger, Ricardo Koller,
Raghavendra Rao Ananta, Quentin Perret, Jean-Philippe Brucker,
linux-arm-kernel, kvmarm, Gustavo A. R. Silva, Bill Wendling,
Justin Stitt, linux-kernel
In an effort to separate intentional arithmetic wrap-around from
unexpected wrap-around, we need to refactor places that depend on this
kind of math. One of the most common code patterns of this is:
VAR + value < VAR
Notably, this is considered "undefined behavior" for signed and pointer
types, which the kernel works around by using the -fno-strict-overflow
option in the build[1] (which used to just be -fwrapv). Regardless, we
want to get the kernel source to the position where we can meaningfully
instrument arithmetic wrap-around conditions and catch them when they
are unexpected, regardless of whether they are signed[2], unsigned[3],
or pointer[4] types.
Refactor open-coded unsigned wrap-around addition test to use
check_add_overflow(), retaining the result for later usage (which removes
the redundant open-coded addition). This paves the way to enabling the
wrap-around sanitizers in the future.
Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
Link: https://github.com/KSPP/linux/issues/26 [2]
Link: https://github.com/KSPP/linux/issues/27 [3]
Link: https://github.com/KSPP/linux/issues/344 [4]
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Zenghui Yu <yuzenghui@huawei.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Reiji Watanabe <reijiw@google.com>
Cc: Eric Auger <eric.auger@redhat.com>
Cc: Ricardo Koller <ricarkol@google.com>
Cc: Raghavendra Rao Ananta <rananta@google.com>
Cc: Quentin Perret <qperret@google.com>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.linux.dev
Signed-off-by: Kees Cook <keescook@chromium.org>
---
arch/arm64/kvm/vgic/vgic-kvm-device.c | 6 ++++--
arch/arm64/kvm/vgic/vgic-v2.c | 10 ++++++----
2 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
index f48b8dab8b3d..0eec5344d203 100644
--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
+++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
@@ -18,17 +18,19 @@ int vgic_check_iorange(struct kvm *kvm, phys_addr_t ioaddr,
phys_addr_t addr, phys_addr_t alignment,
phys_addr_t size)
{
+ phys_addr_t sum;
+
if (!IS_VGIC_ADDR_UNDEF(ioaddr))
return -EEXIST;
if (!IS_ALIGNED(addr, alignment) || !IS_ALIGNED(size, alignment))
return -EINVAL;
- if (addr + size < addr)
+ if (check_add_overflow(addr, size, &sum))
return -EINVAL;
if (addr & ~kvm_phys_mask(&kvm->arch.mmu) ||
- (addr + size) > kvm_phys_size(&kvm->arch.mmu))
+ sum > kvm_phys_size(&kvm->arch.mmu))
return -E2BIG;
return 0;
diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
index 7e9cdb78f7ce..c8d1e965d3b7 100644
--- a/arch/arm64/kvm/vgic/vgic-v2.c
+++ b/arch/arm64/kvm/vgic/vgic-v2.c
@@ -273,14 +273,16 @@ void vgic_v2_enable(struct kvm_vcpu *vcpu)
/* check for overlapping regions and for regions crossing the end of memory */
static bool vgic_v2_check_base(gpa_t dist_base, gpa_t cpu_base)
{
- if (dist_base + KVM_VGIC_V2_DIST_SIZE < dist_base)
+ gpa_t dist_sum, cpu_sum;
+
+ if (check_add_overflow(dist_base, KVM_VGIC_V2_DIST_SIZE, &dist_sum))
return false;
- if (cpu_base + KVM_VGIC_V2_CPU_SIZE < cpu_base)
+ if (check_add_overflow(cpu_base, KVM_VGIC_V2_CPU_SIZE, &cpu_sum))
return false;
- if (dist_base + KVM_VGIC_V2_DIST_SIZE <= cpu_base)
+ if (dist_sum <= cpu_base)
return true;
- if (cpu_base + KVM_VGIC_V2_CPU_SIZE <= dist_base)
+ if (cpu_sum <= dist_base)
return true;
return false;
--
2.34.1
^ permalink raw reply related [flat|nested] 13+ messages in thread
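The vgic_v2_check_base() logic above can be sketched as a standalone function. The name, the explicit size parameters, and the use of the raw `__builtin_add_overflow()` compiler intrinsic (which the kernel's type-generic `check_add_overflow()` wraps) are illustrative assumptions, not the kernel's API:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gpa_t;	/* assumption: 64-bit guest physical addresses */

/*
 * Sketch of the refactored check: both regions must fit without
 * wrap-around, and must not overlap. On success the builtin stores
 * the end address, so no redundant open-coded addition is needed.
 */
static inline bool regions_ok(gpa_t a_base, gpa_t a_size,
			      gpa_t b_base, gpa_t b_size)
{
	gpa_t a_end, b_end;

	if (__builtin_add_overflow(a_base, a_size, &a_end))
		return false;
	if (__builtin_add_overflow(b_base, b_size, &b_end))
		return false;
	/* disjoint iff one region ends at or before the other begins */
	return a_end <= b_base || b_end <= a_base;
}
```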
* [PATCH 38/82] arm: 3117/1: Refactor intentional wrap-around test
[not found] <20240122235208.work.748-kees@kernel.org>
2024-01-23 0:26 ` [PATCH 11/82] arm64: atomics: lse: Silence intentional wrapping addition Kees Cook
2024-01-23 0:26 ` [PATCH 24/82] KVM: arm64: vgic: Refactor intentional wrap-around calculation Kees Cook
@ 2024-01-23 0:27 ` Kees Cook
2024-01-23 9:56 ` Mark Rutland
2024-01-23 0:27 ` [PATCH 40/82] arm64: stacktrace: " Kees Cook
2024-01-23 0:27 ` [PATCH 57/82] KVM: arm64: vgic-v3: " Kees Cook
4 siblings, 1 reply; 13+ messages in thread
From: Kees Cook @ 2024-01-23 0:27 UTC (permalink / raw)
To: linux-hardening
Cc: Kees Cook, Russell King, linux-arm-kernel, Gustavo A. R. Silva,
Bill Wendling, Justin Stitt, linux-kernel
In an effort to separate intentional arithmetic wrap-around from
unexpected wrap-around, we need to refactor places that depend on this
kind of math. One of the most common code patterns of this is:
VAR + value < VAR
Notably, this is considered "undefined behavior" for signed and pointer
types, which the kernel works around by using the -fno-strict-overflow
option in the build[1] (which used to just be -fwrapv). Regardless, we
want to get the kernel source to the position where we can meaningfully
instrument arithmetic wrap-around conditions and catch them when they
are unexpected, regardless of whether they are signed[2], unsigned[3],
or pointer[4] types.
Refactor open-coded wrap-around addition test to use add_would_overflow().
This paves the way to enabling the wrap-around sanitizers in the future.
Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
Link: https://github.com/KSPP/linux/issues/26 [2]
Link: https://github.com/KSPP/linux/issues/27 [3]
Link: https://github.com/KSPP/linux/issues/344 [4]
Cc: Russell King <linux@armlinux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
arch/arm/nwfpe/softfloat.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/nwfpe/softfloat.c b/arch/arm/nwfpe/softfloat.c
index ffa6b438786b..0635b1eda1d3 100644
--- a/arch/arm/nwfpe/softfloat.c
+++ b/arch/arm/nwfpe/softfloat.c
@@ -603,7 +603,7 @@ static floatx80
roundBits = zSig0 & roundMask;
if ( 0x7FFD <= (bits32) ( zExp - 1 ) ) {
if ( ( 0x7FFE < zExp )
- || ( ( zExp == 0x7FFE ) && ( zSig0 + roundIncrement < zSig0 ) )
+ || ( ( zExp == 0x7FFE ) && (add_would_overflow(zSig0, roundIncrement)) )
) {
goto overflow;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 13+ messages in thread
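An add_would_overflow()-style predicate for unsigned types can be written without compiler builtins at all. This is a sketch under the assumption of a 64-bit unsigned operand type (the softfloat code's `bits64`); the helper name mirrors the patch but is not the kernel's definition:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t bits64;	/* mirrors the nwfpe softfloat type */

/*
 * Unsigned a + b wraps exactly when b exceeds the headroom left
 * above a, so the test never performs the overflowing addition —
 * which is what lets wrap-around sanitizers stay enabled.
 */
static inline bool add_would_overflow_u64(bits64 a, bits64 b)
{
	return b > UINT64_MAX - a;
}
```

This is equivalent to the open-coded `a + b < a` idiom being replaced, but expresses the intent directly instead of relying on the wrapped result.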
* [PATCH 40/82] arm64: stacktrace: Refactor intentional wrap-around test
[not found] <20240122235208.work.748-kees@kernel.org>
` (2 preceding siblings ...)
2024-01-23 0:27 ` [PATCH 38/82] arm: 3117/1: Refactor intentional wrap-around test Kees Cook
@ 2024-01-23 0:27 ` Kees Cook
2024-01-23 9:58 ` Mark Rutland
2024-01-23 0:27 ` [PATCH 57/82] KVM: arm64: vgic-v3: " Kees Cook
4 siblings, 1 reply; 13+ messages in thread
From: Kees Cook @ 2024-01-23 0:27 UTC (permalink / raw)
To: linux-hardening
Cc: Kees Cook, Catalin Marinas, Will Deacon, Kalesh Singh, Fuad Tabba,
Mark Brown, Madhavan T. Venkataraman, Marc Zyngier, Mark Rutland,
linux-arm-kernel, Gustavo A. R. Silva, Bill Wendling,
Justin Stitt, linux-kernel
In an effort to separate intentional arithmetic wrap-around from
unexpected wrap-around, we need to refactor places that depend on this
kind of math. One of the most common code patterns of this is:
VAR + value < VAR
Notably, this is considered "undefined behavior" for signed and pointer
types, which the kernel works around by using the -fno-strict-overflow
option in the build[1] (which used to just be -fwrapv). Regardless, we
want to get the kernel source to the position where we can meaningfully
instrument arithmetic wrap-around conditions and catch them when they
are unexpected, regardless of whether they are signed[2], unsigned[3],
or pointer[4] types.
Refactor open-coded wrap-around addition test to use add_would_overflow().
This paves the way to enabling the wrap-around sanitizers in the future.
Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
Link: https://github.com/KSPP/linux/issues/26 [2]
Link: https://github.com/KSPP/linux/issues/27 [3]
Link: https://github.com/KSPP/linux/issues/344 [4]
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
arch/arm64/include/asm/stacktrace/common.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index f63dc654e545..6e0cb84961f8 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -49,7 +49,7 @@ static inline bool stackinfo_on_stack(const struct stack_info *info,
if (!info->low)
return false;
- if (sp < info->low || sp + size < sp || sp + size > info->high)
+ if (sp < info->low || add_would_overflow(sp, size) || sp + size > info->high)
return false;
return true;
--
2.34.1
^ permalink raw reply related [flat|nested] 13+ messages in thread
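The stackinfo_on_stack() range check above can be sketched as an overflow-safe bounds test. The standalone shape, parameter names, and use of the raw `__builtin_add_overflow()` intrinsic are illustrative assumptions; the kernel version operates on a `struct stack_info`:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch: [sp, sp + size) must lie entirely within [low, high].
 * Computing the end address via the overflow-checking builtin
 * replaces the "sp + size < sp" wrap-around idiom.
 */
static inline bool range_on_stack(uint64_t sp, uint64_t size,
				  uint64_t low, uint64_t high)
{
	uint64_t end;

	if (__builtin_add_overflow(sp, size, &end))
		return false;
	return sp >= low && end <= high;
}
```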
* [PATCH 57/82] KVM: arm64: vgic-v3: Refactor intentional wrap-around test
[not found] <20240122235208.work.748-kees@kernel.org>
` (3 preceding siblings ...)
2024-01-23 0:27 ` [PATCH 40/82] arm64: stacktrace: " Kees Cook
@ 2024-01-23 0:27 ` Kees Cook
2024-01-23 10:50 ` Marc Zyngier
2024-01-24 15:12 ` Eric Auger
4 siblings, 2 replies; 13+ messages in thread
From: Kees Cook @ 2024-01-23 0:27 UTC (permalink / raw)
To: linux-hardening
Cc: Kees Cook, Marc Zyngier, Oliver Upton, James Morse,
Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
Eric Auger, linux-arm-kernel, kvmarm, Gustavo A. R. Silva,
Bill Wendling, Justin Stitt, linux-kernel
In an effort to separate intentional arithmetic wrap-around from
unexpected wrap-around, we need to refactor places that depend on this
kind of math. One of the most common code patterns of this is:
VAR + value < VAR
Notably, this is considered "undefined behavior" for signed and pointer
types, which the kernel works around by using the -fno-strict-overflow
option in the build[1] (which used to just be -fwrapv). Regardless, we
want to get the kernel source to the position where we can meaningfully
instrument arithmetic wrap-around conditions and catch them when they
are unexpected, regardless of whether they are signed[2], unsigned[3],
or pointer[4] types.
Refactor open-coded wrap-around addition test to use add_would_overflow().
This paves the way to enabling the wrap-around sanitizers in the future.
Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
Link: https://github.com/KSPP/linux/issues/26 [2]
Link: https://github.com/KSPP/linux/issues/27 [3]
Link: https://github.com/KSPP/linux/issues/344 [4]
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Zenghui Yu <yuzenghui@huawei.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Eric Auger <eric.auger@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.linux.dev
Signed-off-by: Kees Cook <keescook@chromium.org>
---
arch/arm64/kvm/vgic/vgic-mmio-v3.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
index c15ee1df036a..860b774c0c13 100644
--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
@@ -863,7 +863,7 @@ static int vgic_v3_alloc_redist_region(struct kvm *kvm, uint32_t index,
int ret;
/* cross the end of memory ? */
- if (base + size < base)
+ if (add_would_overflow(base, size))
return -EINVAL;
if (list_empty(rd_regions)) {
--
2.34.1
^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [PATCH 11/82] arm64: atomics: lse: Silence intentional wrapping addition
2024-01-23 0:26 ` [PATCH 11/82] arm64: atomics: lse: Silence intentional wrapping addition Kees Cook
@ 2024-01-23 9:53 ` Mark Rutland
0 siblings, 0 replies; 13+ messages in thread
From: Mark Rutland @ 2024-01-23 9:53 UTC (permalink / raw)
To: Kees Cook
Cc: linux-hardening, Will Deacon, Peter Zijlstra, Boqun Feng,
Catalin Marinas, linux-arm-kernel, Gustavo A. R. Silva,
Bill Wendling, Justin Stitt, linux-kernel
On Mon, Jan 22, 2024 at 04:26:46PM -0800, Kees Cook wrote:
> Annotate atomic_add_return() and atomic_sub_return() to avoid signed
> overflow instrumentation. They are expected to wrap around.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> arch/arm64/include/asm/atomic_lse.h | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
> index 87f568a94e55..30572458d702 100644
> --- a/arch/arm64/include/asm/atomic_lse.h
> +++ b/arch/arm64/include/asm/atomic_lse.h
> @@ -79,13 +79,13 @@ ATOMIC_FETCH_OP_SUB( )
> #undef ATOMIC_FETCH_OP_SUB
>
> #define ATOMIC_OP_ADD_SUB_RETURN(name) \
> -static __always_inline int \
> +static __always_inline __signed_wrap int \
> __lse_atomic_add_return##name(int i, atomic_t *v) \
> { \
> return __lse_atomic_fetch_add##name(i, v) + i; \
> } \
I'd strongly prefer using add_wrap() rather than annotating the function, i.e.
make this:
static __always_inline int \
__lse_atomic_add_return##name(int i, atomic_t *v) \
{ \
return add_wrap(__lse_atomic_fetch_add##name(i, v), i); \
} \
Likewise for the other instances below.
With that, this looks fine to me.
Mark.
> \
> -static __always_inline int \
> +static __always_inline __signed_wrap int \
> __lse_atomic_sub_return##name(int i, atomic_t *v) \
> { \
> return __lse_atomic_fetch_sub(i, v) - i; \
> @@ -186,13 +186,13 @@ ATOMIC64_FETCH_OP_SUB( )
> #undef ATOMIC64_FETCH_OP_SUB
>
> #define ATOMIC64_OP_ADD_SUB_RETURN(name) \
> -static __always_inline long \
> +static __always_inline __signed_wrap long \
> __lse_atomic64_add_return##name(s64 i, atomic64_t *v) \
> { \
> return __lse_atomic64_fetch_add##name(i, v) + i; \
> } \
> \
> -static __always_inline long \
> +static __always_inline __signed_wrap long \
> __lse_atomic64_sub_return##name(s64 i, atomic64_t *v) \
> { \
> return __lse_atomic64_fetch_sub##name(i, v) - i; \
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 38/82] arm: 3117/1: Refactor intentional wrap-around test
2024-01-23 0:27 ` [PATCH 38/82] arm: 3117/1: Refactor intentional wrap-around test Kees Cook
@ 2024-01-23 9:56 ` Mark Rutland
2024-01-23 22:41 ` Kees Cook
0 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2024-01-23 9:56 UTC (permalink / raw)
To: Kees Cook
Cc: linux-hardening, Russell King, linux-arm-kernel,
Gustavo A. R. Silva, Bill Wendling, Justin Stitt, linux-kernel
The commit title is odd here; '3117/1' is the patch tracker name for the last
patch. The title should probably be:
arm: nwfpe: Refactor intentional wrap-around test
Mark.
On Mon, Jan 22, 2024 at 04:27:13PM -0800, Kees Cook wrote:
> In an effort to separate intentional arithmetic wrap-around from
> unexpected wrap-around, we need to refactor places that depend on this
> kind of math. One of the most common code patterns of this is:
>
> VAR + value < VAR
>
> Notably, this is considered "undefined behavior" for signed and pointer
> types, which the kernel works around by using the -fno-strict-overflow
> option in the build[1] (which used to just be -fwrapv). Regardless, we
> want to get the kernel source to the position where we can meaningfully
> instrument arithmetic wrap-around conditions and catch them when they
> are unexpected, regardless of whether they are signed[2], unsigned[3],
> or pointer[4] types.
>
> Refactor open-coded wrap-around addition test to use add_would_overflow().
> This paves the way to enabling the wrap-around sanitizers in the future.
>
> Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
> Link: https://github.com/KSPP/linux/issues/26 [2]
> Link: https://github.com/KSPP/linux/issues/27 [3]
> Link: https://github.com/KSPP/linux/issues/344 [4]
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> arch/arm/nwfpe/softfloat.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm/nwfpe/softfloat.c b/arch/arm/nwfpe/softfloat.c
> index ffa6b438786b..0635b1eda1d3 100644
> --- a/arch/arm/nwfpe/softfloat.c
> +++ b/arch/arm/nwfpe/softfloat.c
> @@ -603,7 +603,7 @@ static floatx80
> roundBits = zSig0 & roundMask;
> if ( 0x7FFD <= (bits32) ( zExp - 1 ) ) {
> if ( ( 0x7FFE < zExp )
> - || ( ( zExp == 0x7FFE ) && ( zSig0 + roundIncrement < zSig0 ) )
> + || ( ( zExp == 0x7FFE ) && (add_would_overflow(zSig0, roundIncrement)) )
> ) {
> goto overflow;
> }
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 40/82] arm64: stacktrace: Refactor intentional wrap-around test
2024-01-23 0:27 ` [PATCH 40/82] arm64: stacktrace: " Kees Cook
@ 2024-01-23 9:58 ` Mark Rutland
0 siblings, 0 replies; 13+ messages in thread
From: Mark Rutland @ 2024-01-23 9:58 UTC (permalink / raw)
To: Kees Cook
Cc: linux-hardening, Catalin Marinas, Will Deacon, Kalesh Singh,
Fuad Tabba, Mark Brown, Madhavan T. Venkataraman, Marc Zyngier,
linux-arm-kernel, Gustavo A. R. Silva, Bill Wendling,
Justin Stitt, linux-kernel
On Mon, Jan 22, 2024 at 04:27:15PM -0800, Kees Cook wrote:
> In an effort to separate intentional arithmetic wrap-around from
> unexpected wrap-around, we need to refactor places that depend on this
> kind of math. One of the most common code patterns of this is:
>
> VAR + value < VAR
>
> Notably, this is considered "undefined behavior" for signed and pointer
> types, which the kernel works around by using the -fno-strict-overflow
> option in the build[1] (which used to just be -fwrapv). Regardless, we
> want to get the kernel source to the position where we can meaningfully
> instrument arithmetic wrap-around conditions and catch them when they
> are unexpected, regardless of whether they are signed[2], unsigned[3],
> or pointer[4] types.
>
> Refactor open-coded wrap-around addition test to use add_would_overflow().
> This paves the way to enabling the wrap-around sanitizers in the future.
>
> Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
> Link: https://github.com/KSPP/linux/issues/26 [2]
> Link: https://github.com/KSPP/linux/issues/27 [3]
> Link: https://github.com/KSPP/linux/issues/344 [4]
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Kalesh Singh <kaleshsingh@google.com>
> Cc: Fuad Tabba <tabba@google.com>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> arch/arm64/include/asm/stacktrace/common.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
> index f63dc654e545..6e0cb84961f8 100644
> --- a/arch/arm64/include/asm/stacktrace/common.h
> +++ b/arch/arm64/include/asm/stacktrace/common.h
> @@ -49,7 +49,7 @@ static inline bool stackinfo_on_stack(const struct stack_info *info,
> if (!info->low)
> return false;
>
> - if (sp < info->low || sp + size < sp || sp + size > info->high)
> + if (sp < info->low || add_would_overflow(sp, size) || sp + size > info->high)
> return false;
This looks fine to me, so FWIW:
Acked-by: Mark Rutland <mark.rutland@arm.com>
Mark.
>
> return true;
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 24/82] KVM: arm64: vgic: Refactor intentional wrap-around calculation
2024-01-23 0:26 ` [PATCH 24/82] KVM: arm64: vgic: Refactor intentional wrap-around calculation Kees Cook
@ 2024-01-23 10:49 ` Marc Zyngier
2024-01-24 15:13 ` Eric Auger
0 siblings, 1 reply; 13+ messages in thread
From: Marc Zyngier @ 2024-01-23 10:49 UTC (permalink / raw)
To: Kees Cook
Cc: linux-hardening, Oliver Upton, James Morse, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Reiji Watanabe,
Eric Auger, Ricardo Koller, Raghavendra Rao Ananta,
Quentin Perret, Jean-Philippe Brucker, linux-arm-kernel, kvmarm,
Gustavo A. R. Silva, Bill Wendling, Justin Stitt, linux-kernel
On Tue, 23 Jan 2024 00:26:59 +0000,
Kees Cook <keescook@chromium.org> wrote:
>
> In an effort to separate intentional arithmetic wrap-around from
> unexpected wrap-around, we need to refactor places that depend on this
> kind of math. One of the most common code patterns of this is:
>
> VAR + value < VAR
>
> Notably, this is considered "undefined behavior" for signed and pointer
> types, which the kernel works around by using the -fno-strict-overflow
> option in the build[1] (which used to just be -fwrapv). Regardless, we
> want to get the kernel source to the position where we can meaningfully
> instrument arithmetic wrap-around conditions and catch them when they
> are unexpected, regardless of whether they are signed[2], unsigned[3],
> or pointer[4] types.
>
> Refactor open-coded unsigned wrap-around addition test to use
> check_add_overflow(), retaining the result for later usage (which removes
> the redundant open-coded addition). This paves the way to enabling the
> wrap-around sanitizers in the future.
>
> Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
> Link: https://github.com/KSPP/linux/issues/26 [2]
> Link: https://github.com/KSPP/linux/issues/27 [3]
> Link: https://github.com/KSPP/linux/issues/344 [4]
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Oliver Upton <oliver.upton@linux.dev>
> Cc: James Morse <james.morse@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Zenghui Yu <yuzenghui@huawei.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Reiji Watanabe <reijiw@google.com>
> Cc: Eric Auger <eric.auger@redhat.com>
> Cc: Ricardo Koller <ricarkol@google.com>
> Cc: Raghavendra Rao Ananta <rananta@google.com>
> Cc: Quentin Perret <qperret@google.com>
> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: kvmarm@lists.linux.dev
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> arch/arm64/kvm/vgic/vgic-kvm-device.c | 6 ++++--
> arch/arm64/kvm/vgic/vgic-v2.c | 10 ++++++----
> 2 files changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
> index f48b8dab8b3d..0eec5344d203 100644
> --- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
> +++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
> @@ -18,17 +18,19 @@ int vgic_check_iorange(struct kvm *kvm, phys_addr_t ioaddr,
> phys_addr_t addr, phys_addr_t alignment,
> phys_addr_t size)
> {
> + phys_addr_t sum;
> +
> if (!IS_VGIC_ADDR_UNDEF(ioaddr))
> return -EEXIST;
>
> if (!IS_ALIGNED(addr, alignment) || !IS_ALIGNED(size, alignment))
> return -EINVAL;
>
> - if (addr + size < addr)
> + if (check_add_overflow(addr, size, &sum))
> return -EINVAL;
>
> if (addr & ~kvm_phys_mask(&kvm->arch.mmu) ||
> - (addr + size) > kvm_phys_size(&kvm->arch.mmu))
> + sum > kvm_phys_size(&kvm->arch.mmu))
nit: 'sum' doesn't mean much in this context. Something like 'end'
would be much more descriptive.
> return -E2BIG;
>
> return 0;
> diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
> index 7e9cdb78f7ce..c8d1e965d3b7 100644
> --- a/arch/arm64/kvm/vgic/vgic-v2.c
> +++ b/arch/arm64/kvm/vgic/vgic-v2.c
> @@ -273,14 +273,16 @@ void vgic_v2_enable(struct kvm_vcpu *vcpu)
> /* check for overlapping regions and for regions crossing the end of memory */
> static bool vgic_v2_check_base(gpa_t dist_base, gpa_t cpu_base)
> {
> - if (dist_base + KVM_VGIC_V2_DIST_SIZE < dist_base)
> + gpa_t dist_sum, cpu_sum;
Same here: dist_end, cpu_end.
> +
> + if (check_add_overflow(dist_base, KVM_VGIC_V2_DIST_SIZE, &dist_sum))
> return false;
> - if (cpu_base + KVM_VGIC_V2_CPU_SIZE < cpu_base)
> + if (check_add_overflow(cpu_base, KVM_VGIC_V2_CPU_SIZE, &cpu_sum))
> return false;
>
> - if (dist_base + KVM_VGIC_V2_DIST_SIZE <= cpu_base)
> + if (dist_sum <= cpu_base)
> return true;
> - if (cpu_base + KVM_VGIC_V2_CPU_SIZE <= dist_base)
> + if (cpu_sum <= dist_base)
> return true;
>
> return false;
With these nits addressed, and assuming you intend to merge the whole
series yourself:
Acked-by: Marc Zyngier <maz@kernel.org>
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 57/82] KVM: arm64: vgic-v3: Refactor intentional wrap-around test
2024-01-23 0:27 ` [PATCH 57/82] KVM: arm64: vgic-v3: " Kees Cook
@ 2024-01-23 10:50 ` Marc Zyngier
2024-01-24 15:12 ` Eric Auger
1 sibling, 0 replies; 13+ messages in thread
From: Marc Zyngier @ 2024-01-23 10:50 UTC (permalink / raw)
To: Kees Cook
Cc: linux-hardening, Oliver Upton, James Morse, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Eric Auger,
linux-arm-kernel, kvmarm, Gustavo A. R. Silva, Bill Wendling,
Justin Stitt, linux-kernel
On Tue, 23 Jan 2024 00:27:32 +0000,
Kees Cook <keescook@chromium.org> wrote:
>
> In an effort to separate intentional arithmetic wrap-around from
> unexpected wrap-around, we need to refactor places that depend on this
> kind of math. One of the most common code patterns of this is:
>
> VAR + value < VAR
>
> Notably, this is considered "undefined behavior" for signed and pointer
> types, which the kernel works around by using the -fno-strict-overflow
> option in the build[1] (which used to just be -fwrapv). Regardless, we
> want to get the kernel source to the position where we can meaningfully
> instrument arithmetic wrap-around conditions and catch them when they
> are unexpected, regardless of whether they are signed[2], unsigned[3],
> or pointer[4] types.
>
> Refactor open-coded wrap-around addition test to use add_would_overflow().
> This paves the way to enabling the wrap-around sanitizers in the future.
>
> Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
> Link: https://github.com/KSPP/linux/issues/26 [2]
> Link: https://github.com/KSPP/linux/issues/27 [3]
> Link: https://github.com/KSPP/linux/issues/344 [4]
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Oliver Upton <oliver.upton@linux.dev>
> Cc: James Morse <james.morse@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Zenghui Yu <yuzenghui@huawei.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Eric Auger <eric.auger@redhat.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: kvmarm@lists.linux.dev
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> arch/arm64/kvm/vgic/vgic-mmio-v3.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> index c15ee1df036a..860b774c0c13 100644
> --- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> +++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> @@ -863,7 +863,7 @@ static int vgic_v3_alloc_redist_region(struct kvm *kvm, uint32_t index,
> int ret;
>
> /* cross the end of memory ? */
> - if (base + size < base)
> + if (add_would_overflow(base, size))
> return -EINVAL;
>
> if (list_empty(rd_regions)) {
Acked-by: Marc Zyngier <maz@kernel.org>
M.
--
Without deviation from the norm, progress is not possible.
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* Re: [PATCH 38/82] arm: 3117/1: Refactor intentional wrap-around test
2024-01-23 9:56 ` Mark Rutland
@ 2024-01-23 22:41 ` Kees Cook
0 siblings, 0 replies; 13+ messages in thread
From: Kees Cook @ 2024-01-23 22:41 UTC (permalink / raw)
To: Mark Rutland
Cc: linux-hardening, Russell King, linux-arm-kernel,
Gustavo A. R. Silva, Bill Wendling, Justin Stitt, linux-kernel
On Tue, Jan 23, 2024 at 09:56:22AM +0000, Mark Rutland wrote:
> The commit title is odd here; '3117/1' is the patch tracker name for the last
> patch. The title should probably be:
>
> arm: nwfpe: Refactor intentional wrap-around test
Whoops, yes. I need to teach my prefix-guessing script to drop the ARM
patch tracker numbers...
-Kees
>
> Mark.
>
> On Mon, Jan 22, 2024 at 04:27:13PM -0800, Kees Cook wrote:
> > In an effort to separate intentional arithmetic wrap-around from
> > unexpected wrap-around, we need to refactor places that depend on this
> > kind of math. One of the most common code patterns of this is:
> >
> > VAR + value < VAR
> >
> > Notably, this is considered "undefined behavior" for signed and pointer
> > types, which the kernel works around by using the -fno-strict-overflow
> > option in the build[1] (which used to just be -fwrapv). Regardless, we
> > want to get the kernel source to the position where we can meaningfully
> > instrument arithmetic wrap-around conditions and catch them when they
> > are unexpected, regardless of whether they are signed[2], unsigned[3],
> > or pointer[4] types.
> >
> > Refactor open-coded wrap-around addition test to use add_would_overflow().
> > This paves the way to enabling the wrap-around sanitizers in the future.
> >
> > Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
> > Link: https://github.com/KSPP/linux/issues/26 [2]
> > Link: https://github.com/KSPP/linux/issues/27 [3]
> > Link: https://github.com/KSPP/linux/issues/344 [4]
> > Cc: Russell King <linux@armlinux.org.uk>
> > Cc: linux-arm-kernel@lists.infradead.org
> > Signed-off-by: Kees Cook <keescook@chromium.org>
> > ---
> > arch/arm/nwfpe/softfloat.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/arm/nwfpe/softfloat.c b/arch/arm/nwfpe/softfloat.c
> > index ffa6b438786b..0635b1eda1d3 100644
> > --- a/arch/arm/nwfpe/softfloat.c
> > +++ b/arch/arm/nwfpe/softfloat.c
> > @@ -603,7 +603,7 @@ static floatx80
> > roundBits = zSig0 & roundMask;
> > if ( 0x7FFD <= (bits32) ( zExp - 1 ) ) {
> > if ( ( 0x7FFE < zExp )
> > - || ( ( zExp == 0x7FFE ) && ( zSig0 + roundIncrement < zSig0 ) )
> > + || ( ( zExp == 0x7FFE ) && (add_would_overflow(zSig0, roundIncrement)) )
> > ) {
> > goto overflow;
> > }
> > --
> > 2.34.1
> >
> >
--
Kees Cook
* Re: [PATCH 57/82] KVM: arm64: vgic-v3: Refactor intentional wrap-around test
2024-01-23 0:27 ` [PATCH 57/82] KVM: arm64: vgic-v3: " Kees Cook
2024-01-23 10:50 ` Marc Zyngier
@ 2024-01-24 15:12 ` Eric Auger
1 sibling, 0 replies; 13+ messages in thread
From: Eric Auger @ 2024-01-24 15:12 UTC (permalink / raw)
To: Kees Cook, linux-hardening
Cc: Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, linux-arm-kernel,
kvmarm, Gustavo A. R. Silva, Bill Wendling, Justin Stitt,
linux-kernel
On 1/23/24 01:27, Kees Cook wrote:
> In an effort to separate intentional arithmetic wrap-around from
> unexpected wrap-around, we need to refactor places that depend on this
> kind of math. One of the most common code patterns of this is:
>
> VAR + value < VAR
>
> Notably, this is considered "undefined behavior" for signed and pointer
> types, which the kernel works around by using the -fno-strict-overflow
> option in the build[1] (which used to just be -fwrapv). Regardless, we
> want to get the kernel source to the position where we can meaningfully
> instrument arithmetic wrap-around conditions and catch them when they
> are unexpected, regardless of whether they are signed[2], unsigned[3],
> or pointer[4] types.
>
> Refactor open-coded wrap-around addition test to use add_would_overflow().
> This paves the way to enabling the wrap-around sanitizers in the future.
>
> Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
> Link: https://github.com/KSPP/linux/issues/26 [2]
> Link: https://github.com/KSPP/linux/issues/27 [3]
> Link: https://github.com/KSPP/linux/issues/344 [4]
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Oliver Upton <oliver.upton@linux.dev>
> Cc: James Morse <james.morse@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Zenghui Yu <yuzenghui@huawei.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Eric Auger <eric.auger@redhat.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: kvmarm@lists.linux.dev
> Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Eric
> ---
> arch/arm64/kvm/vgic/vgic-mmio-v3.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> index c15ee1df036a..860b774c0c13 100644
> --- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> +++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> @@ -863,7 +863,7 @@ static int vgic_v3_alloc_redist_region(struct kvm *kvm, uint32_t index,
> int ret;
>
> /* cross the end of memory ? */
> - if (base + size < base)
> + if (add_would_overflow(base, size))
> return -EINVAL;
>
> if (list_empty(rd_regions)) {
* Re: [PATCH 24/82] KVM: arm64: vgic: Refactor intentional wrap-around calculation
2024-01-23 10:49 ` Marc Zyngier
@ 2024-01-24 15:13 ` Eric Auger
0 siblings, 0 replies; 13+ messages in thread
From: Eric Auger @ 2024-01-24 15:13 UTC (permalink / raw)
To: Marc Zyngier, Kees Cook
Cc: linux-hardening, Oliver Upton, James Morse, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Reiji Watanabe,
Ricardo Koller, Raghavendra Rao Ananta, Quentin Perret,
Jean-Philippe Brucker, linux-arm-kernel, kvmarm,
Gustavo A. R. Silva, Bill Wendling, Justin Stitt, linux-kernel
On 1/23/24 11:49, Marc Zyngier wrote:
> On Tue, 23 Jan 2024 00:26:59 +0000,
> Kees Cook <keescook@chromium.org> wrote:
>> In an effort to separate intentional arithmetic wrap-around from
>> unexpected wrap-around, we need to refactor places that depend on this
>> kind of math. One of the most common code patterns of this is:
>>
>> VAR + value < VAR
>>
>> Notably, this is considered "undefined behavior" for signed and pointer
>> types, which the kernel works around by using the -fno-strict-overflow
>> option in the build[1] (which used to just be -fwrapv). Regardless, we
>> want to get the kernel source to the position where we can meaningfully
>> instrument arithmetic wrap-around conditions and catch them when they
>> are unexpected, regardless of whether they are signed[2], unsigned[3],
>> or pointer[4] types.
>>
>> Refactor open-coded unsigned wrap-around addition test to use
>> check_add_overflow(), retaining the result for later usage (which removes
>> the redundant open-coded addition). This paves the way to enabling the
>> wrap-around sanitizers in the future.
>>
>> Link: https://git.kernel.org/linus/68df3755e383e6fecf2354a67b08f92f18536594 [1]
>> Link: https://github.com/KSPP/linux/issues/26 [2]
>> Link: https://github.com/KSPP/linux/issues/27 [3]
>> Link: https://github.com/KSPP/linux/issues/344 [4]
>> Cc: Marc Zyngier <maz@kernel.org>
>> Cc: Oliver Upton <oliver.upton@linux.dev>
>> Cc: James Morse <james.morse@arm.com>
>> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
>> Cc: Zenghui Yu <yuzenghui@huawei.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Reiji Watanabe <reijiw@google.com>
>> Cc: Eric Auger <eric.auger@redhat.com>
>> Cc: Ricardo Koller <ricarkol@google.com>
>> Cc: Raghavendra Rao Ananta <rananta@google.com>
>> Cc: Quentin Perret <qperret@google.com>
>> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: kvmarm@lists.linux.dev
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>> arch/arm64/kvm/vgic/vgic-kvm-device.c | 6 ++++--
>> arch/arm64/kvm/vgic/vgic-v2.c | 10 ++++++----
>> 2 files changed, 10 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
>> index f48b8dab8b3d..0eec5344d203 100644
>> --- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
>> +++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
>> @@ -18,17 +18,19 @@ int vgic_check_iorange(struct kvm *kvm, phys_addr_t ioaddr,
>> phys_addr_t addr, phys_addr_t alignment,
>> phys_addr_t size)
>> {
>> + phys_addr_t sum;
>> +
>> if (!IS_VGIC_ADDR_UNDEF(ioaddr))
>> return -EEXIST;
>>
>> if (!IS_ALIGNED(addr, alignment) || !IS_ALIGNED(size, alignment))
>> return -EINVAL;
>>
>> - if (addr + size < addr)
>> + if (check_add_overflow(addr, size, &sum))
>> return -EINVAL;
>>
>> if (addr & ~kvm_phys_mask(&kvm->arch.mmu) ||
>> - (addr + size) > kvm_phys_size(&kvm->arch.mmu))
>> + sum > kvm_phys_size(&kvm->arch.mmu))
> nit: 'sum' doesn't mean much in this context. Something like 'end'
> would be much more descriptive.
>
>> return -E2BIG;
>>
>> return 0;
>> diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
>> index 7e9cdb78f7ce..c8d1e965d3b7 100644
>> --- a/arch/arm64/kvm/vgic/vgic-v2.c
>> +++ b/arch/arm64/kvm/vgic/vgic-v2.c
>> @@ -273,14 +273,16 @@ void vgic_v2_enable(struct kvm_vcpu *vcpu)
>> /* check for overlapping regions and for regions crossing the end of memory */
>> static bool vgic_v2_check_base(gpa_t dist_base, gpa_t cpu_base)
>> {
>> - if (dist_base + KVM_VGIC_V2_DIST_SIZE < dist_base)
>> + gpa_t dist_sum, cpu_sum;
> Same here: dist_end, cpu_end.
I do agree.
>
>> +
>> + if (check_add_overflow(dist_base, KVM_VGIC_V2_DIST_SIZE, &dist_sum))
>> return false;
>> - if (cpu_base + KVM_VGIC_V2_CPU_SIZE < cpu_base)
>> + if (check_add_overflow(cpu_base, KVM_VGIC_V2_CPU_SIZE, &cpu_sum))
>> return false;
>>
>> - if (dist_base + KVM_VGIC_V2_DIST_SIZE <= cpu_base)
>> + if (dist_sum <= cpu_base)
>> return true;
>> - if (cpu_base + KVM_VGIC_V2_CPU_SIZE <= dist_base)
>> + if (cpu_sum <= dist_base)
>> return true;
>>
>> return false;
> With these nits addressed, and assuming you intend to merge the whole
> series yourself:
>
> Acked-by: Marc Zyngier <maz@kernel.org>
assuming above suggested changes,
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Eric
>
> M.
>
End of thread (newest message: 2024-01-24 16:16 UTC)
Thread overview: 13+ messages
[not found] <20240122235208.work.748-kees@kernel.org>
2024-01-23 0:26 ` [PATCH 11/82] arm64: atomics: lse: Silence intentional wrapping addition Kees Cook
2024-01-23 9:53 ` Mark Rutland
2024-01-23 0:26 ` [PATCH 24/82] KVM: arm64: vgic: Refactor intentional wrap-around calculation Kees Cook
2024-01-23 10:49 ` Marc Zyngier
2024-01-24 15:13 ` Eric Auger
2024-01-23 0:27 ` [PATCH 38/82] arm: 3117/1: Refactor intentional wrap-around test Kees Cook
2024-01-23 9:56 ` Mark Rutland
2024-01-23 22:41 ` Kees Cook
2024-01-23 0:27 ` [PATCH 40/82] arm64: stacktrace: " Kees Cook
2024-01-23 9:58 ` Mark Rutland
2024-01-23 0:27 ` [PATCH 57/82] KVM: arm64: vgic-v3: " Kees Cook
2024-01-23 10:50 ` Marc Zyngier
2024-01-24 15:12 ` Eric Auger