Linux cryptographic layer development
* [PATCH v2] crypto: ecc - Fix carry overflow in vli multiplication
@ 2026-05-13 10:57 Anastasia Tishchenko
  2026-05-13 12:39 ` Qingfang Deng
  2026-05-13 14:31 ` Lukas Wunner
  0 siblings, 2 replies; 4+ messages in thread
From: Anastasia Tishchenko @ 2026-05-13 10:57 UTC (permalink / raw)
  To: Lukas Wunner, Stefan Berger
  Cc: Ignat Korchagin, Herbert Xu, David S. Miller, linux-crypto,
	linux-kernel, Anastasia Tishchenko, stable

The carry flag calculation fails when r01.m_high is saturated
(0xFFFFFFFFFFFFFFFF) and addition of lower bits overflows.

The condition (r01.m_high < product.m_high) doesn't handle the case
where r01.m_high == product.m_high and an additional carry exists
from lower-bit overflow.

When commit 3c4b23901a0c ("crypto: ecdh - Add ECDH software support")
introduced crypto/ecc.c, it split the muladd() function in the
micro-ecc library into separate mul_64_64() and add_128_128() helpers.
It seems the check got lost in translation.

Add proper handling for this boundary by accounting for the carry
from the lower addition.

Fixes: 3c4b23901a0c ("crypto: ecdh - Add ECDH software support")
Signed-off-by: Anastasia Tishchenko <sv3iry@gmail.com>
Cc: stable@vger.kernel.org # v4.8+
---
Changes v1 -> v2:
* Rename add_128_128() to check_add_128_128_overflow() and let it return a bool
  indicating whether an overflow occurred
* Rewrite the explicit if-else statement as constant-time bitwise arithmetic
  to avoid a timing side-channel

Link to v1:
https://lore.kernel.org/r/20260508114844.29694-1-sv3iry@gmail.com/
---
 crypto/ecc.c | 31 ++++++++++++++++++++-----------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/crypto/ecc.c b/crypto/ecc.c
index 43b0def3a225..6eb4d97a5f0d 100644
--- a/crypto/ecc.c
+++ b/crypto/ecc.c
@@ -393,14 +393,26 @@ static uint128_t mul_64_64(u64 left, u64 right)
 	return result;
 }
 
-static uint128_t add_128_128(uint128_t a, uint128_t b)
+/* Calculate addition with overflow checking. Returns true on wrap-around,
+ * false otherwise.
+ */
+static bool check_add_128_128_overflow(uint128_t *result, uint128_t a,
+				       uint128_t b)
 {
-	uint128_t result;
+	bool carry;
 
-	result.m_low = a.m_low + b.m_low;
-	result.m_high = a.m_high + b.m_high + (result.m_low < a.m_low);
+	result->m_low = a.m_low + b.m_low;
+	carry = (result->m_low < a.m_low);
 
-	return result;
+	result->m_high = a.m_high + b.m_high + carry;
+
+	/* Using constant-time bitwise arithmetic to prevent timing
+	 * side-channels.
+	 */
+	carry = (result->m_high < a.m_high) |
+		((result->m_high == a.m_high) & carry);
+
+	return carry;
 }
 
 static void vli_mult(u64 *result, const u64 *left, const u64 *right,
@@ -425,9 +437,7 @@ static void vli_mult(u64 *result, const u64 *left, const u64 *right,
 			uint128_t product;
 
 			product = mul_64_64(left[i], right[k - i]);
-
-			r01 = add_128_128(r01, product);
-			r2 += (r01.m_high < product.m_high);
+			r2 += check_add_128_128_overflow(&r01, r01, product);
 		}
 
 		result[k] = r01.m_low;
@@ -450,7 +460,7 @@ static void vli_umult(u64 *result, const u64 *left, u32 right,
 		uint128_t product;
 
 		product = mul_64_64(left[k], right);
-		r01 = add_128_128(r01, product);
+		check_add_128_128_overflow(&r01, r01, product);
 		/* no carry */
 		result[k] = r01.m_low;
 		r01.m_low = r01.m_high;
@@ -487,8 +497,7 @@ static void vli_square(u64 *result, const u64 *left, unsigned int ndigits)
 				product.m_low <<= 1;
 			}
 
-			r01 = add_128_128(r01, product);
-			r2 += (r01.m_high < product.m_high);
+			r2 += check_add_128_128_overflow(&r01, r01, product);
 		}
 
 		result[k] = r01.m_low;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH v2] crypto: ecc - Fix carry overflow in vli multiplication
  2026-05-13 10:57 [PATCH v2] crypto: ecc - Fix carry overflow in vli multiplication Anastasia Tishchenko
@ 2026-05-13 12:39 ` Qingfang Deng
  2026-05-13 14:09   ` Lukas Wunner
  2026-05-13 14:31 ` Lukas Wunner
  1 sibling, 1 reply; 4+ messages in thread
From: Qingfang Deng @ 2026-05-13 12:39 UTC (permalink / raw)
  To: Anastasia Tishchenko
  Cc: Lukas Wunner, Stefan Berger, Ignat Korchagin, Herbert Xu,
	David S. Miller, linux-crypto, linux-kernel, stable

On Wed, 13 May 2026 at 13:57:40 +0300, Anastasia Tishchenko wrote:
> diff --git a/crypto/ecc.c b/crypto/ecc.c
> index 43b0def3a225..6eb4d97a5f0d 100644
> --- a/crypto/ecc.c
> +++ b/crypto/ecc.c
> @@ -393,14 +393,26 @@ static uint128_t mul_64_64(u64 left, u64 right)
>  	return result;
>  }
>  
> -static uint128_t add_128_128(uint128_t a, uint128_t b)
> +/* Calculate addition with overflow checking. Returns true on wrap-around,
> + * false otherwise.
> + */
> +static bool check_add_128_128_overflow(uint128_t *result, uint128_t a,
> +				       uint128_t b)
>  {
> -	uint128_t result;
> +	bool carry;
>  
> -	result.m_low = a.m_low + b.m_low;
> -	result.m_high = a.m_high + b.m_high + (result.m_low < a.m_low);
> +	result->m_low = a.m_low + b.m_low;
> +	carry = (result->m_low < a.m_low);
>  
> -	return result;
> +	result->m_high = a.m_high + b.m_high + carry;

If CONFIG_ARCH_SUPPORTS_INT128 is defined, you can convert them to
"unsigned __int128" as done in mul_64_64(), and use check_add_overflow()
to get the carry.

> +
> +	/* Using constant-time bitwise arithmetic to prevent timing
> +	 * side-channels.
> +	 */
> +	carry = (result->m_high < a.m_high) |
> +		((result->m_high == a.m_high) & carry);
> +
> +	return carry;
>  }
>  

Regards,
Qingfang


* Re: [PATCH v2] crypto: ecc - Fix carry overflow in vli multiplication
  2026-05-13 12:39 ` Qingfang Deng
@ 2026-05-13 14:09   ` Lukas Wunner
  0 siblings, 0 replies; 4+ messages in thread
From: Lukas Wunner @ 2026-05-13 14:09 UTC (permalink / raw)
  To: Qingfang Deng
  Cc: Anastasia Tishchenko, Stefan Berger, Ignat Korchagin, Herbert Xu,
	David S. Miller, linux-crypto, linux-kernel, stable

On Wed, May 13, 2026 at 08:39:48PM +0800, Qingfang Deng wrote:
> On Wed, 13 May 2026 at 13:57:40 +0300, Anastasia Tishchenko wrote:
> > +++ b/crypto/ecc.c
> > @@ -393,14 +393,26 @@ static uint128_t mul_64_64(u64 left, u64 right)
> >  	return result;
> >  }
> >  
> > -static uint128_t add_128_128(uint128_t a, uint128_t b)
> > +/* Calculate addition with overflow checking. Returns true on wrap-around,
> > + * false otherwise.
> > + */
> > +static bool check_add_128_128_overflow(uint128_t *result, uint128_t a,
> > +				       uint128_t b)
> >  {
> > -	uint128_t result;
> > +	bool carry;
> >  
> > -	result.m_low = a.m_low + b.m_low;
> > -	result.m_high = a.m_high + b.m_high + (result.m_low < a.m_low);
> > +	result->m_low = a.m_low + b.m_low;
> > +	carry = (result->m_low < a.m_low);
> >  
> > -	return result;
> > +	result->m_high = a.m_high + b.m_high + carry;
> 
> If CONFIG_ARCH_SUPPORTS_INT128 is defined, you can convert them to
> "unsigned __int128" as done in mul_64_64(), and use check_add_overflow()
> to get the carry.

Okay, but that would be a separate feature on top of this fix.
This fix applies to *all* architectures whereas using native
128-bit integers would only solve the problem on arches which
support such integers.

ARCH_SUPPORTS_INT128 actually depends on CC_HAS_INT128 on all arches
that support it (arm64, loongarch, riscv, s390, x86).  Interestingly,
s390 additionally requires CC_IS_CLANG.  Commit fbac266f095d ("s390:
select ARCH_SUPPORTS_INT128") explains that "gcc generates inefficient
code, which may lead to stack overflows" when handling 128-bit integers.
It goes on to say: "The gcc generated functions have 6kb stack frames,
compared to only 1kb of the code generated with clang."

That reminds me of the high stack usage issue we're seeing when compiling
crypto/ecc.c on arm 32-bit:

https://lore.kernel.org/r/7e3d64a53efb28740b32d1f934e78c10086208ab.1778073318.git.lukas@wunner.de/

I'm fine with adding native 128-bit usage to check_add_128_128_overflow()
on top of this fix if somebody wants to submit a patch.  But I suspect
it might only actually be useful on s390 with clang.

Thanks,

Lukas


* Re: [PATCH v2] crypto: ecc - Fix carry overflow in vli multiplication
  2026-05-13 10:57 [PATCH v2] crypto: ecc - Fix carry overflow in vli multiplication Anastasia Tishchenko
  2026-05-13 12:39 ` Qingfang Deng
@ 2026-05-13 14:31 ` Lukas Wunner
  1 sibling, 0 replies; 4+ messages in thread
From: Lukas Wunner @ 2026-05-13 14:31 UTC (permalink / raw)
  To: Anastasia Tishchenko
  Cc: Stefan Berger, Ignat Korchagin, Herbert Xu, David S. Miller,
	linux-crypto, linux-kernel, stable

On Wed, May 13, 2026 at 01:57:40PM +0300, Anastasia Tishchenko wrote:
> The carry flag calculation fails when r01.m_high is saturated
> (0xFFFFFFFFFFFFFFFF) and addition of lower bits overflows.
> 
> The condition (r01.m_high < product.m_high) doesn't handle the case
> where r01.m_high == product.m_high and an additional carry exists
> from lower-bit overflow.
> 
> When commit 3c4b23901a0c ("crypto: ecdh - Add ECDH software support")
> introduced crypto/ecc.c, it split the muladd() function in the
> micro-ecc library into separate mul_64_64() and add_128_128() helpers.
> It seems the check got lost in translation.
> 
> Add proper handling for this boundary by accounting for the carry
> from the lower addition.
> 
> Fixes: 3c4b23901a0c ("crypto: ecdh - Add ECDH software support")
> Signed-off-by: Anastasia Tishchenko <sv3iry@gmail.com>
> Cc: stable@vger.kernel.org # v4.8+

Reviewed-by: Lukas Wunner <lukas@wunner.de>

