* Re: [PATCH] crypto : ecc - Fix carry overflow in vli multiplication
2026-05-08 11:48 [PATCH] crypto : ecc - Fix carry overflow in vli multiplication Anastasia Tishchenko
@ 2026-05-08 14:36 ` Stefan Berger
2026-05-11 5:30 ` Lukas Wunner
2026-05-12 13:48 ` Lukas Wunner
2 siblings, 0 replies; 6+ messages in thread
From: Stefan Berger @ 2026-05-08 14:36 UTC (permalink / raw)
To: Anastasia Tishchenko, Lukas Wunner, Ignat Korchagin, Herbert Xu,
David S . Miller
Cc: linux-crypto, linux-kernel
On 5/8/26 7:48 AM, Anastasia Tishchenko wrote:
> The carry flag calculation fails when r01.m_high is saturated
> (0xFFFFFFFFFFFFFFFF) and addition of lower bits overflows.
>
> The condition (r01.m_high < product.m_high) doesn't handle the case
> where r01.m_high == product.m_high and an additional carry exists
> from lower-bit overflow.
>
> Add proper handling for this boundary by accounting for the carry
> from the lower addition.
>
> This issue was discovered during formal verification of ECC functions.
Thanks!
>
> Signed-off-by: Anastasia Tishchenko <sv3iry@gmail.com>
> ---
> crypto/ecc.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/crypto/ecc.c b/crypto/ecc.c
> index 43b0def3a225..dfe96471407c 100644
> --- a/crypto/ecc.c
> +++ b/crypto/ecc.c
> @@ -427,7 +427,10 @@ static void vli_mult(u64 *result, const u64 *left, const u64 *right,
> product = mul_64_64(left[i], right[k - i]);
>
> r01 = add_128_128(r01, product);
> - r2 += (r01.m_high < product.m_high);
Maybe the following, or something like lt_128(r01, product), would be a
more readable replacement for the open-coded 'r01 < product' comparison:
	if (cmp_128(r01, product) < 0)
		r2++;

/* Compare two uint128_t; return -1 if a < b, 0 if a == b, 1 otherwise */
static int cmp_128(uint128_t a, uint128_t b)
{
	if (a.m_high < b.m_high)
		return -1;
	if (a.m_high > b.m_high)
		return 1;
	if (a.m_low < b.m_low)
		return -1;
	if (a.m_low > b.m_low)
		return 1;
	return 0;
}
	/* r01 < product */
	if (lt_128(r01, product))
		r2++;

/* Check a < b; return 1 if a < b, 0 otherwise */
static int lt_128(uint128_t a, uint128_t b)
{
	if (a.m_high < b.m_high)
		return 1;
	if (a.m_high > b.m_high)
		return 0;
	if (a.m_low < b.m_low)
		return 1;
	return 0;
}
> + if (r01.m_high != product.m_high)
> + r2 += (r01.m_high < product.m_high);
> + else
> + r2 += (r01.m_low < product.m_low);
> }
>
> result[k] = r01.m_low;
> @@ -488,7 +491,10 @@ static void vli_square(u64 *result, const u64 *left, unsigned int ndigits)
> }
>
> r01 = add_128_128(r01, product);
> - r2 += (r01.m_high < product.m_high);
> + if (r01.m_high != product.m_high)
> + r2 += (r01.m_high < product.m_high);
> + else
> + r2 += (r01.m_low < product.m_low);
> }
>
> result[k] = r01.m_low;
^ permalink raw reply [flat|nested] 6+ messages in thread

* Re: [PATCH] crypto : ecc - Fix carry overflow in vli multiplication
2026-05-08 11:48 [PATCH] crypto : ecc - Fix carry overflow in vli multiplication Anastasia Tishchenko
2026-05-08 14:36 ` Stefan Berger
@ 2026-05-11 5:30 ` Lukas Wunner
2026-05-12 13:48 ` Lukas Wunner
2 siblings, 0 replies; 6+ messages in thread
From: Lukas Wunner @ 2026-05-11 5:30 UTC (permalink / raw)
To: Anastasia Tishchenko
Cc: Ignat Korchagin, Stefan Berger, Herbert Xu, David S . Miller,
linux-crypto, linux-kernel
On Fri, May 08, 2026 at 02:48:44PM +0300, Anastasia Tishchenko wrote:
> The carry flag calculation fails when r01.m_high is saturated
> (0xFFFFFFFFFFFFFFFF) and addition of lower bits overflows.
>
> The condition (r01.m_high < product.m_high) doesn't handle the case
> where r01.m_high == product.m_high and an additional carry exists
> from lower-bit overflow.
>
> Add proper handling for this boundary by accounting for the carry
> from the lower addition.
[...]
> +++ b/crypto/ecc.c
> @@ -427,7 +427,10 @@ static void vli_mult(u64 *result, const u64 *left, const u64 *right,
> product = mul_64_64(left[i], right[k - i]);
>
> r01 = add_128_128(r01, product);
> - r2 += (r01.m_high < product.m_high);
> + if (r01.m_high != product.m_high)
> + r2 += (r01.m_high < product.m_high);
> + else
> + r2 += (r01.m_low < product.m_low);
> }
>
> result[k] = r01.m_low;
Thanks for spotting this.
crypto/ecc.c is derived from Ken MacKay's micro-ecc library and
that library has always had the check that you're adding here:
https://github.com/kmackay/micro-ecc/blob/master/uECC.c#L403
When commit 3c4b23901a0c ("crypto: ecdh - Add ECDH software support")
introduced crypto/ecc.c, it split the muladd() function in the
micro-ecc library into separate mul_64_64() and add_128_128() helpers.
The function names suggest that this was probably done to allow
moving them to include/linux/math64.h and/or lib/math/div64.c
in case they're needed elsewhere in the kernel later.
It seems the check got lost in translation.
However, while your patch looks correct to me, I think the overflow
check isn't very obvious or readable. And I don't like how it leaks
outside the add_128_128() helper.
The kernel gained a check_add_overflow() helper relatively recently
in include/linux/overflow.h. What I'd prefer is if you could rename
add_128_128() to check_add_128_128_overflow() and let it return a bool
indicating whether an overflow occurred. Then r2 is simply incremented
if the return value is true. Optionally you could add a third parameter
for the result (like check_add_overflow() does), but that's not strictly
needed for the callers in crypto/ecc.c and would merely be in preparation
for reuse of the function by other code in the kernel.
Does that make sense?
Please add these tags if/when respinning:
Fixes: 3c4b23901a0c ("crypto: ecdh - Add ECDH software support")
Cc: stable@vger.kernel.org # v4.8+
Personally I order tags according to Bjorn's preference:
https://lore.kernel.org/r/20171026223701.GA25649@bhelgaas-glaptop.roam.corp.google.com/
It would also be good if you could include the backstory I've outlined
above in the commit message so that it's recorded in the git history
and doesn't have to be dug out from mailing list archives.
Thanks!
Lukas
* Re: [PATCH] crypto : ecc - Fix carry overflow in vli multiplication
2026-05-08 11:48 [PATCH] crypto : ecc - Fix carry overflow in vli multiplication Anastasia Tishchenko
2026-05-08 14:36 ` Stefan Berger
2026-05-11 5:30 ` Lukas Wunner
@ 2026-05-12 13:48 ` Lukas Wunner
2026-05-12 15:27 ` David Laight
[not found] ` <CAMtNSrhkfsGL04DtOb9M9fijHK=Xy0D-pBahiCqV+zPuJyRSLw@mail.gmail.com>
2 siblings, 2 replies; 6+ messages in thread
From: Lukas Wunner @ 2026-05-12 13:48 UTC (permalink / raw)
To: Anastasia Tishchenko
Cc: Ignat Korchagin, Stefan Berger, Herbert Xu, David S . Miller,
linux-crypto, linux-kernel
On Fri, May 08, 2026 at 02:48:44PM +0300, Anastasia Tishchenko wrote:
> The carry flag calculation fails when r01.m_high is saturated
> (0xFFFFFFFFFFFFFFFF) and addition of lower bits overflows.
>
> The condition (r01.m_high < product.m_high) doesn't handle the case
> where r01.m_high == product.m_high and an additional carry exists
> from lower-bit overflow.
>
> Add proper handling for this boundary by accounting for the carry
> from the lower addition.
[...]
> +++ b/crypto/ecc.c
> @@ -427,7 +427,10 @@ static void vli_mult(u64 *result, const u64 *left, const u64 *right,
> product = mul_64_64(left[i], right[k - i]);
>
> r01 = add_128_128(r01, product);
> - r2 += (r01.m_high < product.m_high);
> + if (r01.m_high != product.m_high)
> + r2 += (r01.m_high < product.m_high);
> + else
> + r2 += (r01.m_low < product.m_low);
> }
>
> result[k] = r01.m_low;
ICYMI, sashiko's AI-generated review alleges that the if-else condition
may introduce a timing side channel, as opposed to branchless arithmetic:
https://sashiko.dev/#/patchset/20260508114844.29694-1-sv3iry%40gmail.com
You may want to address this if/when respinning your patch. If you do,
a code comment is probably merited to explain this subtlety.
Thanks,
Lukas
* Re: [PATCH] crypto : ecc - Fix carry overflow in vli multiplication
2026-05-12 13:48 ` Lukas Wunner
@ 2026-05-12 15:27 ` David Laight
[not found] ` <CAMtNSrhkfsGL04DtOb9M9fijHK=Xy0D-pBahiCqV+zPuJyRSLw@mail.gmail.com>
1 sibling, 0 replies; 6+ messages in thread
From: David Laight @ 2026-05-12 15:27 UTC (permalink / raw)
To: Lukas Wunner
Cc: Anastasia Tishchenko, Ignat Korchagin, Stefan Berger, Herbert Xu,
David S . Miller, linux-crypto, linux-kernel
On Tue, 12 May 2026 15:48:11 +0200
Lukas Wunner <lukas@wunner.de> wrote:
> On Fri, May 08, 2026 at 02:48:44PM +0300, Anastasia Tishchenko wrote:
> > The carry flag calculation fails when r01.m_high is saturated
> > (0xFFFFFFFFFFFFFFFF) and addition of lower bits overflows.
> >
> > The condition (r01.m_high < product.m_high) doesn't handle the case
> > where r01.m_high == product.m_high and an additional carry exists
> > from lower-bit overflow.
> >
> > Add proper handling for this boundary by accounting for the carry
> > from the lower addition.
> [...]
> > +++ b/crypto/ecc.c
> > @@ -427,7 +427,10 @@ static void vli_mult(u64 *result, const u64 *left, const u64 *right,
> > product = mul_64_64(left[i], right[k - i]);
> >
> > r01 = add_128_128(r01, product);
> > - r2 += (r01.m_high < product.m_high);
> > + if (r01.m_high != product.m_high)
> > + r2 += (r01.m_high < product.m_high);
> > + else
> > + r2 += (r01.m_low < product.m_low);
> > }
> >
> > result[k] = r01.m_low;
>
> ICYMI, sashiko's AI-generated review alleges that the if-else condition
> may cause a timing side channel vis-à-vis binary arithmetic:
>
> https://sashiko.dev/#/patchset/20260508114844.29694-1-sv3iry%40gmail.com
>
> You may want to address this if/when respinning your patch. If you do,
> a code comment is probably merited to explain this subtlety.
Something like:
r2 += (r01.m_high < product.m_high);
r2 += (r01.m_high == product.m_high) & (r01.m_low < product.m_low);
would be constant time - but the compiler is very unlikely to generate
the object code you want on all (any?) architectures.
On x86 you want something like (pardon the pidgin assembler):
xor %rax,%rax
cmp r01.m_high, product.m_high
setc %al
lea r2, (r2, %rax)
sete %al
cmp r01.m_low, product.m_low
cmovnc %al, %ah
add r2, %rax
but I bet (two beers) you can't get it.
-- David
>
> Thanks,
>
> Lukas
>
[parent not found: <CAMtNSrhkfsGL04DtOb9M9fijHK=Xy0D-pBahiCqV+zPuJyRSLw@mail.gmail.com>]
* Re: [PATCH] crypto : ecc - Fix carry overflow in vli multiplication
[not found] ` <CAMtNSrhkfsGL04DtOb9M9fijHK=Xy0D-pBahiCqV+zPuJyRSLw@mail.gmail.com>
@ 2026-05-13 8:32 ` Lukas Wunner
0 siblings, 0 replies; 6+ messages in thread
From: Lukas Wunner @ 2026-05-13 8:32 UTC (permalink / raw)
To: Anastasia
Cc: Ignat Korchagin, Stefan Berger, Herbert Xu, David S . Miller,
linux-crypto, linux-kernel
On Tue, May 12, 2026 at 06:20:14PM +0300, Anastasia wrote:
> However, I have a few questions regarding the proposed
> check_add_128_128_overflow():
>
> Should this function return u64 (carry flag) instead of bool to be
> consistent with existing overflow-checking functions like vli_add()?
I think if the return value can only be 1 or 0 (carry or no carry),
then bool is clearer. If the carry can be > 1 then u64 would be
merited.
I think it's confusing that vli_add() returns u64, but this was just
copy-pasted from the micro-ecc library, whose uECC_vli_add() returns
uECC_word_t:
https://github.com/kmackay/micro-ecc/blob/master/uECC.c#L333
> Regarding argument order: if the function returns a result, shouldn't it be
> the first argument rather than the third (like vli_add())?
I think by convention, the result or destination is the first argument,
as e.g. in memcpy(). I don't know why check_add_overflow() doesn't
adhere to that convention but suspect there's probably no good reason.
> And replace:
> r01 = add_128_128(r01, product);
> r2 += (r01.m_high < product.m_high);
> with:
> r2 += check_add_128_128_overflow(&r01, r01, product);
> in functions vli_mult, vli_umult and vli_square
LGTM.
BTW, a small nit: the commit subject contains a superfluous blank
between "crypto" and the colon.
Thanks,
Lukas