* [PATCH v2 10/29] crypto: powerpc/p10-aes-gcm - simplify handling of linear associated data
From: Eric Biggers @ 2024-12-30 0:13 UTC (permalink / raw)
To: linux-crypto
Cc: netdev, linux-kernel, Christophe Leroy, Danny Tsen,
Michael Ellerman, Naveen N Rao, Nicholas Piggin, linuxppc-dev
From: Eric Biggers <ebiggers@google.com>
p10_aes_gcm_crypt() is abusing the scatter_walk API to get the virtual
address for the first source scatterlist element. But this code is only
built for PPC64 which is a !HIGHMEM platform, and it can read past a
page boundary from the address returned by scatterwalk_map() which means
it already assumes the address is from the kernel's direct map. Thus,
just use sg_virt() instead to get the same result in a simpler way.
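The control flow this patch arrives at can be sketched in plain C with toy types (an assumption for illustration: `toy_sg` and `toy_sg_virt()` stand in for the kernel's `struct scatterlist` and `sg_virt()`; the real code lives in aes-gcm-p10-glue.c):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for struct scatterlist on a !HIGHMEM platform, where
 * every element's buffer already has a direct-map virtual address. */
struct toy_sg {
	unsigned char *buf;   /* lowmem virtual address of this element */
	unsigned int length;  /* bytes available in this element */
	struct toy_sg *next;
};

static unsigned char *toy_sg_virt(struct toy_sg *sg)
{
	return sg->buf;  /* no kmap/unmap dance needed without HIGHMEM */
}

/* Mirror of the patched branch: if the first element already covers the
 * whole AAD, use it in place; otherwise linearize into a heap buffer.
 * Returns the linear pointer; *allocp is non-NULL iff a copy was made. */
static unsigned char *get_linear_assoc(struct toy_sg *src,
				       unsigned int assoclen,
				       unsigned char **allocp)
{
	*allocp = NULL;
	if (src->length >= assoclen && src->length)
		return toy_sg_virt(src);  /* already linear */

	unsigned char *buf = malloc(assoclen);
	unsigned int copied = 0;

	for (struct toy_sg *sg = src; sg && copied < assoclen; sg = sg->next) {
		unsigned int n = sg->length;

		if (n > assoclen - copied)
			n = assoclen - copied;
		memcpy(buf + copied, sg->buf, n);
		copied += n;
	}
	*allocp = buf;
	return buf;
}
```

The point of the patch is visible in the first branch: no walk state, no map, no matching unmap, just the element's virtual address.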
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Danny Tsen <dtsen@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
This patch is part of a long series touching many files, so I have
limited the Cc list on the full series. If you want the full series and
did not receive it, please retrieve it from lore.kernel.org.
arch/powerpc/crypto/aes-gcm-p10-glue.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/crypto/aes-gcm-p10-glue.c b/arch/powerpc/crypto/aes-gcm-p10-glue.c
index f37b3d13fc53..2862c3cf8e41 100644
--- a/arch/powerpc/crypto/aes-gcm-p10-glue.c
+++ b/arch/powerpc/crypto/aes-gcm-p10-glue.c
@@ -212,11 +212,10 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
struct p10_aes_gcm_ctx *ctx = crypto_tfm_ctx(tfm);
u8 databuf[sizeof(struct gcm_ctx) + PPC_ALIGN];
struct gcm_ctx *gctx = PTR_ALIGN((void *)databuf, PPC_ALIGN);
u8 hashbuf[sizeof(struct Hash_ctx) + PPC_ALIGN];
struct Hash_ctx *hash = PTR_ALIGN((void *)hashbuf, PPC_ALIGN);
- struct scatter_walk assoc_sg_walk;
struct skcipher_walk walk;
u8 *assocmem = NULL;
u8 *assoc;
unsigned int cryptlen = req->cryptlen;
unsigned char ivbuf[AES_BLOCK_SIZE+PPC_ALIGN];
@@ -232,12 +231,11 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
memset(ivbuf, 0, sizeof(ivbuf));
memcpy(iv, riv, GCM_IV_SIZE);
/* Linearize assoc, if not already linear */
if (req->src->length >= assoclen && req->src->length) {
- scatterwalk_start(&assoc_sg_walk, req->src);
- assoc = scatterwalk_map(&assoc_sg_walk);
+ assoc = sg_virt(req->src); /* ppc64 is !HIGHMEM */
} else {
gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
GFP_KERNEL : GFP_ATOMIC;
/* assoc can be any length, so must be on heap */
@@ -251,13 +249,11 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
vsx_begin();
gcmp10_init(gctx, iv, (unsigned char *) &ctx->enc_key, hash, assoc, assoclen);
vsx_end();
- if (!assocmem)
- scatterwalk_unmap(assoc);
- else
+ if (assocmem)
kfree(assocmem);
if (enc)
ret = skcipher_walk_aead_encrypt(&walk, req, false);
else
--
2.47.1
* [PATCH v2 20/29] crypto: nx - use the new scatterwalk functions
From: Eric Biggers @ 2024-12-30 0:14 UTC (permalink / raw)
To: linux-crypto
Cc: netdev, linux-kernel, Christophe Leroy, Madhavan Srinivasan,
Michael Ellerman, Naveen N Rao, Nicholas Piggin, linuxppc-dev
From: Eric Biggers <ebiggers@google.com>
- In nx_walk_and_build(), use scatterwalk_start_at_pos() instead of a
more complex way to achieve the same result.
- Also in nx_walk_and_build(), use the new function scatterwalk_next(),
which consolidates scatterwalk_clamp() and scatterwalk_map(), and
scatterwalk_done_src(), which consolidates scatterwalk_unmap(),
scatterwalk_advance(), and scatterwalk_done(). Remove unnecessary
code that seemed to be intended to advance to the next sg entry, which
is already handled by the scatterwalk functions.
Note that nx_walk_and_build() does not actually read or write the
mapped virtual address, and thus it is misusing the scatter_walk API.
It really should just access the scatterlist directly. This patch
does not try to address this existing issue.
- In nx_gca(), use memcpy_from_sglist() instead of a more complex way to
achieve the same result.
- In various functions, replace calls to scatterwalk_map_and_copy() with
memcpy_from_sglist() or memcpy_to_sglist() as appropriate. Note that
this eliminates the confusing 'out' argument (which this driver had
tried to work around by defining the missing constants for it...)
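The consolidation the first two bullets describe can be sketched with toy types (an assumption for illustration: the field and helper names below are invented stand-ins; the real helpers are declared in <crypto/scatterwalk.h>):

```c
#include <assert.h>
#include <stddef.h>

/* Toy walk state: the real struct scatter_walk similarly tracks a
 * current sg entry plus an offset within it. */
struct toy_sg { const char *buf; unsigned int length; struct toy_sg *next; };
struct toy_walk { struct toy_sg *sg; unsigned int offset; };

/* What scatterwalk_start_at_pos() boils down to: skip whole entries
 * until 'pos' lands inside one -- replacing the open-coded
 * for(;;)/scatterwalk_advance() loop the patch removes. */
static void toy_start_at_pos(struct toy_walk *w, struct toy_sg *sg,
			     unsigned int pos)
{
	while (pos >= sg->length) {
		pos -= sg->length;
		sg = sg->next;
	}
	w->sg = sg;
	w->offset = pos;
}

/* What scatterwalk_next() boils down to on !HIGHMEM: clamp the request
 * to the current entry (the old scatterwalk_clamp()) and hand back a
 * pointer into it (the old scatterwalk_map()). */
static const char *toy_next(struct toy_walk *w, unsigned int total,
			    unsigned int *n)
{
	unsigned int avail = w->sg->length - w->offset;

	*n = total < avail ? total : avail;
	return w->sg->buf + w->offset;
}

/* What scatterwalk_done_src() boils down to: advance past the consumed
 * bytes, stepping to the next entry when this one is exhausted -- the
 * old unmap/advance/done triple in one call. */
static void toy_done_src(struct toy_walk *w, unsigned int n)
{
	w->offset += n;
	if (w->offset == w->sg->length) {
		w->sg = w->sg->next;
		w->offset = 0;
	}
}
```

With this shape, the caller's loop shrinks to next/build/done_src per iteration, which is exactly the structure the nx.c hunk below ends up with.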
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
This patch is part of a long series touching many files, so I have
limited the Cc list on the full series. If you want the full series and
did not receive it, please retrieve it from lore.kernel.org.
drivers/crypto/nx/nx-aes-ccm.c | 16 ++++++----------
drivers/crypto/nx/nx-aes-gcm.c | 17 ++++++-----------
drivers/crypto/nx/nx.c | 31 +++++--------------------------
drivers/crypto/nx/nx.h | 3 ---
4 files changed, 17 insertions(+), 50 deletions(-)
diff --git a/drivers/crypto/nx/nx-aes-ccm.c b/drivers/crypto/nx/nx-aes-ccm.c
index c843f4c6f684..56a0b3a67c33 100644
--- a/drivers/crypto/nx/nx-aes-ccm.c
+++ b/drivers/crypto/nx/nx-aes-ccm.c
@@ -215,17 +215,15 @@ static int generate_pat(u8 *iv,
*/
if (b1) {
memset(b1, 0, 16);
if (assoclen <= 65280) {
*(u16 *)b1 = assoclen;
- scatterwalk_map_and_copy(b1 + 2, req->src, 0,
- iauth_len, SCATTERWALK_FROM_SG);
+ memcpy_from_sglist(b1 + 2, req->src, 0, iauth_len);
} else {
*(u16 *)b1 = (u16)(0xfffe);
*(u32 *)&b1[2] = assoclen;
- scatterwalk_map_and_copy(b1 + 6, req->src, 0,
- iauth_len, SCATTERWALK_FROM_SG);
+ memcpy_from_sglist(b1 + 6, req->src, 0, iauth_len);
}
}
/* now copy any remaining AAD to scatterlist and call nx... */
if (!assoclen) {
@@ -339,13 +337,12 @@ static int ccm_nx_decrypt(struct aead_request *req,
spin_lock_irqsave(&nx_ctx->lock, irq_flags);
nbytes -= authsize;
/* copy out the auth tag to compare with later */
- scatterwalk_map_and_copy(priv->oauth_tag,
- req->src, nbytes + req->assoclen, authsize,
- SCATTERWALK_FROM_SG);
+ memcpy_from_sglist(priv->oauth_tag, req->src, nbytes + req->assoclen,
+ authsize);
rc = generate_pat(iv, req, nx_ctx, authsize, nbytes, assoclen,
csbcpb->cpb.aes_ccm.in_pat_or_b0);
if (rc)
goto out;
@@ -463,13 +460,12 @@ static int ccm_nx_encrypt(struct aead_request *req,
processed += to_process;
} while (processed < nbytes);
/* copy out the auth tag */
- scatterwalk_map_and_copy(csbcpb->cpb.aes_ccm.out_pat_or_mac,
- req->dst, nbytes + req->assoclen, authsize,
- SCATTERWALK_TO_SG);
+ memcpy_to_sglist(req->dst, nbytes + req->assoclen,
+ csbcpb->cpb.aes_ccm.out_pat_or_mac, authsize);
out:
spin_unlock_irqrestore(&nx_ctx->lock, irq_flags);
return rc;
}
diff --git a/drivers/crypto/nx/nx-aes-gcm.c b/drivers/crypto/nx/nx-aes-gcm.c
index 4a796318b430..b7fe2de96d96 100644
--- a/drivers/crypto/nx/nx-aes-gcm.c
+++ b/drivers/crypto/nx/nx-aes-gcm.c
@@ -101,20 +101,17 @@ static int nx_gca(struct nx_crypto_ctx *nx_ctx,
u8 *out,
unsigned int assoclen)
{
int rc;
struct nx_csbcpb *csbcpb_aead = nx_ctx->csbcpb_aead;
- struct scatter_walk walk;
struct nx_sg *nx_sg = nx_ctx->in_sg;
unsigned int nbytes = assoclen;
unsigned int processed = 0, to_process;
unsigned int max_sg_len;
if (nbytes <= AES_BLOCK_SIZE) {
- scatterwalk_start(&walk, req->src);
- scatterwalk_copychunks(out, &walk, nbytes, SCATTERWALK_FROM_SG);
- scatterwalk_done(&walk, SCATTERWALK_FROM_SG, 0);
+ memcpy_from_sglist(out, req->src, 0, nbytes);
return 0;
}
NX_CPB_FDM(csbcpb_aead) &= ~NX_FDM_CONTINUATION;
@@ -389,23 +386,21 @@ static int gcm_aes_nx_crypt(struct aead_request *req, int enc,
} while (processed < nbytes);
mac:
if (enc) {
/* copy out the auth tag */
- scatterwalk_map_and_copy(
- csbcpb->cpb.aes_gcm.out_pat_or_mac,
+ memcpy_to_sglist(
req->dst, req->assoclen + nbytes,
- crypto_aead_authsize(crypto_aead_reqtfm(req)),
- SCATTERWALK_TO_SG);
+ csbcpb->cpb.aes_gcm.out_pat_or_mac,
+ crypto_aead_authsize(crypto_aead_reqtfm(req)));
} else {
u8 *itag = nx_ctx->priv.gcm.iauth_tag;
u8 *otag = csbcpb->cpb.aes_gcm.out_pat_or_mac;
- scatterwalk_map_and_copy(
+ memcpy_from_sglist(
itag, req->src, req->assoclen + nbytes,
- crypto_aead_authsize(crypto_aead_reqtfm(req)),
- SCATTERWALK_FROM_SG);
+ crypto_aead_authsize(crypto_aead_reqtfm(req)));
rc = crypto_memneq(itag, otag,
crypto_aead_authsize(crypto_aead_reqtfm(req))) ?
-EBADMSG : 0;
}
out:
diff --git a/drivers/crypto/nx/nx.c b/drivers/crypto/nx/nx.c
index 010e87d9da36..dd95e5361d88 100644
--- a/drivers/crypto/nx/nx.c
+++ b/drivers/crypto/nx/nx.c
@@ -151,44 +151,23 @@ struct nx_sg *nx_walk_and_build(struct nx_sg *nx_dst,
unsigned int start,
unsigned int *src_len)
{
struct scatter_walk walk;
struct nx_sg *nx_sg = nx_dst;
- unsigned int n, offset = 0, len = *src_len;
+ unsigned int n, len = *src_len;
char *dst;
/* we need to fast forward through @start bytes first */
- for (;;) {
- scatterwalk_start(&walk, sg_src);
-
- if (start < offset + sg_src->length)
- break;
-
- offset += sg_src->length;
- sg_src = sg_next(sg_src);
- }
-
- /* start - offset is the number of bytes to advance in the scatterlist
- * element we're currently looking at */
- scatterwalk_advance(&walk, start - offset);
+ scatterwalk_start_at_pos(&walk, sg_src, start);
while (len && (nx_sg - nx_dst) < sglen) {
- n = scatterwalk_clamp(&walk, len);
- if (!n) {
- /* In cases where we have scatterlist chain sg_next
- * handles with it properly */
- scatterwalk_start(&walk, sg_next(walk.sg));
- n = scatterwalk_clamp(&walk, len);
- }
- dst = scatterwalk_map(&walk);
+ dst = scatterwalk_next(&walk, len, &n);
nx_sg = nx_build_sg_list(nx_sg, dst, &n, sglen - (nx_sg - nx_dst));
- len -= n;
- scatterwalk_unmap(dst);
- scatterwalk_advance(&walk, n);
- scatterwalk_done(&walk, SCATTERWALK_FROM_SG, len);
+ scatterwalk_done_src(&walk, dst, n);
+ len -= n;
}
/* update to_process */
*src_len -= len;
/* return the moved destination pointer */
diff --git a/drivers/crypto/nx/nx.h b/drivers/crypto/nx/nx.h
index 2697baebb6a3..e1b4b6927bec 100644
--- a/drivers/crypto/nx/nx.h
+++ b/drivers/crypto/nx/nx.h
@@ -187,9 +187,6 @@ extern struct shash_alg nx_shash_aes_xcbc_alg;
extern struct shash_alg nx_shash_sha512_alg;
extern struct shash_alg nx_shash_sha256_alg;
extern struct nx_crypto_driver nx_driver;
-#define SCATTERWALK_TO_SG 1
-#define SCATTERWALK_FROM_SG 0
-
#endif
--
2.47.1
* Re: [PATCH v2 10/29] crypto: powerpc/p10-aes-gcm - simplify handling of linear associated data
From: Christophe Leroy @ 2025-01-02 11:50 UTC (permalink / raw)
To: Eric Biggers, linux-crypto
Cc: netdev, linux-kernel, Danny Tsen, Michael Ellerman, Naveen N Rao,
Nicholas Piggin, linuxppc-dev
Le 30/12/2024 à 01:13, Eric Biggers a écrit :
> From: Eric Biggers <ebiggers@google.com>
>
> p10_aes_gcm_crypt() is abusing the scatter_walk API to get the virtual
> address for the first source scatterlist element. But this code is only
> built for PPC64 which is a !HIGHMEM platform, and it can read past a
> page boundary from the address returned by scatterwalk_map() which means
> it already assumes the address is from the kernel's direct map. Thus,
> just use sg_virt() instead to get the same result in a simpler way.
>
> Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
> Cc: Danny Tsen <dtsen@linux.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Naveen N Rao <naveen@kernel.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: linuxppc-dev@lists.ozlabs.org
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> ---
>
> This patch is part of a long series touching many files, so I have
> limited the Cc list on the full series. If you want the full series and
> did not receive it, please retrieve it from lore.kernel.org.
>
> arch/powerpc/crypto/aes-gcm-p10-glue.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/arch/powerpc/crypto/aes-gcm-p10-glue.c b/arch/powerpc/crypto/aes-gcm-p10-glue.c
> index f37b3d13fc53..2862c3cf8e41 100644
> --- a/arch/powerpc/crypto/aes-gcm-p10-glue.c
> +++ b/arch/powerpc/crypto/aes-gcm-p10-glue.c
> @@ -212,11 +212,10 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
> struct p10_aes_gcm_ctx *ctx = crypto_tfm_ctx(tfm);
> u8 databuf[sizeof(struct gcm_ctx) + PPC_ALIGN];
> struct gcm_ctx *gctx = PTR_ALIGN((void *)databuf, PPC_ALIGN);
> u8 hashbuf[sizeof(struct Hash_ctx) + PPC_ALIGN];
> struct Hash_ctx *hash = PTR_ALIGN((void *)hashbuf, PPC_ALIGN);
> - struct scatter_walk assoc_sg_walk;
> struct skcipher_walk walk;
> u8 *assocmem = NULL;
> u8 *assoc;
> unsigned int cryptlen = req->cryptlen;
> unsigned char ivbuf[AES_BLOCK_SIZE+PPC_ALIGN];
> @@ -232,12 +231,11 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
> memset(ivbuf, 0, sizeof(ivbuf));
> memcpy(iv, riv, GCM_IV_SIZE);
>
> /* Linearize assoc, if not already linear */
> if (req->src->length >= assoclen && req->src->length) {
> - scatterwalk_start(&assoc_sg_walk, req->src);
> - assoc = scatterwalk_map(&assoc_sg_walk);
> + assoc = sg_virt(req->src); /* ppc64 is !HIGHMEM */
> } else {
> gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
> GFP_KERNEL : GFP_ATOMIC;
>
> /* assoc can be any length, so must be on heap */
> @@ -251,13 +249,11 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
>
> vsx_begin();
> gcmp10_init(gctx, iv, (unsigned char *) &ctx->enc_key, hash, assoc, assoclen);
> vsx_end();
>
> - if (!assocmem)
> - scatterwalk_unmap(assoc);
> - else
> + if (assocmem)
> kfree(assocmem);
kfree() accepts a NULL pointer, so you can call kfree(assocmem) without
the 'if (assocmem)' check.
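The same convention holds for free() in standard C, which kfree() mirrors on this point: freeing a null pointer is a defined no-op, so the guard is redundant. A minimal userspace analogue of the cleanup path:

```c
#include <stdlib.h>

/* Analogue of the simplified cleanup: kfree() -- like C's free() --
 * ignores a NULL pointer, so no 'if (assocmem)' guard is needed. */
static void release_assoc(char *assocmem)
{
	free(assocmem);  /* safe even when assocmem was never allocated */
}
```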
>
> if (enc)
> ret = skcipher_walk_aead_encrypt(&walk, req, false);
> else
* Re: [PATCH v2 10/29] crypto: powerpc/p10-aes-gcm - simplify handling of linear associated data
From: Eric Biggers @ 2025-01-02 17:24 UTC (permalink / raw)
To: Christophe Leroy
Cc: linux-crypto, netdev, linux-kernel, Danny Tsen, Michael Ellerman,
Naveen N Rao, Nicholas Piggin, linuxppc-dev
On Thu, Jan 02, 2025 at 12:50:50PM +0100, Christophe Leroy wrote:
>
>
> Le 30/12/2024 à 01:13, Eric Biggers a écrit :
> > From: Eric Biggers <ebiggers@google.com>
> >
> > p10_aes_gcm_crypt() is abusing the scatter_walk API to get the virtual
> > address for the first source scatterlist element. But this code is only
> > built for PPC64 which is a !HIGHMEM platform, and it can read past a
> > page boundary from the address returned by scatterwalk_map() which means
> > it already assumes the address is from the kernel's direct map. Thus,
> > just use sg_virt() instead to get the same result in a simpler way.
> >
> > Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
> > Cc: Danny Tsen <dtsen@linux.ibm.com>
> > Cc: Michael Ellerman <mpe@ellerman.id.au>
> > Cc: Naveen N Rao <naveen@kernel.org>
> > Cc: Nicholas Piggin <npiggin@gmail.com>
> > Cc: linuxppc-dev@lists.ozlabs.org
> > Signed-off-by: Eric Biggers <ebiggers@google.com>
> > ---
> >
> > This patch is part of a long series touching many files, so I have
> > limited the Cc list on the full series. If you want the full series and
> > did not receive it, please retrieve it from lore.kernel.org.
> >
> > arch/powerpc/crypto/aes-gcm-p10-glue.c | 8 ++------
> > 1 file changed, 2 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/powerpc/crypto/aes-gcm-p10-glue.c b/arch/powerpc/crypto/aes-gcm-p10-glue.c
> > index f37b3d13fc53..2862c3cf8e41 100644
> > --- a/arch/powerpc/crypto/aes-gcm-p10-glue.c
> > +++ b/arch/powerpc/crypto/aes-gcm-p10-glue.c
> > @@ -212,11 +212,10 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
> > struct p10_aes_gcm_ctx *ctx = crypto_tfm_ctx(tfm);
> > u8 databuf[sizeof(struct gcm_ctx) + PPC_ALIGN];
> > struct gcm_ctx *gctx = PTR_ALIGN((void *)databuf, PPC_ALIGN);
> > u8 hashbuf[sizeof(struct Hash_ctx) + PPC_ALIGN];
> > struct Hash_ctx *hash = PTR_ALIGN((void *)hashbuf, PPC_ALIGN);
> > - struct scatter_walk assoc_sg_walk;
> > struct skcipher_walk walk;
> > u8 *assocmem = NULL;
> > u8 *assoc;
> > unsigned int cryptlen = req->cryptlen;
> > unsigned char ivbuf[AES_BLOCK_SIZE+PPC_ALIGN];
> > @@ -232,12 +231,11 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
> > memset(ivbuf, 0, sizeof(ivbuf));
> > memcpy(iv, riv, GCM_IV_SIZE);
> > /* Linearize assoc, if not already linear */
> > if (req->src->length >= assoclen && req->src->length) {
> > - scatterwalk_start(&assoc_sg_walk, req->src);
> > - assoc = scatterwalk_map(&assoc_sg_walk);
> > + assoc = sg_virt(req->src); /* ppc64 is !HIGHMEM */
> > } else {
> > gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
> > GFP_KERNEL : GFP_ATOMIC;
> > /* assoc can be any length, so must be on heap */
> > @@ -251,13 +249,11 @@ static int p10_aes_gcm_crypt(struct aead_request *req, u8 *riv,
> > vsx_begin();
> > gcmp10_init(gctx, iv, (unsigned char *) &ctx->enc_key, hash, assoc, assoclen);
> > vsx_end();
> > - if (!assocmem)
> > - scatterwalk_unmap(assoc);
> > - else
> > + if (assocmem)
> > kfree(assocmem);
>
> kfree() accepts a NULL pointer, you can call kfree(assocmem) without 'if
> (assocmem)'
The existing code did that too, but sure I'll change that in v3.
- Eric