* [v3 PATCH 0/8] crypto: acomp - Add request chaining and virtual address support
@ 2025-03-09 2:43 Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 1/8] crypto: api - Add cra_type->destroy hook Herbert Xu
` (7 more replies)
0 siblings, 8 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-09 2:43 UTC (permalink / raw)
To: Linux Crypto Mailing List
Cc: Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
v3 adds support for mixing SG with virtual addresses, e.g., an SG src with a virtual dst.
This patch series adds request chaining and virtual address support
to the crypto_acomp interface.
Herbert Xu (8):
crypto: api - Add cra_type->destroy hook
crypto: scomp - Remove tfm argument from alloc/free_ctx
crypto: acomp - Move stream management into scomp layer
crypto: scomp - Disable BH when taking per-cpu spin lock
crypto: acomp - Add request chaining and virtual addresses
crypto: testmgr - Remove NULL dst acomp tests
crypto: scomp - Remove support for most non-trivial destination SG lists
crypto: scomp - Add chaining and virtual address support
crypto/842.c | 8 +-
crypto/acompress.c | 204 +++++++++++++++++++---
crypto/api.c | 10 ++
crypto/compress.h | 2 -
crypto/deflate.c | 4 +-
crypto/internal.h | 6 +-
crypto/lz4.c | 8 +-
crypto/lz4hc.c | 8 +-
crypto/lzo-rle.c | 8 +-
crypto/lzo.c | 8 +-
crypto/scompress.c | 225 +++++++++++++++----------
crypto/testmgr.c | 29 ----
crypto/zstd.c | 4 +-
drivers/crypto/cavium/zip/zip_crypto.c | 6 +-
drivers/crypto/cavium/zip/zip_crypto.h | 6 +-
include/crypto/acompress.h | 225 ++++++++++++++++++++++---
include/crypto/internal/acompress.h | 59 +++++--
include/crypto/internal/scompress.h | 18 +-
18 files changed, 611 insertions(+), 227 deletions(-)
--
2.39.5
^ permalink raw reply [flat|nested] 23+ messages in thread
* [v3 PATCH 1/8] crypto: api - Add cra_type->destroy hook
2025-03-09 2:43 [v3 PATCH 0/8] crypto: acomp - Add request chaining and virtual address support Herbert Xu
@ 2025-03-09 2:43 ` Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 2/8] crypto: scomp - Remove tfm argument from alloc/free_ctx Herbert Xu
` (6 subsequent siblings)
7 siblings, 0 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-09 2:43 UTC (permalink / raw)
To: Linux Crypto Mailing List
Cc: Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
Add a cra_type->destroy hook so that resources can be freed after
the last user of a registered algorithm is gone.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/api.c | 10 ++++++++++
crypto/internal.h | 6 ++++--
2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/crypto/api.c b/crypto/api.c
index c2c4eb14ef95..91957bb52f3f 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -707,5 +707,15 @@ void crypto_req_done(void *data, int err)
}
EXPORT_SYMBOL_GPL(crypto_req_done);
+void crypto_destroy_alg(struct crypto_alg *alg)
+{
+ if (alg->cra_type && alg->cra_type->destroy)
+ alg->cra_type->destroy(alg);
+
+ if (alg->cra_destroy)
+ alg->cra_destroy(alg);
+}
+EXPORT_SYMBOL_GPL(crypto_destroy_alg);
+
MODULE_DESCRIPTION("Cryptographic core API");
MODULE_LICENSE("GPL");
diff --git a/crypto/internal.h b/crypto/internal.h
index 08d43b40e7db..11567ea24fc3 100644
--- a/crypto/internal.h
+++ b/crypto/internal.h
@@ -40,6 +40,7 @@ struct crypto_type {
void (*show)(struct seq_file *m, struct crypto_alg *alg);
int (*report)(struct sk_buff *skb, struct crypto_alg *alg);
void (*free)(struct crypto_instance *inst);
+ void (*destroy)(struct crypto_alg *alg);
unsigned int type;
unsigned int maskclear;
@@ -127,6 +128,7 @@ void *crypto_create_tfm_node(struct crypto_alg *alg,
const struct crypto_type *frontend, int node);
void *crypto_clone_tfm(const struct crypto_type *frontend,
struct crypto_tfm *otfm);
+void crypto_destroy_alg(struct crypto_alg *alg);
static inline void *crypto_create_tfm(struct crypto_alg *alg,
const struct crypto_type *frontend)
@@ -163,8 +165,8 @@ static inline struct crypto_alg *crypto_alg_get(struct crypto_alg *alg)
static inline void crypto_alg_put(struct crypto_alg *alg)
{
- if (refcount_dec_and_test(&alg->cra_refcnt) && alg->cra_destroy)
- alg->cra_destroy(alg);
+ if (refcount_dec_and_test(&alg->cra_refcnt))
+ crypto_destroy_alg(alg);
}
static inline int crypto_tmpl_get(struct crypto_template *tmpl)
--
2.39.5
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [v3 PATCH 2/8] crypto: scomp - Remove tfm argument from alloc/free_ctx
2025-03-09 2:43 [v3 PATCH 0/8] crypto: acomp - Add request chaining and virtual address support Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 1/8] crypto: api - Add cra_type->destroy hook Herbert Xu
@ 2025-03-09 2:43 ` Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 3/8] crypto: acomp - Move stream management into scomp layer Herbert Xu
` (5 subsequent siblings)
7 siblings, 0 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-09 2:43 UTC (permalink / raw)
To: Linux Crypto Mailing List
Cc: Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
The tfm argument is completely unused and meaningless, as the stream
object is identical across all transforms of a given algorithm.
Remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/842.c | 8 ++++----
crypto/deflate.c | 4 ++--
crypto/lz4.c | 8 ++++----
crypto/lz4hc.c | 8 ++++----
crypto/lzo-rle.c | 8 ++++----
crypto/lzo.c | 8 ++++----
crypto/zstd.c | 4 ++--
drivers/crypto/cavium/zip/zip_crypto.c | 6 +++---
drivers/crypto/cavium/zip/zip_crypto.h | 6 +++---
include/crypto/internal/scompress.h | 8 ++++----
10 files changed, 34 insertions(+), 34 deletions(-)
diff --git a/crypto/842.c b/crypto/842.c
index e59e54d76960..2238478c3493 100644
--- a/crypto/842.c
+++ b/crypto/842.c
@@ -28,7 +28,7 @@ struct crypto842_ctx {
void *wmem; /* working memory for compress */
};
-static void *crypto842_alloc_ctx(struct crypto_scomp *tfm)
+static void *crypto842_alloc_ctx(void)
{
void *ctx;
@@ -43,14 +43,14 @@ static int crypto842_init(struct crypto_tfm *tfm)
{
struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
- ctx->wmem = crypto842_alloc_ctx(NULL);
+ ctx->wmem = crypto842_alloc_ctx();
if (IS_ERR(ctx->wmem))
return -ENOMEM;
return 0;
}
-static void crypto842_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void crypto842_free_ctx(void *ctx)
{
kfree(ctx);
}
@@ -59,7 +59,7 @@ static void crypto842_exit(struct crypto_tfm *tfm)
{
struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
- crypto842_free_ctx(NULL, ctx->wmem);
+ crypto842_free_ctx(ctx->wmem);
}
static int crypto842_compress(struct crypto_tfm *tfm,
diff --git a/crypto/deflate.c b/crypto/deflate.c
index 98e8bcb81a6a..1bf7184ad670 100644
--- a/crypto/deflate.c
+++ b/crypto/deflate.c
@@ -112,7 +112,7 @@ static int __deflate_init(void *ctx)
return ret;
}
-static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
+static void *deflate_alloc_ctx(void)
{
struct deflate_ctx *ctx;
int ret;
@@ -143,7 +143,7 @@ static void __deflate_exit(void *ctx)
deflate_decomp_exit(ctx);
}
-static void deflate_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void deflate_free_ctx(void *ctx)
{
__deflate_exit(ctx);
kfree_sensitive(ctx);
diff --git a/crypto/lz4.c b/crypto/lz4.c
index 0606f8862e78..e66c6d1ba34f 100644
--- a/crypto/lz4.c
+++ b/crypto/lz4.c
@@ -16,7 +16,7 @@ struct lz4_ctx {
void *lz4_comp_mem;
};
-static void *lz4_alloc_ctx(struct crypto_scomp *tfm)
+static void *lz4_alloc_ctx(void)
{
void *ctx;
@@ -31,14 +31,14 @@ static int lz4_init(struct crypto_tfm *tfm)
{
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
- ctx->lz4_comp_mem = lz4_alloc_ctx(NULL);
+ ctx->lz4_comp_mem = lz4_alloc_ctx();
if (IS_ERR(ctx->lz4_comp_mem))
return -ENOMEM;
return 0;
}
-static void lz4_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void lz4_free_ctx(void *ctx)
{
vfree(ctx);
}
@@ -47,7 +47,7 @@ static void lz4_exit(struct crypto_tfm *tfm)
{
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
- lz4_free_ctx(NULL, ctx->lz4_comp_mem);
+ lz4_free_ctx(ctx->lz4_comp_mem);
}
static int __lz4_compress_crypto(const u8 *src, unsigned int slen,
diff --git a/crypto/lz4hc.c b/crypto/lz4hc.c
index d7cc94aa2fcf..25a95b65aca5 100644
--- a/crypto/lz4hc.c
+++ b/crypto/lz4hc.c
@@ -15,7 +15,7 @@ struct lz4hc_ctx {
void *lz4hc_comp_mem;
};
-static void *lz4hc_alloc_ctx(struct crypto_scomp *tfm)
+static void *lz4hc_alloc_ctx(void)
{
void *ctx;
@@ -30,14 +30,14 @@ static int lz4hc_init(struct crypto_tfm *tfm)
{
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
- ctx->lz4hc_comp_mem = lz4hc_alloc_ctx(NULL);
+ ctx->lz4hc_comp_mem = lz4hc_alloc_ctx();
if (IS_ERR(ctx->lz4hc_comp_mem))
return -ENOMEM;
return 0;
}
-static void lz4hc_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void lz4hc_free_ctx(void *ctx)
{
vfree(ctx);
}
@@ -46,7 +46,7 @@ static void lz4hc_exit(struct crypto_tfm *tfm)
{
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
- lz4hc_free_ctx(NULL, ctx->lz4hc_comp_mem);
+ lz4hc_free_ctx(ctx->lz4hc_comp_mem);
}
static int __lz4hc_compress_crypto(const u8 *src, unsigned int slen,
diff --git a/crypto/lzo-rle.c b/crypto/lzo-rle.c
index 0abc2d87f042..6c845e7d32f5 100644
--- a/crypto/lzo-rle.c
+++ b/crypto/lzo-rle.c
@@ -15,7 +15,7 @@ struct lzorle_ctx {
void *lzorle_comp_mem;
};
-static void *lzorle_alloc_ctx(struct crypto_scomp *tfm)
+static void *lzorle_alloc_ctx(void)
{
void *ctx;
@@ -30,14 +30,14 @@ static int lzorle_init(struct crypto_tfm *tfm)
{
struct lzorle_ctx *ctx = crypto_tfm_ctx(tfm);
- ctx->lzorle_comp_mem = lzorle_alloc_ctx(NULL);
+ ctx->lzorle_comp_mem = lzorle_alloc_ctx();
if (IS_ERR(ctx->lzorle_comp_mem))
return -ENOMEM;
return 0;
}
-static void lzorle_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void lzorle_free_ctx(void *ctx)
{
kvfree(ctx);
}
@@ -46,7 +46,7 @@ static void lzorle_exit(struct crypto_tfm *tfm)
{
struct lzorle_ctx *ctx = crypto_tfm_ctx(tfm);
- lzorle_free_ctx(NULL, ctx->lzorle_comp_mem);
+ lzorle_free_ctx(ctx->lzorle_comp_mem);
}
static int __lzorle_compress(const u8 *src, unsigned int slen,
diff --git a/crypto/lzo.c b/crypto/lzo.c
index 8338851c7406..035d62e2afe0 100644
--- a/crypto/lzo.c
+++ b/crypto/lzo.c
@@ -15,7 +15,7 @@ struct lzo_ctx {
void *lzo_comp_mem;
};
-static void *lzo_alloc_ctx(struct crypto_scomp *tfm)
+static void *lzo_alloc_ctx(void)
{
void *ctx;
@@ -30,14 +30,14 @@ static int lzo_init(struct crypto_tfm *tfm)
{
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
- ctx->lzo_comp_mem = lzo_alloc_ctx(NULL);
+ ctx->lzo_comp_mem = lzo_alloc_ctx();
if (IS_ERR(ctx->lzo_comp_mem))
return -ENOMEM;
return 0;
}
-static void lzo_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void lzo_free_ctx(void *ctx)
{
kvfree(ctx);
}
@@ -46,7 +46,7 @@ static void lzo_exit(struct crypto_tfm *tfm)
{
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
- lzo_free_ctx(NULL, ctx->lzo_comp_mem);
+ lzo_free_ctx(ctx->lzo_comp_mem);
}
static int __lzo_compress(const u8 *src, unsigned int slen,
diff --git a/crypto/zstd.c b/crypto/zstd.c
index 154a969c83a8..68a093427944 100644
--- a/crypto/zstd.c
+++ b/crypto/zstd.c
@@ -103,7 +103,7 @@ static int __zstd_init(void *ctx)
return ret;
}
-static void *zstd_alloc_ctx(struct crypto_scomp *tfm)
+static void *zstd_alloc_ctx(void)
{
int ret;
struct zstd_ctx *ctx;
@@ -134,7 +134,7 @@ static void __zstd_exit(void *ctx)
zstd_decomp_exit(ctx);
}
-static void zstd_free_ctx(struct crypto_scomp *tfm, void *ctx)
+static void zstd_free_ctx(void *ctx)
{
__zstd_exit(ctx);
kfree_sensitive(ctx);
diff --git a/drivers/crypto/cavium/zip/zip_crypto.c b/drivers/crypto/cavium/zip/zip_crypto.c
index 1046a746d36f..a9c3efce8f2d 100644
--- a/drivers/crypto/cavium/zip/zip_crypto.c
+++ b/drivers/crypto/cavium/zip/zip_crypto.c
@@ -236,7 +236,7 @@ int zip_comp_decompress(struct crypto_tfm *tfm,
} /* Legacy compress framework end */
/* SCOMP framework start */
-void *zip_alloc_scomp_ctx_deflate(struct crypto_scomp *tfm)
+void *zip_alloc_scomp_ctx_deflate(void)
{
int ret;
struct zip_kernel_ctx *zip_ctx;
@@ -255,7 +255,7 @@ void *zip_alloc_scomp_ctx_deflate(struct crypto_scomp *tfm)
return zip_ctx;
}
-void *zip_alloc_scomp_ctx_lzs(struct crypto_scomp *tfm)
+void *zip_alloc_scomp_ctx_lzs(void)
{
int ret;
struct zip_kernel_ctx *zip_ctx;
@@ -274,7 +274,7 @@ void *zip_alloc_scomp_ctx_lzs(struct crypto_scomp *tfm)
return zip_ctx;
}
-void zip_free_scomp_ctx(struct crypto_scomp *tfm, void *ctx)
+void zip_free_scomp_ctx(void *ctx)
{
struct zip_kernel_ctx *zip_ctx = ctx;
diff --git a/drivers/crypto/cavium/zip/zip_crypto.h b/drivers/crypto/cavium/zip/zip_crypto.h
index b59ddfcacd34..dbe20bfeb3e9 100644
--- a/drivers/crypto/cavium/zip/zip_crypto.h
+++ b/drivers/crypto/cavium/zip/zip_crypto.h
@@ -67,9 +67,9 @@ int zip_comp_decompress(struct crypto_tfm *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen);
-void *zip_alloc_scomp_ctx_deflate(struct crypto_scomp *tfm);
-void *zip_alloc_scomp_ctx_lzs(struct crypto_scomp *tfm);
-void zip_free_scomp_ctx(struct crypto_scomp *tfm, void *zip_ctx);
+void *zip_alloc_scomp_ctx_deflate(void);
+void *zip_alloc_scomp_ctx_lzs(void);
+void zip_free_scomp_ctx(void *zip_ctx);
int zip_scomp_compress(struct crypto_scomp *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx);
diff --git a/include/crypto/internal/scompress.h b/include/crypto/internal/scompress.h
index 07a10fd2d321..6ba9974df7d3 100644
--- a/include/crypto/internal/scompress.h
+++ b/include/crypto/internal/scompress.h
@@ -31,8 +31,8 @@ struct crypto_scomp {
* @calg: Cmonn algorithm data structure shared with acomp
*/
struct scomp_alg {
- void *(*alloc_ctx)(struct crypto_scomp *tfm);
- void (*free_ctx)(struct crypto_scomp *tfm, void *ctx);
+ void *(*alloc_ctx)(void);
+ void (*free_ctx)(void *ctx);
int (*compress)(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx);
@@ -73,13 +73,13 @@ static inline struct scomp_alg *crypto_scomp_alg(struct crypto_scomp *tfm)
static inline void *crypto_scomp_alloc_ctx(struct crypto_scomp *tfm)
{
- return crypto_scomp_alg(tfm)->alloc_ctx(tfm);
+ return crypto_scomp_alg(tfm)->alloc_ctx();
}
static inline void crypto_scomp_free_ctx(struct crypto_scomp *tfm,
void *ctx)
{
- return crypto_scomp_alg(tfm)->free_ctx(tfm, ctx);
+ return crypto_scomp_alg(tfm)->free_ctx(ctx);
}
static inline int crypto_scomp_compress(struct crypto_scomp *tfm,
--
2.39.5
* [v3 PATCH 3/8] crypto: acomp - Move stream management into scomp layer
2025-03-09 2:43 [v3 PATCH 0/8] crypto: acomp - Add request chaining and virtual address support Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 1/8] crypto: api - Add cra_type->destroy hook Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 2/8] crypto: scomp - Remove tfm argument from alloc/free_ctx Herbert Xu
@ 2025-03-09 2:43 ` Herbert Xu
2025-03-16 4:36 ` Eric Biggers
2025-03-09 2:43 ` [v3 PATCH 4/8] crypto: scomp - Disable BH when taking per-cpu spin lock Herbert Xu
` (4 subsequent siblings)
7 siblings, 1 reply; 23+ messages in thread
From: Herbert Xu @ 2025-03-09 2:43 UTC (permalink / raw)
To: Linux Crypto Mailing List
Cc: Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
Rather than allocating the stream memory in the request object,
move it into a per-cpu buffer managed by scomp. This relieves
users of having to manage large request objects and set up their
own per-cpu buffers just to hold the stream state.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/acompress.c | 30 ----------
crypto/compress.h | 2 -
crypto/scompress.c | 90 +++++++++++++++++++----------
include/crypto/acompress.h | 26 ++++++++-
include/crypto/internal/acompress.h | 17 +-----
include/crypto/internal/scompress.h | 12 +---
6 files changed, 84 insertions(+), 93 deletions(-)
diff --git a/crypto/acompress.c b/crypto/acompress.c
index 30176316140a..ef36ec31d73d 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -123,36 +123,6 @@ struct crypto_acomp *crypto_alloc_acomp_node(const char *alg_name, u32 type,
}
EXPORT_SYMBOL_GPL(crypto_alloc_acomp_node);
-struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp)
-{
- struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
- struct acomp_req *req;
-
- req = __acomp_request_alloc(acomp);
- if (req && (tfm->__crt_alg->cra_type != &crypto_acomp_type))
- return crypto_acomp_scomp_alloc_ctx(req);
-
- return req;
-}
-EXPORT_SYMBOL_GPL(acomp_request_alloc);
-
-void acomp_request_free(struct acomp_req *req)
-{
- struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
- struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
-
- if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
- crypto_acomp_scomp_free_ctx(req);
-
- if (req->base.flags & CRYPTO_ACOMP_ALLOC_OUTPUT) {
- acomp->dst_free(req->dst);
- req->dst = NULL;
- }
-
- __acomp_request_free(req);
-}
-EXPORT_SYMBOL_GPL(acomp_request_free);
-
void comp_prepare_alg(struct comp_alg_common *alg)
{
struct crypto_alg *base = &alg->base;
diff --git a/crypto/compress.h b/crypto/compress.h
index c3cedfb5e606..f7737a1fcbbd 100644
--- a/crypto/compress.h
+++ b/crypto/compress.h
@@ -15,8 +15,6 @@ struct acomp_req;
struct comp_alg_common;
int crypto_init_scomp_ops_async(struct crypto_tfm *tfm);
-struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req);
-void crypto_acomp_scomp_free_ctx(struct acomp_req *req);
void comp_prepare_alg(struct comp_alg_common *alg);
diff --git a/crypto/scompress.c b/crypto/scompress.c
index 1cef6bb06a81..9b6d9bbbc73a 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -98,13 +98,62 @@ static int crypto_scomp_alloc_scratches(void)
return -ENOMEM;
}
+static void scomp_free_streams(struct scomp_alg *alg)
+{
+ struct crypto_acomp_stream __percpu *stream = alg->stream;
+ int i;
+
+ for_each_possible_cpu(i) {
+ struct crypto_acomp_stream *ps = per_cpu_ptr(stream, i);
+
+ if (!ps->ctx)
+ break;
+
+ alg->free_ctx(ps);
+ }
+
+ free_percpu(stream);
+}
+
+static int scomp_alloc_streams(struct scomp_alg *alg)
+{
+ struct crypto_acomp_stream __percpu *stream;
+ int i;
+
+ stream = alloc_percpu(struct crypto_acomp_stream);
+ if (!stream)
+ return -ENOMEM;
+
+ for_each_possible_cpu(i) {
+ struct crypto_acomp_stream *ps = per_cpu_ptr(stream, i);
+
+ ps->ctx = alg->alloc_ctx();
+ if (IS_ERR(ps->ctx)) {
+ scomp_free_streams(alg);
+ return PTR_ERR(ps->ctx);
+ }
+
+ spin_lock_init(&ps->lock);
+ }
+
+ alg->stream = stream;
+ return 0;
+}
+
static int crypto_scomp_init_tfm(struct crypto_tfm *tfm)
{
+ struct scomp_alg *alg = crypto_scomp_alg(__crypto_scomp_tfm(tfm));
int ret = 0;
mutex_lock(&scomp_lock);
+ if (!alg->stream) {
+ ret = scomp_alloc_streams(alg);
+ if (ret)
+ goto unlock;
+ }
if (!scomp_scratch_users++)
ret = crypto_scomp_alloc_scratches();
+unlock:
mutex_unlock(&scomp_lock);
return ret;
@@ -115,7 +164,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
void **tfm_ctx = acomp_tfm_ctx(tfm);
struct crypto_scomp *scomp = *tfm_ctx;
- void **ctx = acomp_request_ctx(req);
+ struct crypto_acomp_stream *stream;
struct scomp_scratch *scratch;
void *src, *dst;
unsigned int dlen;
@@ -148,12 +197,15 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
else
dst = scratch->dst;
+ stream = raw_cpu_ptr(crypto_scomp_alg(scomp)->stream);
+ spin_lock(&stream->lock);
if (dir)
ret = crypto_scomp_compress(scomp, src, req->slen,
- dst, &req->dlen, *ctx);
+ dst, &req->dlen, stream->ctx);
else
ret = crypto_scomp_decompress(scomp, src, req->slen,
- dst, &req->dlen, *ctx);
+ dst, &req->dlen, stream->ctx);
+ spin_unlock(&stream->lock);
if (!ret) {
if (!req->dst) {
req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL);
@@ -226,45 +278,19 @@ int crypto_init_scomp_ops_async(struct crypto_tfm *tfm)
crt->compress = scomp_acomp_compress;
crt->decompress = scomp_acomp_decompress;
crt->dst_free = sgl_free;
- crt->reqsize = sizeof(void *);
return 0;
}
-struct acomp_req *crypto_acomp_scomp_alloc_ctx(struct acomp_req *req)
+static void crypto_scomp_destroy(struct crypto_alg *alg)
{
- struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
- struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
- struct crypto_scomp **tfm_ctx = crypto_tfm_ctx(tfm);
- struct crypto_scomp *scomp = *tfm_ctx;
- void *ctx;
-
- ctx = crypto_scomp_alloc_ctx(scomp);
- if (IS_ERR(ctx)) {
- kfree(req);
- return NULL;
- }
-
- *req->__ctx = ctx;
-
- return req;
-}
-
-void crypto_acomp_scomp_free_ctx(struct acomp_req *req)
-{
- struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
- struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
- struct crypto_scomp **tfm_ctx = crypto_tfm_ctx(tfm);
- struct crypto_scomp *scomp = *tfm_ctx;
- void *ctx = *req->__ctx;
-
- if (ctx)
- crypto_scomp_free_ctx(scomp, ctx);
+ scomp_free_streams(__crypto_scomp_alg(alg));
}
static const struct crypto_type crypto_scomp_type = {
.extsize = crypto_alg_extsize,
.init_tfm = crypto_scomp_init_tfm,
+ .destroy = crypto_scomp_destroy,
#ifdef CONFIG_PROC_FS
.show = crypto_scomp_show,
#endif
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index b6d5136e689d..c4937709ad0e 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -10,8 +10,12 @@
#define _CRYPTO_ACOMP_H
#include <linux/atomic.h>
+#include <linux/compiler_types.h>
#include <linux/container_of.h>
#include <linux/crypto.h>
+#include <linux/slab.h>
+#include <linux/spinlock_types.h>
+#include <linux/types.h>
#define CRYPTO_ACOMP_ALLOC_OUTPUT 0x00000001
#define CRYPTO_ACOMP_DST_MAX 131072
@@ -54,8 +58,14 @@ struct crypto_acomp {
struct crypto_tfm base;
};
+struct crypto_acomp_stream {
+ spinlock_t lock;
+ void *ctx;
+};
+
#define COMP_ALG_COMMON { \
struct crypto_alg base; \
+ struct crypto_acomp_stream __percpu *stream; \
}
struct comp_alg_common COMP_ALG_COMMON;
@@ -173,7 +183,16 @@ static inline int crypto_has_acomp(const char *alg_name, u32 type, u32 mask)
*
* Return: allocated handle in case of success or NULL in case of an error
*/
-struct acomp_req *acomp_request_alloc(struct crypto_acomp *tfm);
+static inline struct acomp_req *acomp_request_alloc_noprof(struct crypto_acomp *tfm)
+{
+ struct acomp_req *req;
+
+ req = kzalloc_noprof(sizeof(*req) + crypto_acomp_reqsize(tfm), GFP_KERNEL);
+ if (likely(req))
+ acomp_request_set_tfm(req, tfm);
+ return req;
+}
+#define acomp_request_alloc(...) alloc_hooks(acomp_request_alloc_noprof(__VA_ARGS__))
/**
* acomp_request_free() -- zeroize and free asynchronous (de)compression
@@ -182,7 +201,10 @@ struct acomp_req *acomp_request_alloc(struct crypto_acomp *tfm);
*
* @req: request to free
*/
-void acomp_request_free(struct acomp_req *req);
+static inline void acomp_request_free(struct acomp_req *req)
+{
+ kfree_sensitive(req);
+}
/**
* acomp_request_set_callback() -- Sets an asynchronous callback
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 8831edaafc05..4a8f7e3beaa1 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -32,6 +32,7 @@
*
* @reqsize: Context size for (de)compression requests
* @base: Common crypto API algorithm data structure
+ * @stream: Per-cpu memory for algorithm
* @calg: Cmonn algorithm data structure shared with scomp
*/
struct acomp_alg {
@@ -68,22 +69,6 @@ static inline void acomp_request_complete(struct acomp_req *req,
crypto_request_complete(&req->base, err);
}
-static inline struct acomp_req *__acomp_request_alloc_noprof(struct crypto_acomp *tfm)
-{
- struct acomp_req *req;
-
- req = kzalloc_noprof(sizeof(*req) + crypto_acomp_reqsize(tfm), GFP_KERNEL);
- if (likely(req))
- acomp_request_set_tfm(req, tfm);
- return req;
-}
-#define __acomp_request_alloc(...) alloc_hooks(__acomp_request_alloc_noprof(__VA_ARGS__))
-
-static inline void __acomp_request_free(struct acomp_req *req)
-{
- kfree_sensitive(req);
-}
-
/**
* crypto_register_acomp() -- Register asynchronous compression algorithm
*
diff --git a/include/crypto/internal/scompress.h b/include/crypto/internal/scompress.h
index 6ba9974df7d3..88986ab8ce15 100644
--- a/include/crypto/internal/scompress.h
+++ b/include/crypto/internal/scompress.h
@@ -28,6 +28,7 @@ struct crypto_scomp {
* @compress: Function performs a compress operation
* @decompress: Function performs a de-compress operation
* @base: Common crypto API algorithm data structure
+ * @stream: Per-cpu memory for algorithm
* @calg: Cmonn algorithm data structure shared with acomp
*/
struct scomp_alg {
@@ -71,17 +72,6 @@ static inline struct scomp_alg *crypto_scomp_alg(struct crypto_scomp *tfm)
return __crypto_scomp_alg(crypto_scomp_tfm(tfm)->__crt_alg);
}
-static inline void *crypto_scomp_alloc_ctx(struct crypto_scomp *tfm)
-{
- return crypto_scomp_alg(tfm)->alloc_ctx();
-}
-
-static inline void crypto_scomp_free_ctx(struct crypto_scomp *tfm,
- void *ctx)
-{
- return crypto_scomp_alg(tfm)->free_ctx(ctx);
-}
-
static inline int crypto_scomp_compress(struct crypto_scomp *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
--
2.39.5
* [v3 PATCH 4/8] crypto: scomp - Disable BH when taking per-cpu spin lock
2025-03-09 2:43 [v3 PATCH 0/8] crypto: acomp - Add request chaining and virtual address support Herbert Xu
` (2 preceding siblings ...)
2025-03-09 2:43 ` [v3 PATCH 3/8] crypto: acomp - Move stream management into scomp layer Herbert Xu
@ 2025-03-09 2:43 ` Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses Herbert Xu
` (3 subsequent siblings)
7 siblings, 0 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-09 2:43 UTC (permalink / raw)
To: Linux Crypto Mailing List
Cc: Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
Disable BH when taking per-cpu spin locks. This isn't an issue
right now because the only user, zswap, calls scomp from process
context. However, if scomp were ever called from softirq context,
the spin lock could deadlock.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/scompress.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/crypto/scompress.c b/crypto/scompress.c
index 9b6d9bbbc73a..a2ce481a10bb 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -182,7 +182,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
dlen = req->dlen;
scratch = raw_cpu_ptr(&scomp_scratch);
- spin_lock(&scratch->lock);
+ spin_lock_bh(&scratch->lock);
if (sg_nents(req->src) == 1 && !PageHighMem(sg_page(req->src))) {
src = page_to_virt(sg_page(req->src)) + req->src->offset;
@@ -230,7 +230,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
}
}
out:
- spin_unlock(&scratch->lock);
+ spin_unlock_bh(&scratch->lock);
return ret;
}
--
2.39.5
* [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses
2025-03-09 2:43 [v3 PATCH 0/8] crypto: acomp - Add request chaining and virtual address support Herbert Xu
` (3 preceding siblings ...)
2025-03-09 2:43 ` [v3 PATCH 4/8] crypto: scomp - Disable BH when taking per-cpu spin lock Herbert Xu
@ 2025-03-09 2:43 ` Herbert Xu
2025-03-16 4:49 ` Eric Biggers
2025-03-20 17:24 ` Cabiddu, Giovanni
2025-03-09 2:43 ` [v3 PATCH 6/8] crypto: testmgr - Remove NULL dst acomp tests Herbert Xu
` (2 subsequent siblings)
7 siblings, 2 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-09 2:43 UTC (permalink / raw)
To: Linux Crypto Mailing List
Cc: Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
This adds request chaining and virtual address support to the
acomp interface.
It is identical to the ahash interface, except that the new flags
CRYPTO_ACOMP_REQ_SRC_NONDMA and CRYPTO_ACOMP_REQ_DST_NONDMA have
been added to indicate that the virtual addresses are not suitable
for DMA. This is because all existing and potential acomp users
can provide memory that is suitable for DMA, so there is no need
for a fall-back copy path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/acompress.c | 197 +++++++++++++++++++++++++++
include/crypto/acompress.h | 198 ++++++++++++++++++++++++++--
include/crypto/internal/acompress.h | 42 ++++++
3 files changed, 424 insertions(+), 13 deletions(-)
diff --git a/crypto/acompress.c b/crypto/acompress.c
index ef36ec31d73d..45444e99a9db 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -23,6 +23,8 @@ struct crypto_scomp;
static const struct crypto_type crypto_acomp_type;
+static void acomp_reqchain_done(void *data, int err);
+
static inline struct acomp_alg *__crypto_acomp_alg(struct crypto_alg *alg)
{
return container_of(alg, struct acomp_alg, calg.base);
@@ -123,6 +125,201 @@ struct crypto_acomp *crypto_alloc_acomp_node(const char *alg_name, u32 type,
}
EXPORT_SYMBOL_GPL(crypto_alloc_acomp_node);
+static bool acomp_request_has_nondma(struct acomp_req *req)
+{
+ struct acomp_req *r2;
+
+ if (acomp_request_isnondma(req))
+ return true;
+
+ list_for_each_entry(r2, &req->base.list, base.list)
+ if (acomp_request_isnondma(r2))
+ return true;
+
+ return false;
+}
+
+static void acomp_save_req(struct acomp_req *req, crypto_completion_t cplt)
+{
+ struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+ struct acomp_req_chain *state = &req->chain;
+
+ if (!acomp_is_async(tfm))
+ return;
+
+ state->compl = req->base.complete;
+ state->data = req->base.data;
+ req->base.complete = cplt;
+ req->base.data = state;
+ state->req0 = req;
+}
+
+static void acomp_restore_req(struct acomp_req_chain *state)
+{
+ struct acomp_req *req = state->req0;
+ struct crypto_acomp *tfm;
+
+ tfm = crypto_acomp_reqtfm(req);
+ if (!acomp_is_async(tfm))
+ return;
+
+ req->base.complete = state->compl;
+ req->base.data = state->data;
+}
+
+static void acomp_reqchain_virt(struct acomp_req_chain *state, int err)
+{
+ struct acomp_req *req = state->cur;
+ unsigned int slen = req->slen;
+ unsigned int dlen = req->dlen;
+
+ req->base.err = err;
+ state = &req->chain;
+
+ if (state->src)
+ acomp_request_set_src_dma(req, state->src, slen);
+ if (state->dst)
+ acomp_request_set_dst_dma(req, state->dst, dlen);
+ state->src = NULL;
+ state->dst = NULL;
+}
+
+static void acomp_virt_to_sg(struct acomp_req *req)
+{
+ struct acomp_req_chain *state = &req->chain;
+
+ if (acomp_request_src_isvirt(req)) {
+ unsigned int slen = req->slen;
+ const u8 *svirt = req->svirt;
+
+ state->src = svirt;
+ sg_init_one(&state->ssg, svirt, slen);
+ acomp_request_set_src_sg(req, &state->ssg, slen);
+ }
+
+ if (acomp_request_dst_isvirt(req)) {
+ unsigned int dlen = req->dlen;
+ u8 *dvirt = req->dvirt;
+
+ state->dst = dvirt;
+ sg_init_one(&state->dsg, dvirt, dlen);
+ acomp_request_set_dst_sg(req, &state->dsg, dlen);
+ }
+}
+
+static int acomp_reqchain_finish(struct acomp_req_chain *state,
+ int err, u32 mask)
+{
+ struct acomp_req *req0 = state->req0;
+ struct acomp_req *req = state->cur;
+ struct acomp_req *n;
+
+ acomp_reqchain_virt(state, err);
+
+ if (req != req0)
+ list_add_tail(&req->base.list, &req0->base.list);
+
+ list_for_each_entry_safe(req, n, &state->head, base.list) {
+ list_del_init(&req->base.list);
+
+ req->base.flags &= mask;
+ req->base.complete = acomp_reqchain_done;
+ req->base.data = state;
+ state->cur = req;
+
+ acomp_virt_to_sg(req);
+ err = state->op(req);
+
+ if (err == -EINPROGRESS) {
+ if (!list_empty(&state->head))
+ err = -EBUSY;
+ goto out;
+ }
+
+ if (err == -EBUSY)
+ goto out;
+
+ acomp_reqchain_virt(state, err);
+ list_add_tail(&req->base.list, &req0->base.list);
+ }
+
+ acomp_restore_req(state);
+
+out:
+ return err;
+}
+
+static void acomp_reqchain_done(void *data, int err)
+{
+ struct acomp_req_chain *state = data;
+ crypto_completion_t compl = state->compl;
+
+ data = state->data;
+
+ if (err == -EINPROGRESS) {
+ if (!list_empty(&state->head))
+ return;
+ goto notify;
+ }
+
+ err = acomp_reqchain_finish(state, err, CRYPTO_TFM_REQ_MAY_BACKLOG);
+ if (err == -EBUSY)
+ return;
+
+notify:
+ compl(data, err);
+}
+
+static int acomp_do_req_chain(struct acomp_req *req,
+ int (*op)(struct acomp_req *req))
+{
+ struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
+ struct acomp_req_chain *state = &req->chain;
+ int err;
+
+ if (crypto_acomp_req_chain(tfm) ||
+ (!acomp_request_chained(req) && !acomp_request_isvirt(req)))
+ return op(req);
+
+ /*
+	 * There are no in-kernel users that do this.  If such users
+	 * ever come into being then we could add a fall-back path.
+ */
+ if (acomp_request_has_nondma(req))
+ return -EINVAL;
+
+ if (acomp_is_async(tfm)) {
+ acomp_save_req(req, acomp_reqchain_done);
+ state = req->base.data;
+ }
+
+ state->op = op;
+ state->cur = req;
+ state->src = NULL;
+ INIT_LIST_HEAD(&state->head);
+ list_splice_init(&req->base.list, &state->head);
+
+ acomp_virt_to_sg(req);
+ err = op(req);
+ if (err == -EBUSY || err == -EINPROGRESS)
+ return -EBUSY;
+
+ return acomp_reqchain_finish(state, err, ~0);
+}
+
+int crypto_acomp_compress(struct acomp_req *req)
+{
+ return acomp_do_req_chain(req, crypto_acomp_reqtfm(req)->compress);
+}
+EXPORT_SYMBOL_GPL(crypto_acomp_compress);
+
+int crypto_acomp_decompress(struct acomp_req *req)
+{
+ return acomp_do_req_chain(req, crypto_acomp_reqtfm(req)->decompress);
+}
+EXPORT_SYMBOL_GPL(crypto_acomp_decompress);
+
void comp_prepare_alg(struct comp_alg_common *alg)
{
struct crypto_alg *base = &alg->base;
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index c4937709ad0e..c4d8a29274c6 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -13,13 +13,42 @@
#include <linux/compiler_types.h>
#include <linux/container_of.h>
#include <linux/crypto.h>
+#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/spinlock_types.h>
#include <linux/types.h>
#define CRYPTO_ACOMP_ALLOC_OUTPUT 0x00000001
+
+/* Set this bit if source is virtual address instead of SG list. */
+#define CRYPTO_ACOMP_REQ_SRC_VIRT 0x00000002
+
+/* Set this bit if the virtual address source cannot be used for DMA. */
+#define CRYPTO_ACOMP_REQ_SRC_NONDMA 0x00000004
+
+/* Set this bit if destination is virtual address instead of SG list. */
+#define CRYPTO_ACOMP_REQ_DST_VIRT 0x00000008
+
+/* Set this bit if the virtual address destination cannot be used for DMA. */
+#define CRYPTO_ACOMP_REQ_DST_NONDMA 0x00000010
+
#define CRYPTO_ACOMP_DST_MAX 131072
+struct acomp_req;
+
+struct acomp_req_chain {
+ struct list_head head;
+ struct acomp_req *req0;
+ struct acomp_req *cur;
+ int (*op)(struct acomp_req *req);
+ crypto_completion_t compl;
+ void *data;
+ struct scatterlist ssg;
+ struct scatterlist dsg;
+ const u8 *src;
+ u8 *dst;
+};
+
/**
* struct acomp_req - asynchronous (de)compression request
*
@@ -28,14 +57,24 @@
* @dst: Destination data
* @slen: Size of the input buffer
* @dlen: Size of the output buffer and number of bytes produced
+ * @chain: Private API code data, do not use
* @__ctx: Start of private context data
*/
struct acomp_req {
struct crypto_async_request base;
- struct scatterlist *src;
- struct scatterlist *dst;
+ union {
+ struct scatterlist *src;
+ const u8 *svirt;
+ };
+ union {
+ struct scatterlist *dst;
+ u8 *dvirt;
+ };
unsigned int slen;
unsigned int dlen;
+
+ struct acomp_req_chain chain;
+
void *__ctx[] CRYPTO_MINALIGN_ATTR;
};
@@ -222,10 +261,16 @@ static inline void acomp_request_set_callback(struct acomp_req *req,
crypto_completion_t cmpl,
void *data)
{
+ u32 keep = CRYPTO_ACOMP_ALLOC_OUTPUT | CRYPTO_ACOMP_REQ_SRC_VIRT |
+ CRYPTO_ACOMP_REQ_SRC_NONDMA | CRYPTO_ACOMP_REQ_DST_VIRT |
+ CRYPTO_ACOMP_REQ_DST_NONDMA;
+
req->base.complete = cmpl;
req->base.data = data;
- req->base.flags &= CRYPTO_ACOMP_ALLOC_OUTPUT;
- req->base.flags |= flgs & ~CRYPTO_ACOMP_ALLOC_OUTPUT;
+ req->base.flags &= keep;
+ req->base.flags |= flgs & ~keep;
+
+ crypto_reqchain_init(&req->base);
}
/**
@@ -252,11 +297,144 @@ static inline void acomp_request_set_params(struct acomp_req *req,
req->slen = slen;
req->dlen = dlen;
- req->base.flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
+ req->base.flags &= ~(CRYPTO_ACOMP_ALLOC_OUTPUT |
+ CRYPTO_ACOMP_REQ_SRC_VIRT |
+ CRYPTO_ACOMP_REQ_SRC_NONDMA |
+ CRYPTO_ACOMP_REQ_DST_VIRT |
+ CRYPTO_ACOMP_REQ_DST_NONDMA);
if (!req->dst)
req->base.flags |= CRYPTO_ACOMP_ALLOC_OUTPUT;
}
+/**
+ * acomp_request_set_src_sg() -- Sets source scatterlist
+ *
+ * Sets source scatterlist required by an acomp operation.
+ *
+ * @req: asynchronous compress request
+ * @src: pointer to input buffer scatterlist
+ * @slen: size of the input buffer
+ */
+static inline void acomp_request_set_src_sg(struct acomp_req *req,
+ struct scatterlist *src,
+ unsigned int slen)
+{
+ req->src = src;
+ req->slen = slen;
+
+ req->base.flags &= ~CRYPTO_ACOMP_REQ_SRC_NONDMA;
+ req->base.flags &= ~CRYPTO_ACOMP_REQ_SRC_VIRT;
+}
+
+/**
+ * acomp_request_set_src_dma() -- Sets DMA source virtual address
+ *
+ * Sets source virtual address required by an acomp operation.
+ * The address must be usable for DMA.
+ *
+ * @req: asynchronous compress request
+ * @src: virtual address pointer to input buffer
+ * @slen: size of the input buffer
+ */
+static inline void acomp_request_set_src_dma(struct acomp_req *req,
+ const u8 *src, unsigned int slen)
+{
+ req->svirt = src;
+ req->slen = slen;
+
+ req->base.flags &= ~CRYPTO_ACOMP_REQ_SRC_NONDMA;
+ req->base.flags |= CRYPTO_ACOMP_REQ_SRC_VIRT;
+}
+
+/**
+ * acomp_request_set_src_nondma() -- Sets non-DMA source virtual address
+ *
+ * Sets source virtual address required by an acomp operation.
+ * The address can not be used for DMA.
+ *
+ * @req: asynchronous compress request
+ * @src: virtual address pointer to input buffer
+ * @slen: size of the input buffer
+ */
+static inline void acomp_request_set_src_nondma(struct acomp_req *req,
+ const u8 *src,
+ unsigned int slen)
+{
+ req->svirt = src;
+ req->slen = slen;
+
+ req->base.flags |= CRYPTO_ACOMP_REQ_SRC_NONDMA;
+ req->base.flags |= CRYPTO_ACOMP_REQ_SRC_VIRT;
+}
+
+/**
+ * acomp_request_set_dst_sg() -- Sets destination scatterlist
+ *
+ * Sets destination scatterlist required by an acomp operation.
+ *
+ * @req: asynchronous compress request
+ * @dst: pointer to output buffer scatterlist
+ * @dlen: size of the output buffer
+ */
+static inline void acomp_request_set_dst_sg(struct acomp_req *req,
+ struct scatterlist *dst,
+ unsigned int dlen)
+{
+ req->dst = dst;
+ req->dlen = dlen;
+
+ req->base.flags &= ~CRYPTO_ACOMP_REQ_DST_NONDMA;
+ req->base.flags &= ~CRYPTO_ACOMP_REQ_DST_VIRT;
+}
+
+/**
+ * acomp_request_set_dst_dma() -- Sets DMA destination virtual address
+ *
+ * Sets destination virtual address required by an acomp operation.
+ * The address must be usable for DMA.
+ *
+ * @req: asynchronous compress request
+ * @dst: virtual address pointer to output buffer
+ * @dlen: size of the output buffer
+ */
+static inline void acomp_request_set_dst_dma(struct acomp_req *req,
+ u8 *dst, unsigned int dlen)
+{
+ req->dvirt = dst;
+ req->dlen = dlen;
+
+ req->base.flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
+ req->base.flags &= ~CRYPTO_ACOMP_REQ_DST_NONDMA;
+ req->base.flags |= CRYPTO_ACOMP_REQ_DST_VIRT;
+}
+
+/**
+ * acomp_request_set_dst_nondma() -- Sets non-DMA destination virtual address
+ *
+ * Sets destination virtual address required by an acomp operation.
+ * The address can not be used for DMA.
+ *
+ * @req: asynchronous compress request
+ * @dst: virtual address pointer to output buffer
+ * @dlen: size of the output buffer
+ */
+static inline void acomp_request_set_dst_nondma(struct acomp_req *req,
+ u8 *dst, unsigned int dlen)
+{
+ req->dvirt = dst;
+ req->dlen = dlen;
+
+ req->base.flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
+ req->base.flags |= CRYPTO_ACOMP_REQ_DST_NONDMA;
+ req->base.flags |= CRYPTO_ACOMP_REQ_DST_VIRT;
+}
+
+static inline void acomp_request_chain(struct acomp_req *req,
+ struct acomp_req *head)
+{
+ crypto_request_chain(&req->base, &head->base);
+}
+
/**
* crypto_acomp_compress() -- Invoke asynchronous compress operation
*
@@ -266,10 +444,7 @@ static inline void acomp_request_set_params(struct acomp_req *req,
*
* Return: zero on success; error code in case of error
*/
-static inline int crypto_acomp_compress(struct acomp_req *req)
-{
- return crypto_acomp_reqtfm(req)->compress(req);
-}
+int crypto_acomp_compress(struct acomp_req *req);
/**
* crypto_acomp_decompress() -- Invoke asynchronous decompress operation
@@ -280,9 +455,6 @@ static inline int crypto_acomp_compress(struct acomp_req *req)
*
* Return: zero on success; error code in case of error
*/
-static inline int crypto_acomp_decompress(struct acomp_req *req)
-{
- return crypto_acomp_reqtfm(req)->decompress(req);
-}
+int crypto_acomp_decompress(struct acomp_req *req);
#endif
diff --git a/include/crypto/internal/acompress.h b/include/crypto/internal/acompress.h
index 4a8f7e3beaa1..957a5ed7c7f1 100644
--- a/include/crypto/internal/acompress.h
+++ b/include/crypto/internal/acompress.h
@@ -94,4 +94,46 @@ void crypto_unregister_acomp(struct acomp_alg *alg);
int crypto_register_acomps(struct acomp_alg *algs, int count);
void crypto_unregister_acomps(struct acomp_alg *algs, int count);
+static inline bool acomp_request_chained(struct acomp_req *req)
+{
+ return crypto_request_chained(&req->base);
+}
+
+static inline bool acomp_request_src_isvirt(struct acomp_req *req)
+{
+ return req->base.flags & CRYPTO_ACOMP_REQ_SRC_VIRT;
+}
+
+static inline bool acomp_request_dst_isvirt(struct acomp_req *req)
+{
+ return req->base.flags & CRYPTO_ACOMP_REQ_DST_VIRT;
+}
+
+static inline bool acomp_request_isvirt(struct acomp_req *req)
+{
+ return req->base.flags & (CRYPTO_ACOMP_REQ_SRC_VIRT |
+ CRYPTO_ACOMP_REQ_DST_VIRT);
+}
+
+static inline bool acomp_request_src_isnondma(struct acomp_req *req)
+{
+ return req->base.flags & CRYPTO_ACOMP_REQ_SRC_NONDMA;
+}
+
+static inline bool acomp_request_dst_isnondma(struct acomp_req *req)
+{
+ return req->base.flags & CRYPTO_ACOMP_REQ_DST_NONDMA;
+}
+
+static inline bool acomp_request_isnondma(struct acomp_req *req)
+{
+ return req->base.flags & (CRYPTO_ACOMP_REQ_SRC_NONDMA |
+ CRYPTO_ACOMP_REQ_DST_NONDMA);
+}
+
+static inline bool crypto_acomp_req_chain(struct crypto_acomp *tfm)
+{
+ return crypto_tfm_req_chain(&tfm->base);
+}
+
#endif
--
2.39.5
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [v3 PATCH 6/8] crypto: testmgr - Remove NULL dst acomp tests
2025-03-09 2:43 [v3 PATCH 0/8] crypto: acomp - Add request chaining and virtual address support Herbert Xu
` (4 preceding siblings ...)
2025-03-09 2:43 ` [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses Herbert Xu
@ 2025-03-09 2:43 ` Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 7/8] crypto: scomp - Remove support for most non-trivial destination SG lists Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 8/8] crypto: scomp - Add chaining and virtual address support Herbert Xu
7 siblings, 0 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-09 2:43 UTC (permalink / raw)
To: Linux Crypto Mailing List
Cc: Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
In preparation for the partial removal of NULL dst acomp support,
remove the tests for them.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/testmgr.c | 29 -----------------------------
1 file changed, 29 deletions(-)
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index cbce769b16ef..140872765dcd 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3522,21 +3522,6 @@ static int test_acomp(struct crypto_acomp *tfm,
goto out;
}
-#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
- crypto_init_wait(&wait);
- sg_init_one(&src, input_vec, ilen);
- acomp_request_set_params(req, &src, NULL, ilen, 0);
-
- ret = crypto_wait_req(crypto_acomp_compress(req), &wait);
- if (ret) {
- pr_err("alg: acomp: compression failed on NULL dst buffer test %d for %s: ret=%d\n",
- i + 1, algo, -ret);
- kfree(input_vec);
- acomp_request_free(req);
- goto out;
- }
-#endif
-
kfree(input_vec);
acomp_request_free(req);
}
@@ -3598,20 +3583,6 @@ static int test_acomp(struct crypto_acomp *tfm,
goto out;
}
-#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
- crypto_init_wait(&wait);
- acomp_request_set_params(req, &src, NULL, ilen, 0);
-
- ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);
- if (ret) {
- pr_err("alg: acomp: decompression failed on NULL dst buffer test %d for %s: ret=%d\n",
- i + 1, algo, -ret);
- kfree(input_vec);
- acomp_request_free(req);
- goto out;
- }
-#endif
-
kfree(input_vec);
acomp_request_free(req);
}
--
2.39.5
* [v3 PATCH 7/8] crypto: scomp - Remove support for most non-trivial destination SG lists
2025-03-09 2:43 [v3 PATCH 0/8] crypto: acomp - Add request chaining and virtual address support Herbert Xu
` (5 preceding siblings ...)
2025-03-09 2:43 ` [v3 PATCH 6/8] crypto: testmgr - Remove NULL dst acomp tests Herbert Xu
@ 2025-03-09 2:43 ` Herbert Xu
2025-03-10 19:31 ` Dan Carpenter
2025-03-09 2:43 ` [v3 PATCH 8/8] crypto: scomp - Add chaining and virtual address support Herbert Xu
7 siblings, 1 reply; 23+ messages in thread
From: Herbert Xu @ 2025-03-09 2:43 UTC (permalink / raw)
To: Linux Crypto Mailing List
Cc: Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
As the only user of acomp/scomp uses a trivial single-page SG
list, remove support for everything else in preparation for the
addition of virtual address support.
However, keep support for non-trivial source SG lists as that
user is currently jumping through hoops in order to linearise
the source data.
Limit the source SG linearisation buffer to a single page as
that user never goes over that. The only other potential user
is also unlikely to exceed that (IPComp) and it can easily do
its own linearisation if necessary.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/acompress.c | 1 -
crypto/scompress.c | 95 ++++++++++++-----------------
include/crypto/acompress.h | 17 +-----
include/crypto/internal/scompress.h | 2 -
4 files changed, 41 insertions(+), 74 deletions(-)
diff --git a/crypto/acompress.c b/crypto/acompress.c
index 45444e99a9db..194a4b36f97f 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -73,7 +73,6 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
acomp->compress = alg->compress;
acomp->decompress = alg->decompress;
- acomp->dst_free = alg->dst_free;
acomp->reqsize = alg->reqsize;
if (alg->exit)
diff --git a/crypto/scompress.c b/crypto/scompress.c
index a2ce481a10bb..1f7426c6d85a 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -18,15 +18,18 @@
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/string.h>
-#include <linux/vmalloc.h>
#include <net/netlink.h>
#include "compress.h"
+#define SCOMP_SCRATCH_SIZE PAGE_SIZE
+
struct scomp_scratch {
spinlock_t lock;
- void *src;
- void *dst;
+ union {
+ void *src;
+ unsigned long saddr;
+ };
};
static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
@@ -66,10 +69,8 @@ static void crypto_scomp_free_scratches(void)
for_each_possible_cpu(i) {
scratch = per_cpu_ptr(&scomp_scratch, i);
- vfree(scratch->src);
- vfree(scratch->dst);
+ free_page(scratch->saddr);
scratch->src = NULL;
- scratch->dst = NULL;
}
}
@@ -79,18 +80,14 @@ static int crypto_scomp_alloc_scratches(void)
int i;
for_each_possible_cpu(i) {
- void *mem;
+ unsigned long mem;
scratch = per_cpu_ptr(&scomp_scratch, i);
- mem = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
+ mem = __get_free_page(GFP_KERNEL);
if (!mem)
goto error;
- scratch->src = mem;
- mem = vmalloc_node(SCOMP_SCRATCH_SIZE, cpu_to_node(i));
- if (!mem)
- goto error;
- scratch->dst = mem;
+ scratch->saddr = mem;
}
return 0;
error:
@@ -162,40 +159,43 @@ static int crypto_scomp_init_tfm(struct crypto_tfm *tfm)
static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
{
struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
- void **tfm_ctx = acomp_tfm_ctx(tfm);
+ struct crypto_scomp **tfm_ctx = acomp_tfm_ctx(tfm);
struct crypto_scomp *scomp = *tfm_ctx;
struct crypto_acomp_stream *stream;
struct scomp_scratch *scratch;
+ unsigned int slen = req->slen;
+ unsigned int dlen = req->dlen;
void *src, *dst;
- unsigned int dlen;
int ret;
- if (!req->src || !req->slen || req->slen > SCOMP_SCRATCH_SIZE)
+ if (!req->src || !slen)
return -EINVAL;
- if (req->dst && !req->dlen)
+ if (req->dst && !dlen)
return -EINVAL;
- if (!req->dlen || req->dlen > SCOMP_SCRATCH_SIZE)
- req->dlen = SCOMP_SCRATCH_SIZE;
+ if (sg_nents(req->dst) > 1)
+ return -ENOSYS;
- dlen = req->dlen;
+ if (req->dst->offset >= PAGE_SIZE)
+ return -ENOSYS;
+
+ if (req->dst->offset + dlen > PAGE_SIZE)
+ dlen = PAGE_SIZE - req->dst->offset;
+
+ if (sg_nents(req->src) == 1 && (!PageHighMem(sg_page(req->src)) ||
+ req->src->offset + slen <= PAGE_SIZE))
+ src = kmap_local_page(sg_page(req->src)) + req->src->offset;
+ else
+ src = scratch->src;
+
+ dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset;
scratch = raw_cpu_ptr(&scomp_scratch);
spin_lock_bh(&scratch->lock);
- if (sg_nents(req->src) == 1 && !PageHighMem(sg_page(req->src))) {
- src = page_to_virt(sg_page(req->src)) + req->src->offset;
- } else {
- scatterwalk_map_and_copy(scratch->src, req->src, 0,
- req->slen, 0);
- src = scratch->src;
- }
-
- if (req->dst && sg_nents(req->dst) == 1 && !PageHighMem(sg_page(req->dst)))
- dst = page_to_virt(sg_page(req->dst)) + req->dst->offset;
- else
- dst = scratch->dst;
+ if (src == scratch->src)
+ memcpy_from_sglist(src, req->src, 0, req->slen);
stream = raw_cpu_ptr(crypto_scomp_alg(scomp)->stream);
spin_lock(&stream->lock);
@@ -206,31 +206,13 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
ret = crypto_scomp_decompress(scomp, src, req->slen,
dst, &req->dlen, stream->ctx);
spin_unlock(&stream->lock);
- if (!ret) {
- if (!req->dst) {
- req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL);
- if (!req->dst) {
- ret = -ENOMEM;
- goto out;
- }
- } else if (req->dlen > dlen) {
- ret = -ENOSPC;
- goto out;
- }
- if (dst == scratch->dst) {
- scatterwalk_map_and_copy(scratch->dst, req->dst, 0,
- req->dlen, 1);
- } else {
- int nr_pages = DIV_ROUND_UP(req->dst->offset + req->dlen, PAGE_SIZE);
- int i;
- struct page *dst_page = sg_page(req->dst);
-
- for (i = 0; i < nr_pages; i++)
- flush_dcache_page(dst_page + i);
- }
- }
-out:
spin_unlock_bh(&scratch->lock);
+
+ if (src != scratch->src)
+ kunmap_local(src);
+ kunmap_local(dst);
+ flush_dcache_page(sg_page(req->dst));
+
return ret;
}
@@ -277,7 +259,6 @@ int crypto_init_scomp_ops_async(struct crypto_tfm *tfm)
crt->compress = scomp_acomp_compress;
crt->decompress = scomp_acomp_decompress;
- crt->dst_free = sgl_free;
return 0;
}
diff --git a/include/crypto/acompress.h b/include/crypto/acompress.h
index c4d8a29274c6..53c9e632862b 100644
--- a/include/crypto/acompress.h
+++ b/include/crypto/acompress.h
@@ -18,8 +18,6 @@
#include <linux/spinlock_types.h>
#include <linux/types.h>
-#define CRYPTO_ACOMP_ALLOC_OUTPUT 0x00000001
-
/* Set this bit if source is virtual address instead of SG list. */
#define CRYPTO_ACOMP_REQ_SRC_VIRT 0x00000002
@@ -84,15 +82,12 @@ struct acomp_req {
*
* @compress: Function performs a compress operation
* @decompress: Function performs a de-compress operation
- * @dst_free: Frees destination buffer if allocated inside the
- * algorithm
* @reqsize: Context size for (de)compression requests
* @base: Common crypto API algorithm data structure
*/
struct crypto_acomp {
int (*compress)(struct acomp_req *req);
int (*decompress)(struct acomp_req *req);
- void (*dst_free)(struct scatterlist *dst);
unsigned int reqsize;
struct crypto_tfm base;
};
@@ -261,9 +256,8 @@ static inline void acomp_request_set_callback(struct acomp_req *req,
crypto_completion_t cmpl,
void *data)
{
- u32 keep = CRYPTO_ACOMP_ALLOC_OUTPUT | CRYPTO_ACOMP_REQ_SRC_VIRT |
- CRYPTO_ACOMP_REQ_SRC_NONDMA | CRYPTO_ACOMP_REQ_DST_VIRT |
- CRYPTO_ACOMP_REQ_DST_NONDMA;
+ u32 keep = CRYPTO_ACOMP_REQ_SRC_VIRT | CRYPTO_ACOMP_REQ_SRC_NONDMA |
+ CRYPTO_ACOMP_REQ_DST_VIRT | CRYPTO_ACOMP_REQ_DST_NONDMA;
req->base.complete = cmpl;
req->base.data = data;
@@ -297,13 +291,10 @@ static inline void acomp_request_set_params(struct acomp_req *req,
req->slen = slen;
req->dlen = dlen;
- req->base.flags &= ~(CRYPTO_ACOMP_ALLOC_OUTPUT |
- CRYPTO_ACOMP_REQ_SRC_VIRT |
+ req->base.flags &= ~(CRYPTO_ACOMP_REQ_SRC_VIRT |
CRYPTO_ACOMP_REQ_SRC_NONDMA |
CRYPTO_ACOMP_REQ_DST_VIRT |
CRYPTO_ACOMP_REQ_DST_NONDMA);
- if (!req->dst)
- req->base.flags |= CRYPTO_ACOMP_ALLOC_OUTPUT;
}
/**
@@ -403,7 +394,6 @@ static inline void acomp_request_set_dst_dma(struct acomp_req *req,
req->dvirt = dst;
req->dlen = dlen;
- req->base.flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
req->base.flags &= ~CRYPTO_ACOMP_REQ_DST_NONDMA;
req->base.flags |= CRYPTO_ACOMP_REQ_DST_VIRT;
}
@@ -424,7 +414,6 @@ static inline void acomp_request_set_dst_nondma(struct acomp_req *req,
req->dvirt = dst;
req->dlen = dlen;
- req->base.flags &= ~CRYPTO_ACOMP_ALLOC_OUTPUT;
req->base.flags |= CRYPTO_ACOMP_REQ_DST_NONDMA;
req->base.flags |= CRYPTO_ACOMP_REQ_DST_VIRT;
}
diff --git a/include/crypto/internal/scompress.h b/include/crypto/internal/scompress.h
index 88986ab8ce15..f25aa2ea3b48 100644
--- a/include/crypto/internal/scompress.h
+++ b/include/crypto/internal/scompress.h
@@ -12,8 +12,6 @@
#include <crypto/acompress.h>
#include <crypto/algapi.h>
-#define SCOMP_SCRATCH_SIZE 131072
-
struct acomp_req;
struct crypto_scomp {
--
2.39.5
* [v3 PATCH 8/8] crypto: scomp - Add chaining and virtual address support
2025-03-09 2:43 [v3 PATCH 0/8] crypto: acomp - Add request chaining and virtual address support Herbert Xu
` (6 preceding siblings ...)
2025-03-09 2:43 ` [v3 PATCH 7/8] crypto: scomp - Remove support for most non-trivial destination SG lists Herbert Xu
@ 2025-03-09 2:43 ` Herbert Xu
7 siblings, 0 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-09 2:43 UTC (permalink / raw)
To: Linux Crypto Mailing List
Cc: Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
Add chaining and virtual address support to all scomp algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/scompress.c | 68 ++++++++++++++++++++++++++++++++++------------
1 file changed, 50 insertions(+), 18 deletions(-)
diff --git a/crypto/scompress.c b/crypto/scompress.c
index 1f7426c6d85a..c4336151dc84 100644
--- a/crypto/scompress.c
+++ b/crypto/scompress.c
@@ -165,7 +165,8 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
struct scomp_scratch *scratch;
unsigned int slen = req->slen;
unsigned int dlen = req->dlen;
- void *src, *dst;
+ const u8 *src;
+ u8 *dst;
int ret;
if (!req->src || !slen)
@@ -174,28 +175,33 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
if (req->dst && !dlen)
return -EINVAL;
- if (sg_nents(req->dst) > 1)
+ if (acomp_request_dst_isvirt(req))
+ dst = req->dvirt;
+ else if (sg_nents(req->dst) > 1)
return -ENOSYS;
-
- if (req->dst->offset >= PAGE_SIZE)
+ else if (req->dst->offset >= PAGE_SIZE)
return -ENOSYS;
+ else {
+ if (req->dst->offset + dlen > PAGE_SIZE)
+ dlen = PAGE_SIZE - req->dst->offset;
+ dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset;
+ }
- if (req->dst->offset + dlen > PAGE_SIZE)
- dlen = PAGE_SIZE - req->dst->offset;
+ scratch = raw_cpu_ptr(&scomp_scratch);
- if (sg_nents(req->src) == 1 && (!PageHighMem(sg_page(req->src)) ||
- req->src->offset + slen <= PAGE_SIZE))
+ if (acomp_request_src_isvirt(req))
+ src = req->svirt;
+ else if (sg_nents(req->src) == 1 &&
+ (!PageHighMem(sg_page(req->src)) ||
+ req->src->offset + slen <= PAGE_SIZE))
src = kmap_local_page(sg_page(req->src)) + req->src->offset;
else
src = scratch->src;
- dst = kmap_local_page(sg_page(req->dst)) + req->dst->offset;
-
- scratch = raw_cpu_ptr(&scomp_scratch);
spin_lock_bh(&scratch->lock);
if (src == scratch->src)
- memcpy_from_sglist(src, req->src, 0, req->slen);
+ memcpy_from_sglist(scratch->src, req->src, 0, req->slen);
stream = raw_cpu_ptr(crypto_scomp_alg(scomp)->stream);
spin_lock(&stream->lock);
@@ -208,22 +214,39 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
spin_unlock(&stream->lock);
spin_unlock_bh(&scratch->lock);
- if (src != scratch->src)
+ if (!acomp_request_src_isvirt(req) && src != scratch->src)
kunmap_local(src);
- kunmap_local(dst);
- flush_dcache_page(sg_page(req->dst));
+
+ if (!acomp_request_dst_isvirt(req)) {
+ kunmap_local(dst);
+ flush_dcache_page(sg_page(req->dst));
+ }
return ret;
}
+static int scomp_acomp_chain(struct acomp_req *req, int dir)
+{
+ struct acomp_req *r2;
+ int err;
+
+ err = scomp_acomp_comp_decomp(req, dir);
+ req->base.err = err;
+
+ list_for_each_entry(r2, &req->base.list, base.list)
+ r2->base.err = scomp_acomp_comp_decomp(r2, dir);
+
+ return err;
+}
+
static int scomp_acomp_compress(struct acomp_req *req)
{
- return scomp_acomp_comp_decomp(req, 1);
+ return scomp_acomp_chain(req, 1);
}
static int scomp_acomp_decompress(struct acomp_req *req)
{
- return scomp_acomp_comp_decomp(req, 0);
+ return scomp_acomp_chain(req, 0);
}
static void crypto_exit_scomp_ops_async(struct crypto_tfm *tfm)
@@ -284,12 +307,21 @@ static const struct crypto_type crypto_scomp_type = {
.tfmsize = offsetof(struct crypto_scomp, base),
};
-int crypto_register_scomp(struct scomp_alg *alg)
+static void scomp_prepare_alg(struct scomp_alg *alg)
{
struct crypto_alg *base = &alg->calg.base;
comp_prepare_alg(&alg->calg);
+ base->cra_flags |= CRYPTO_ALG_REQ_CHAIN;
+}
+
+int crypto_register_scomp(struct scomp_alg *alg)
+{
+ struct crypto_alg *base = &alg->calg.base;
+
+ scomp_prepare_alg(alg);
+
base->cra_type = &crypto_scomp_type;
base->cra_flags |= CRYPTO_ALG_TYPE_SCOMPRESS;
--
2.39.5
* Re: [v3 PATCH 7/8] crypto: scomp - Remove support for most non-trivial destination SG lists
2025-03-09 2:43 ` [v3 PATCH 7/8] crypto: scomp - Remove support for most non-trivial destination SG lists Herbert Xu
@ 2025-03-10 19:31 ` Dan Carpenter
2025-03-11 3:13 ` Herbert Xu
0 siblings, 1 reply; 23+ messages in thread
From: Dan Carpenter @ 2025-03-10 19:31 UTC (permalink / raw)
To: oe-kbuild, Herbert Xu, Linux Crypto Mailing List
Cc: lkp, oe-kbuild-all, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
Hi Herbert,
kernel test robot noticed the following build warnings:
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Herbert-Xu/crypto-api-Add-cra_type-destroy-hook/20250309-104526
base: https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
patch link: https://lore.kernel.org/r/205f05023b5ff0d8cf7deb6e0a5fbb4643f02e00.1741488107.git.herbert%40gondor.apana.org.au
patch subject: [v3 PATCH 7/8] crypto: scomp - Remove support for most non-trivial destination SG lists
config: um-randconfig-r072-20250310 (https://download.01.org/0day-ci/archive/20250311/202503110237.GjZvyi0K-lkp@intel.com/config)
compiler: clang version 21.0.0git (https://github.com/llvm/llvm-project e15545cad8297ec7555f26e5ae74a9f0511203e7)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
| Closes: https://lore.kernel.org/r/202503110237.GjZvyi0K-lkp@intel.com/
New smatch warnings:
crypto/scompress.c:180 scomp_acomp_comp_decomp() error: we previously assumed 'req->dst' could be null (see line 174)
vim +180 crypto/scompress.c
1ab53a77b772bf7 Giovanni Cabiddu 2016-10-21 159 static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
1ab53a77b772bf7 Giovanni Cabiddu 2016-10-21 160 {
1ab53a77b772bf7 Giovanni Cabiddu 2016-10-21 161 struct crypto_acomp *tfm = crypto_acomp_reqtfm(req);
5b855462cc7e3f3 Herbert Xu 2025-03-09 162 struct crypto_scomp **tfm_ctx = acomp_tfm_ctx(tfm);
1ab53a77b772bf7 Giovanni Cabiddu 2016-10-21 163 struct crypto_scomp *scomp = *tfm_ctx;
e77b9947333baa6 Herbert Xu 2025-03-09 164 struct crypto_acomp_stream *stream;
71052dcf4be70be Sebastian Andrzej Siewior 2019-03-29 165 struct scomp_scratch *scratch;
5b855462cc7e3f3 Herbert Xu 2025-03-09 166 unsigned int slen = req->slen;
5b855462cc7e3f3 Herbert Xu 2025-03-09 167 unsigned int dlen = req->dlen;
77292bb8ca69c80 Barry Song 2024-03-02 168 void *src, *dst;
1ab53a77b772bf7 Giovanni Cabiddu 2016-10-21 169 int ret;
1ab53a77b772bf7 Giovanni Cabiddu 2016-10-21 170
5b855462cc7e3f3 Herbert Xu 2025-03-09 171 if (!req->src || !slen)
71052dcf4be70be Sebastian Andrzej Siewior 2019-03-29 172 return -EINVAL;
1ab53a77b772bf7 Giovanni Cabiddu 2016-10-21 173
5b855462cc7e3f3 Herbert Xu 2025-03-09 @174 if (req->dst && !dlen)
^^^^^^^^
Is this check necessary?
71052dcf4be70be Sebastian Andrzej Siewior 2019-03-29 175 return -EINVAL;
1ab53a77b772bf7 Giovanni Cabiddu 2016-10-21 176
5b855462cc7e3f3 Herbert Xu 2025-03-09 177 if (sg_nents(req->dst) > 1)
5b855462cc7e3f3 Herbert Xu 2025-03-09 178 return -ENOSYS;
1ab53a77b772bf7 Giovanni Cabiddu 2016-10-21 179
5b855462cc7e3f3 Herbert Xu 2025-03-09 @180 if (req->dst->offset >= PAGE_SIZE)
^^^^^^^^^^
Unchecked dereference
5b855462cc7e3f3 Herbert Xu 2025-03-09 181 return -ENOSYS;
744e1885922a994 Chengming Zhou 2023-12-27 182
5b855462cc7e3f3 Herbert Xu 2025-03-09 183 if (req->dst->offset + dlen > PAGE_SIZE)
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [v3 PATCH 7/8] crypto: scomp - Remove support for most non-trivial destination SG lists
2025-03-10 19:31 ` Dan Carpenter
@ 2025-03-11 3:13 ` Herbert Xu
2025-03-11 3:16 ` Herbert Xu
0 siblings, 1 reply; 23+ messages in thread
From: Herbert Xu @ 2025-03-11 3:13 UTC (permalink / raw)
To: Dan Carpenter
Cc: oe-kbuild, Linux Crypto Mailing List, lkp, oe-kbuild-all,
Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
On Mon, Mar 10, 2025 at 10:31:06PM +0300, Dan Carpenter wrote:
>
> New smatch warnings:
> crypto/scompress.c:180 scomp_acomp_comp_decomp() error: we previously assumed 'req->dst' could be null (see line 174)
I think this is a false positive.
> 5b855462cc7e3f3 Herbert Xu 2025-03-09 @174 if (req->dst && !dlen)
> ^^^^^^^^
> Is this check necessary?
This is not trying to catch a null req->dst, but it's trying to
detect a combination of a non-null req->dst with a zero dlen.
A zero dlen is used to allocate req->dst on demand, which would
conflict with a non-null req->dst.
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [v3 PATCH 7/8] crypto: scomp - Remove support for most non-trivial destination SG lists
2025-03-11 3:13 ` Herbert Xu
@ 2025-03-11 3:16 ` Herbert Xu
0 siblings, 0 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-11 3:16 UTC (permalink / raw)
To: Dan Carpenter
Cc: oe-kbuild, Linux Crypto Mailing List, lkp, oe-kbuild-all,
Yosry Ahmed, Kanchana P Sridhar, Sergey Senozhatsky
On Tue, Mar 11, 2025 at 11:13:52AM +0800, Herbert Xu wrote:
>
> > 5b855462cc7e3f3 Herbert Xu 2025-03-09 @174 if (req->dst && !dlen)
> > ^^^^^^^^
> > Is this check necessary?
>
> This is not trying to catch a null req->dst, but it's trying to
> detect an combination of a non-null req->dst with a zero dlen.
>
> A zero dlen is used to allocate req->dst on demand, which would
> conflict with a non-null req->dst.
Actually I take that back. Yes this test should be removed as
it's a remnant of the NULL dst code which has no users.
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [v3 PATCH 3/8] crypto: acomp - Move stream management into scomp layer
2025-03-09 2:43 ` [v3 PATCH 3/8] crypto: acomp - Move stream management into scomp layer Herbert Xu
@ 2025-03-16 4:36 ` Eric Biggers
2025-03-16 4:42 ` Herbert Xu
` (2 more replies)
0 siblings, 3 replies; 23+ messages in thread
From: Eric Biggers @ 2025-03-16 4:36 UTC (permalink / raw)
To: Herbert Xu
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Sun, Mar 09, 2025 at 10:43:17AM +0800, Herbert Xu wrote:
> Rather than allocating the stream memory in the request object,
> move it into a per-cpu buffer managed by scomp. This takes the
> stress off the user from having to manage large request objects
> and setting up their own per-cpu buffers in order to do so.
Well, except the workspace (which you seem to be calling a "stream" for some
reason) size depends heavily on the compression parameters, such as the maximum
input length and compression level. Zstd for example wants 1303288 (comp) +
95944 (decomp) with the parameters the crypto API is currently setting, but only
89848 + 95944 if it's properly configured with estimated_src_size=4096 which is
what most of the users actually want. So making this a per-algorithm property
is insufficiently flexible.
But of course there is also no guarantee that users want it to be per-"tfm"
either, let alone have a full set of per-CPU buffers. FWIW, this series makes
the kernel use an extra 40 MB of memory on my system if I enable
CONFIG_UBIFS_FS, which seems problematic.
I don't think the crypto API model works well for compression at all, TBH.
- Eric
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [v3 PATCH 3/8] crypto: acomp - Move stream management into scomp layer
2025-03-16 4:36 ` Eric Biggers
@ 2025-03-16 4:42 ` Herbert Xu
2025-03-16 4:46 ` Herbert Xu
2025-03-16 4:44 ` Herbert Xu
2025-03-17 8:36 ` Herbert Xu
2 siblings, 1 reply; 23+ messages in thread
From: Herbert Xu @ 2025-03-16 4:42 UTC (permalink / raw)
To: Eric Biggers
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Sat, Mar 15, 2025 at 09:36:31PM -0700, Eric Biggers wrote:
>
> But of course there is also no guarantee that users want it to be per-"tfm"
> either, let alone have a full set of per-CPU buffers. FWIW, this series makes
> the kernel use an extra 40 MB of memory on my system if I enable
> CONFIG_UBIFS_FS, which seems problematic.
The memory is allocated on first use, or am I missing something?
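The allocate-on-first-use pattern under discussion can be sketched in
userspace as follows. This is an illustrative model only, not the kernel
code: `__thread` stands in for per-CPU data, `get_scratch` is a made-up
name, and there is no locking because each thread owns its own slot.

```c
#include <stdlib.h>
#include <stddef.h>

/* Per-thread workspace slot; NULL until the first request for it. */
static __thread void *scratch;

/* Return this thread's workspace, allocating it on first use, so a
 * tfm that is allocated but never exercised costs no workspace memory. */
void *get_scratch(size_t size)
{
	if (!scratch)
		scratch = malloc(size);
	return scratch;
}
```

Under this scheme the cost Eric measured would only be paid once a
compressor is actually used, which is why the unconditional allocation
in ubifs (noted below in the thread) matters.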
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [v3 PATCH 3/8] crypto: acomp - Move stream management into scomp layer
2025-03-16 4:36 ` Eric Biggers
2025-03-16 4:42 ` Herbert Xu
@ 2025-03-16 4:44 ` Herbert Xu
2025-03-17 8:36 ` Herbert Xu
2 siblings, 0 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-16 4:44 UTC (permalink / raw)
To: Eric Biggers
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Sat, Mar 15, 2025 at 09:36:31PM -0700, Eric Biggers wrote:
>
> Well, except the workspace (which you seem to be calling a "stream" for some
> reason) size depends heavily on the compression parameters, such as the maximum
> input length and compression level. Zstd for example wants 1303288 (comp) +
> 95944 (decomp) with the parameters the crypto API is currently setting, but only
> 89848 + 95944 if it's properly configured with estimated_src_size=4096 which is
> what most of the users actually want. So making this a per-algorithm property
> is insufficiently flexible.
We don't support parameters yet but yes this is something that will
have to be addressed when we add parameter support. Per-tfm for
non-standard parameters is probably the best bet in this case.
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [v3 PATCH 3/8] crypto: acomp - Move stream management into scomp layer
2025-03-16 4:42 ` Herbert Xu
@ 2025-03-16 4:46 ` Herbert Xu
0 siblings, 0 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-16 4:46 UTC (permalink / raw)
To: Eric Biggers
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Sun, Mar 16, 2025 at 12:42:48PM +0800, Herbert Xu wrote:
> On Sat, Mar 15, 2025 at 09:36:31PM -0700, Eric Biggers wrote:
> >
> > But of course there is also no guarantee that users want it to be per-"tfm"
> > either, let alone have a full set of per-CPU buffers. FWIW, this series makes
> > the kernel use an extra 40 MB of memory on my system if I enable
> > CONFIG_UBIFS_FS, which seems problematic.
>
> The memory is allocated on first use, or am I missing something?
Oh yes, ubifs is unconditionally allocating a compressor; that should
be fixed.
Thanks,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses
2025-03-09 2:43 ` [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses Herbert Xu
@ 2025-03-16 4:49 ` Eric Biggers
2025-03-16 5:43 ` Herbert Xu
2025-03-20 17:24 ` Cabiddu, Giovanni
1 sibling, 1 reply; 23+ messages in thread
From: Eric Biggers @ 2025-03-16 4:49 UTC (permalink / raw)
To: Herbert Xu
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Sun, Mar 09, 2025 at 10:43:21AM +0800, Herbert Xu wrote:
> This adds request chaining
As I've said before, this would be much better handled by a function that
explicitly takes multiple buffers, rather than changing the whole API to make
every request ambiguously actually be a whole list of requests (whose behavior
also differs from submitting them individually in undocumented ways).
And again, as usual for your submissions, this has no tests or documentation.
- Eric
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses
2025-03-16 4:49 ` Eric Biggers
@ 2025-03-16 5:43 ` Herbert Xu
2025-03-16 6:50 ` Eric Biggers
0 siblings, 1 reply; 23+ messages in thread
From: Herbert Xu @ 2025-03-16 5:43 UTC (permalink / raw)
To: Eric Biggers
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Sat, Mar 15, 2025 at 09:49:37PM -0700, Eric Biggers wrote:
>
> As I've said before, this would be much better handled by a function that
> explicitly takes multiple buffers, rather than changing the whole API to make
> every request ambiguously actually be a whole list of requests (whose behavior
> also differs from submitting them individually in undocumented ways).
This is exactly how we handle GSO in the network stack. It's
always an sk_buff regardless of whether it's a batch or a single
one.
In fact I will be using that to handle GSO over IPsec so the
array-based interface that you proposed simply does not fit.
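The shape being debated can be sketched in userspace C (a hypothetical
model, not the real kernel structs or API): each request carries a link,
so a batch is just a list hanging off the head request and a single
request is a chain of length one, the same way GSO treats an sk_buff.

```c
#include <stddef.h>

/* Illustrative stand-in for a chained request. */
struct req {
	const char *data;	/* payload for this request */
	int err;		/* per-request completion status */
	struct req *next;	/* NULL for an unchained (single) request */
};

/* One entry point serves both cases: walk the chain from the head,
 * completing each request in turn.  Returns the number processed. */
int submit(struct req *head)
{
	int n = 0;

	for (struct req *r = head; r; r = r->next) {
		r->err = 0;	/* "process" this request */
		n++;
	}
	return n;
}
```

An explicit multi-buffer interface would instead look like
`submit(struct req **reqs, int n)`: the list form lets the same call
sites handle a batch and a single request, while Eric's objection is
that it hides the batch inside what looks like one request.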
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses
2025-03-16 5:43 ` Herbert Xu
@ 2025-03-16 6:50 ` Eric Biggers
0 siblings, 0 replies; 23+ messages in thread
From: Eric Biggers @ 2025-03-16 6:50 UTC (permalink / raw)
To: Herbert Xu
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Sun, Mar 16, 2025 at 01:43:49PM +0800, Herbert Xu wrote:
> On Sat, Mar 15, 2025 at 09:49:37PM -0700, Eric Biggers wrote:
> >
> > As I've said before, this would be much better handled by a function that
> > explicitly takes multiple buffers, rather than changing the whole API to make
> > every request ambiguously actually be a whole list of requests (whose behavior
> > also differs from submitting them individually in undocumented ways).
>
> This is exactly how we handle GSO in the network stack. It's
> always an sk_buff regardless of whether it's a batch or a single
> one.
We don't have to make the same mistakes again.
> In fact I will be using that to handle GSO over IPsec so the
> array-based interface that you proposed simply does not fit.
IPsec doesn't use compression, so I guess your comment above is about hashing.
But you are *still* ignoring all the reasons why it isn't useful for IPsec.
Nacked-by: Eric Biggers <ebiggers@kernel.org>
- Eric
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [v3 PATCH 3/8] crypto: acomp - Move stream management into scomp layer
2025-03-16 4:36 ` Eric Biggers
2025-03-16 4:42 ` Herbert Xu
2025-03-16 4:44 ` Herbert Xu
@ 2025-03-17 8:36 ` Herbert Xu
2 siblings, 0 replies; 23+ messages in thread
From: Herbert Xu @ 2025-03-17 8:36 UTC (permalink / raw)
To: Eric Biggers
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Sat, Mar 15, 2025 at 09:36:31PM -0700, Eric Biggers wrote:
>
> either, let alone have a full set of per-CPU buffers. FWIW, this series makes
> the kernel use an extra 40 MB of memory on my system if I enable
> CONFIG_UBIFS_FS, which seems problematic.
This patch series should resolve the problems for ubifs, assuming
that you don't actually mount anything of course:
https://lore.kernel.org/linux-crypto/cover.1742200161.git.herbert@gondor.apana.org.au/T/#t
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses
2025-03-09 2:43 ` [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses Herbert Xu
2025-03-16 4:49 ` Eric Biggers
@ 2025-03-20 17:24 ` Cabiddu, Giovanni
2025-03-21 2:33 ` Herbert Xu
1 sibling, 1 reply; 23+ messages in thread
From: Cabiddu, Giovanni @ 2025-03-20 17:24 UTC (permalink / raw)
To: Herbert Xu
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Sun, Mar 09, 2025 at 10:43:21AM +0800, Herbert Xu wrote:
> This adds request chaining and virtual address support to the
> acomp interface.
>
> It is identical to the ahash interface, except that a new flag
> CRYPTO_ACOMP_REQ_NONDMA has been added to indicate that the
> virtual addresses are not suitable for DMA. This is because
> all existing and potential acomp users can provide memory that
> is suitable for DMA so there is no need for a fall-back copy
> path.
...
> +static int acomp_reqchain_finish(struct acomp_req_chain *state,
> + int err, u32 mask)
> +{
> + struct acomp_req *req0 = state->req0;
> + struct acomp_req *req = state->cur;
> + struct acomp_req *n;
> +
> + acomp_reqchain_virt(state, err);
> +
> + if (req != req0)
I'm hitting a NULL pointer dereference at this point as req0 is NULL.
I'm using b67a02600372 ("crypto: acomp - Add request chaining and
virtual addresses") from your tree.
I wrote a simple test that chains a bunch of requests, following the
same pattern used in tcrypt for ahash.
Here is how I'm using the API at a high level. For now, I'm using the
software implementation of deflate (deflate-scomp):
tfm = crypto_alloc_acomp("deflate", 0, 0);
req0 = acomp_request_alloc(tfm);
req1 = acomp_request_alloc(tfm);
req2 = acomp_request_alloc(tfm);
acomp_request_set_params(req0, ...);
acomp_request_set_params(req1, ...);
acomp_request_set_params(req2, ...);
acomp_request_set_callback(req0, 0, crypto_req_done, &wait);
acomp_request_set_callback(req1, 0, NULL, NULL);
acomp_request_set_callback(req2, 0, NULL, NULL);
head = req0;
acomp_request_chain(req1, head);
acomp_request_chain(req2, head);
ret = crypto_acomp_compress(req0);
...
Do you see anything wrong?
Do you have any documentation or a sample showing how to use these APIs?
Thanks,
--
Giovanni
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses
2025-03-20 17:24 ` Cabiddu, Giovanni
@ 2025-03-21 2:33 ` Herbert Xu
2025-03-24 9:39 ` Cabiddu, Giovanni
0 siblings, 1 reply; 23+ messages in thread
From: Herbert Xu @ 2025-03-21 2:33 UTC (permalink / raw)
To: Cabiddu, Giovanni
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Thu, Mar 20, 2025 at 05:24:57PM +0000, Cabiddu, Giovanni wrote:
>
> > +static int acomp_reqchain_finish(struct acomp_req_chain *state,
> > + int err, u32 mask)
> > +{
> > + struct acomp_req *req0 = state->req0;
> > + struct acomp_req *req = state->cur;
> > + struct acomp_req *n;
> > +
> > + acomp_reqchain_virt(state, err);
> > +
> > + if (req != req0)
> I'm hitting a NULL pointer dereference at this point as req0 is NULL.
Yes, sorry, my testing was screwed up. When I thought I was testing
the chaining fallback path, I was actually testing the shash
fall-through path.
I will resend these patches.
Thanks,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses
2025-03-21 2:33 ` Herbert Xu
@ 2025-03-24 9:39 ` Cabiddu, Giovanni
0 siblings, 0 replies; 23+ messages in thread
From: Cabiddu, Giovanni @ 2025-03-24 9:39 UTC (permalink / raw)
To: Herbert Xu
Cc: Linux Crypto Mailing List, Yosry Ahmed, Kanchana P Sridhar,
Sergey Senozhatsky
On Fri, Mar 21, 2025 at 10:33:36AM +0800, Herbert Xu wrote:
> On Thu, Mar 20, 2025 at 05:24:57PM +0000, Cabiddu, Giovanni wrote:
> >
> > > +static int acomp_reqchain_finish(struct acomp_req_chain *state,
> > > + int err, u32 mask)
> > > +{
> > > + struct acomp_req *req0 = state->req0;
> > > + struct acomp_req *req = state->cur;
> > > + struct acomp_req *n;
> > > +
> > > + acomp_reqchain_virt(state, err);
> > > +
> > > + if (req != req0)
> > I'm hitting a NULL pointer dereference at this point as req0 is NULL.
>
> Yes sorry, my testing was screwed up. When I thought I was testing
> the chaining fallback path I was actually testing the shash
> fall-through path.
No problem. The fix that you sent, "Fix synchronous fallback path for
request chaining", resolved the issue.
Thanks,
--
Giovanni
^ permalink raw reply [flat|nested] 23+ messages in thread
end of thread, other threads:[~2025-03-24 9:40 UTC | newest]
Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-03-09 2:43 [v3 PATCH 0/8] crypto: acomp - Add request chaining and virtual address support Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 1/8] crypto: api - Add cra_type->destroy hook Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 2/8] crypto: scomp - Remove tfm argument from alloc/free_ctx Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 3/8] crypto: acomp - Move stream management into scomp layer Herbert Xu
2025-03-16 4:36 ` Eric Biggers
2025-03-16 4:42 ` Herbert Xu
2025-03-16 4:46 ` Herbert Xu
2025-03-16 4:44 ` Herbert Xu
2025-03-17 8:36 ` Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 4/8] crypto: scomp - Disable BH when taking per-cpu spin lock Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 5/8] crypto: acomp - Add request chaining and virtual addresses Herbert Xu
2025-03-16 4:49 ` Eric Biggers
2025-03-16 5:43 ` Herbert Xu
2025-03-16 6:50 ` Eric Biggers
2025-03-20 17:24 ` Cabiddu, Giovanni
2025-03-21 2:33 ` Herbert Xu
2025-03-24 9:39 ` Cabiddu, Giovanni
2025-03-09 2:43 ` [v3 PATCH 6/8] crypto: testmgr - Remove NULL dst acomp tests Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 7/8] crypto: scomp - Remove support for most non-trivial destination SG lists Herbert Xu
2025-03-10 19:31 ` Dan Carpenter
2025-03-11 3:13 ` Herbert Xu
2025-03-11 3:16 ` Herbert Xu
2025-03-09 2:43 ` [v3 PATCH 8/8] crypto: scomp - Add chaining and virtual address support Herbert Xu
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox