* [PATCH RESEND 0/2] crypto: algif - change algif_skcipher to be asynchronous
@ 2015-02-27 19:35 Tadeusz Struk
2015-02-27 19:35 ` [PATCH RESEND 1/2] crypto: af_alg - Allow to link sgl Tadeusz Struk
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Tadeusz Struk @ 2015-02-27 19:35 UTC (permalink / raw)
To: herbert; +Cc: davem, linux-crypto, qat-linux
Currently algif_skcipher works as follows: on sendmsg/sendpage it builds
an sgl for the input data, and then on read/recvmsg it submits the job for
encryption, putting the user to sleep until the data is processed.
As a result it can only handle one job at a time.
To fully utilize the potential of existing crypto hardware accelerators
it is necessary to submit multiple jobs asynchronously.
This series adds support for asynchronous operations to algif_skcipher.
The first patch enables af_alg sgls to be linked.
The second patch implements asynchronous reads for skcipher.
Resend with updates for 4.0
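For illustration, a minimal user-space sketch of how a client might drive the
new path with Linux AIO (libaio). The AF_ALG setup is the standard skcipher
one; the "cbc(aes)" choice, the 16-byte key/IV and the buffer sizes are
arbitrary demo values, and error handling is omitted. Whether the read
actually completes asynchronously depends on this series being applied:

/*
 * Sketch only -- standard AF_ALG skcipher setup plus an AIO read.
 * Error handling omitted; key, IV and sizes are demo values.
 */
#include <libaio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "skcipher",
		.salg_name   = "cbc(aes)",
	};
	char key[16] = { 0 }, in[4096] = { 0 }, out[4096];
	char cbuf[CMSG_SPACE(4) +
		  CMSG_SPACE(sizeof(struct af_alg_iv) + 16)] = { 0 };
	struct iovec iov = { .iov_base = in, .iov_len = sizeof(in) };
	struct msghdr msg = { 0 };
	struct cmsghdr *cmsg;
	struct af_alg_iv *iv;
	io_context_t aio_ctx = 0;
	struct iocb cb, *cbp = &cb;
	struct io_event ev;
	int tfmfd, opfd;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	opfd = accept(tfmfd, NULL, 0);

	/* Queue the plaintext with ALG_SET_OP/ALG_SET_IV control data. */
	msg.msg_control = cbuf;
	msg.msg_controllen = sizeof(cbuf);
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_OP;
	cmsg->cmsg_len = CMSG_LEN(4);
	*(__u32 *)CMSG_DATA(cmsg) = ALG_OP_ENCRYPT;
	cmsg = CMSG_NXTHDR(&msg, cmsg);
	cmsg->cmsg_level = SOL_ALG;
	cmsg->cmsg_type = ALG_SET_IV;
	cmsg->cmsg_len = CMSG_LEN(sizeof(*iv) + 16);
	iv = (struct af_alg_iv *)CMSG_DATA(cmsg);
	iv->ivlen = 16;
	memset(iv->iv, 0, 16);
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	sendmsg(opfd, &msg, 0);

	/* Instead of a blocking read(), submit the ciphertext read as AIO;
	 * with this series recvmsg can return -EIOCBQUEUED and complete
	 * from the driver callback. */
	io_setup(8, &aio_ctx);
	io_prep_pread(&cb, opfd, out, sizeof(out), 0);
	io_submit(aio_ctx, 1, &cbp);
	io_getevents(aio_ctx, 1, 1, &ev, NULL);
	io_destroy(aio_ctx);
	close(opfd);
	close(tfmfd);
	return 0;
}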
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
---
Tadeusz Struk (2):
crypto: af_alg - Allow to link sgl
crypto: algif - change algif_skcipher to be asynchronous
crypto/af_alg.c | 18 ++-
crypto/algif_skcipher.c | 308 ++++++++++++++++++++++++++++++++++++++++++++++-
include/crypto/if_alg.h | 4 -
3 files changed, 317 insertions(+), 13 deletions(-)
* [PATCH RESEND 1/2] crypto: af_alg - Allow to link sgl
2015-02-27 19:35 [PATCH RESEND 0/2] crypto: algif - change algif_skcipher to be asynchronous Tadeusz Struk
@ 2015-02-27 19:35 ` Tadeusz Struk
2015-02-27 19:35 ` [PATCH RESEND 2/2] crypto: algif - change algif_skcipher to be asynchronous Tadeusz Struk
2015-03-08 11:20 ` [PATCH RESEND 0/2] " Stephan Mueller
2 siblings, 0 replies; 9+ messages in thread
From: Tadeusz Struk @ 2015-02-27 19:35 UTC (permalink / raw)
To: herbert; +Cc: davem, linux-crypto, qat-linux
Allow af_alg sgls to be linked.
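A hypothetical caller, just to illustrate the intended use (this helper and
the iter/len parameters are made up for the example; it is not part of the
patch):

/* Build two sgls from two iov_iters and chain them so a driver can walk
 * them as one scatterlist.  af_alg_make_sg() now reserves one extra slot
 * past npages, which af_alg_link_sg() uses for the chain entry. */
static int build_and_link(struct af_alg_sgl *first, struct af_alg_sgl *second,
			  struct iov_iter *iter1, int len1,
			  struct iov_iter *iter2, int len2)
{
	int n;

	n = af_alg_make_sg(first, iter1, len1);
	if (n < 0)
		return n;
	n = af_alg_make_sg(second, iter2, len2);
	if (n < 0) {
		af_alg_free_sg(first);
		return n;
	}
	/* Unmarks first's end entry and chains via the spare slot. */
	af_alg_link_sg(first, second);
	return 0;
}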
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
---
crypto/af_alg.c | 18 +++++++++++++-----
include/crypto/if_alg.h | 4 +++-
2 files changed, 16 insertions(+), 6 deletions(-)
diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 7f8b7edc..26089d1 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -358,8 +358,8 @@ int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len)
npages = (off + n + PAGE_SIZE - 1) >> PAGE_SHIFT;
if (WARN_ON(npages == 0))
return -EINVAL;
-
- sg_init_table(sgl->sg, npages);
+ /* Add one extra for linking */
+ sg_init_table(sgl->sg, npages + 1);
for (i = 0, len = n; i < npages; i++) {
int plen = min_t(int, len, PAGE_SIZE - off);
@@ -369,18 +369,26 @@ int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len)
off = 0;
len -= plen;
}
+ sg_mark_end(sgl->sg + npages - 1);
+ sgl->npages = npages;
+
return n;
}
EXPORT_SYMBOL_GPL(af_alg_make_sg);
+void af_alg_link_sg(struct af_alg_sgl *sgl_prev, struct af_alg_sgl *sgl_new)
+{
+ sg_unmark_end(sgl_prev->sg + sgl_prev->npages - 1);
+ sg_chain(sgl_prev->sg, sgl_prev->npages + 1, sgl_new->sg);
+}
+EXPORT_SYMBOL(af_alg_link_sg);
+
void af_alg_free_sg(struct af_alg_sgl *sgl)
{
int i;
- i = 0;
- do {
+ for (i = 0; i < sgl->npages; i++)
put_page(sgl->pages[i]);
- } while (!sg_is_last(sgl->sg + (i++)));
}
EXPORT_SYMBOL_GPL(af_alg_free_sg);
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 178525e..018afb2 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -58,8 +58,9 @@ struct af_alg_type {
};
struct af_alg_sgl {
- struct scatterlist sg[ALG_MAX_PAGES];
+ struct scatterlist sg[ALG_MAX_PAGES + 1];
struct page *pages[ALG_MAX_PAGES];
+ unsigned int npages;
};
int af_alg_register_type(const struct af_alg_type *type);
@@ -70,6 +71,7 @@ int af_alg_accept(struct sock *sk, struct socket *newsock);
int af_alg_make_sg(struct af_alg_sgl *sgl, struct iov_iter *iter, int len);
void af_alg_free_sg(struct af_alg_sgl *sgl);
+void af_alg_link_sg(struct af_alg_sgl *sgl_prev, struct af_alg_sgl *sgl_new);
int af_alg_cmsg_send(struct msghdr *msg, struct af_alg_control *con);
* [PATCH RESEND 2/2] crypto: algif - change algif_skcipher to be asynchronous
2015-02-27 19:35 [PATCH RESEND 0/2] crypto: algif - change algif_skcipher to be asynchronous Tadeusz Struk
2015-02-27 19:35 ` [PATCH RESEND 1/2] crypto: af_alg - Allow to link sgl Tadeusz Struk
@ 2015-02-27 19:35 ` Tadeusz Struk
2015-03-06 11:44 ` Herbert Xu
2015-03-08 11:20 ` [PATCH RESEND 0/2] " Stephan Mueller
2 siblings, 1 reply; 9+ messages in thread
From: Tadeusz Struk @ 2015-02-27 19:35 UTC (permalink / raw)
To: herbert; +Cc: davem, linux-crypto, qat-linux
Currently algif_skcipher works as follows: on sendmsg/sendpage it builds
an sgl for the input data, and then on read/recvmsg it submits the job for
encryption, putting the user to sleep until the data is processed.
As a result it can only handle one job at a time.
This patch makes it asynchronous by adding AIO support.
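As a reading aid, this is the single-allocation layout implied by the
GET_REQ_SIZE/GET_IV_SIZE/GET_SREQ macros below -- a sketch derived from the
patch as posted, not an authoritative statement about the final code:

/*
 * Each object from ctx->pool is len bytes, where
 *   len = GET_REQ_SIZE(ctx) + sizeof(struct skcipher_async_req)
 *       + GET_IV_SIZE(ctx)
 *
 *   req ------------> +------------------------------+
 *                     | request area                 |  GET_REQ_SIZE(ctx)
 *   GET_SREQ(req) --> +------------------------------+      bytes
 *                     | struct skcipher_async_req    |
 *                     +------------------------------+
 *                     | IV copy (sreq->iv[], filled  |  GET_IV_SIZE(ctx)
 *                     | from ctx->iv in recvmsg)     |      bytes
 *                     +------------------------------+
 */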
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
---
crypto/algif_skcipher.c | 308 ++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 301 insertions(+), 7 deletions(-)
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 0c8a1e5..d035d10 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -19,6 +19,7 @@
#include <linux/list.h>
#include <linux/kernel.h>
#include <linux/mm.h>
+#include <linux/mempool.h>
#include <linux/module.h>
#include <linux/net.h>
#include <net/sock.h>
@@ -39,6 +40,9 @@ struct skcipher_ctx {
struct af_alg_completion completion;
+ struct kmem_cache *cache;
+ mempool_t *pool;
+ atomic_t inflight;
unsigned used;
unsigned int len;
@@ -49,9 +53,135 @@ struct skcipher_ctx {
struct ablkcipher_request req;
};
+struct skcipher_async_rsgl {
+ struct af_alg_sgl sgl;
+ struct list_head list;
+};
+
+struct skcipher_async_req {
+ struct kiocb *iocb;
+ struct skcipher_async_rsgl first_sgl;
+ struct list_head list;
+ struct scatterlist *tsg;
+ char iv[];
+};
+
+#define GET_SREQ(areq, ctx) (struct skcipher_async_req *)((char *)areq + \
+ crypto_ablkcipher_reqsize(crypto_ablkcipher_reqtfm(&ctx->req)))
+
+#define GET_REQ_SIZE(ctx) \
+ crypto_ablkcipher_reqsize(crypto_ablkcipher_reqtfm(&ctx->req))
+
+#define GET_IV_SIZE(ctx) \
+ crypto_ablkcipher_ivsize(crypto_ablkcipher_reqtfm(&ctx->req))
+
#define MAX_SGL_ENTS ((4096 - sizeof(struct skcipher_sg_list)) / \
sizeof(struct scatterlist) - 1)
+static void skcipher_free_async_sgls(struct skcipher_async_req *sreq)
+{
+ struct skcipher_async_rsgl *rsgl, *tmp;
+ struct scatterlist *sgl;
+ struct scatterlist *sg;
+ int i, n;
+
+ list_for_each_entry_safe(rsgl, tmp, &sreq->list, list) {
+ af_alg_free_sg(&rsgl->sgl);
+ if (rsgl != &sreq->first_sgl)
+ kfree(rsgl);
+ }
+ sgl = sreq->tsg;
+ n = sg_nents(sgl);
+ for_each_sg(sgl, sg, n, i)
+ put_page(sg_page(sg));
+
+ kfree(sreq->tsg);
+}
+
+static void skcipher_async_cb(struct crypto_async_request *req, int err)
+{
+ struct sock *sk = req->data;
+ struct alg_sock *ask = alg_sk(sk);
+ struct skcipher_ctx *ctx = ask->private;
+ struct skcipher_async_req *sreq = GET_SREQ(req, ctx);
+ struct kiocb *iocb = sreq->iocb;
+
+ atomic_dec(&ctx->inflight);
+ skcipher_free_async_sgls(sreq);
+ mempool_free(req, ctx->pool);
+ aio_complete(iocb, err, err);
+}
+
+static void skcipher_mempool_free(void *_req, void *_sk)
+{
+ struct sock *sk = _sk;
+ struct alg_sock *ask = alg_sk(sk);
+ struct skcipher_ctx *ctx = ask->private;
+ struct kmem_cache *cache = ctx->cache;
+
+ kmem_cache_free(cache, _req);
+}
+
+static void *skcipher_mempool_alloc(gfp_t gfp_mask, void *_sk)
+{
+ struct sock *sk = _sk;
+ struct alg_sock *ask = alg_sk(sk);
+ struct skcipher_ctx *ctx = ask->private;
+ struct kmem_cache *cache = ctx->cache;
+ struct ablkcipher_request *req;
+
+ req = kmem_cache_alloc(cache, gfp_mask);
+ if (req) {
+ ablkcipher_request_set_tfm(req,
+ crypto_ablkcipher_reqtfm(&ctx->req));
+ ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ skcipher_async_cb, sk);
+ }
+ return req;
+}
+
+static void skcipher_cache_constructor(void *v)
+{
+ memset(v, 0, sizeof(struct skcipher_async_req));
+}
+
+static int skcipher_mempool_create(struct sock *sk)
+{
+ struct alg_sock *ask = alg_sk(sk);
+ struct skcipher_ctx *ctx = ask->private;
+ unsigned int len = sizeof(struct skcipher_async_req) +
+ GET_REQ_SIZE(ctx) + GET_IV_SIZE(ctx);
+ char buf[32];
+
+ snprintf(buf, sizeof(buf), "skcipher_%p", ctx);
+ ctx->cache = kmem_cache_create(buf, len, 0, SLAB_HWCACHE_ALIGN |
+ SLAB_TEMPORARY,
+ skcipher_cache_constructor);
+ if (unlikely(!ctx->cache))
+ return -ENOMEM;
+
+ ctx->pool = mempool_create(128, skcipher_mempool_alloc,
+ skcipher_mempool_free, sk);
+
+ if (unlikely(!ctx->pool)) {
+ kmem_cache_destroy(ctx->cache);
+ return -ENOMEM;
+ }
+ return 0;
+}
+
+static void skcipher_mempool_destroy(struct skcipher_ctx *ctx)
+{
+ if (ctx->pool)
+ mempool_destroy(ctx->pool);
+
+ if (ctx->cache)
+ kmem_cache_destroy(ctx->cache);
+
+ ctx->cache = NULL;
+ ctx->pool = NULL;
+}
+
static inline int skcipher_sndbuf(struct sock *sk)
{
struct alg_sock *ask = alg_sk(sk);
@@ -96,7 +226,7 @@ static int skcipher_alloc_sgl(struct sock *sk)
return 0;
}
-static void skcipher_pull_sgl(struct sock *sk, int used)
+static void skcipher_pull_sgl(struct sock *sk, int used, int put)
{
struct alg_sock *ask = alg_sk(sk);
struct skcipher_ctx *ctx = ask->private;
@@ -123,8 +253,8 @@ static void skcipher_pull_sgl(struct sock *sk, int used)
if (sg[i].length)
return;
-
- put_page(sg_page(sg + i));
+ if (put)
+ put_page(sg_page(sg + i));
sg_assign_page(sg + i, NULL);
}
@@ -143,7 +273,7 @@ static void skcipher_free_sgl(struct sock *sk)
struct alg_sock *ask = alg_sk(sk);
struct skcipher_ctx *ctx = ask->private;
- skcipher_pull_sgl(sk, ctx->used);
+ skcipher_pull_sgl(sk, ctx->used, 1);
}
static int skcipher_wait_for_wmem(struct sock *sk, unsigned flags)
@@ -424,8 +554,144 @@ unlock:
return err ?: size;
}
-static int skcipher_recvmsg(struct kiocb *unused, struct socket *sock,
- struct msghdr *msg, size_t ignored, int flags)
+static int skcipher_all_sg_nents(struct skcipher_ctx *ctx)
+{
+ struct skcipher_sg_list *sgl;
+ struct scatterlist *sg;
+ int nents = 0;
+
+ list_for_each_entry(sgl, &ctx->tsgl, list) {
+ sg = sgl->sg;
+
+ while (!sg->length)
+ sg++;
+
+ nents += sg_nents(sg);
+ }
+ return nents;
+}
+
+static int skcipher_recvmsg_async(struct kiocb *iocb, struct socket *sock,
+ struct msghdr *msg, int flags)
+{
+ struct sock *sk = sock->sk;
+ struct alg_sock *ask = alg_sk(sk);
+ struct skcipher_ctx *ctx = ask->private;
+ struct skcipher_sg_list *sgl;
+ struct scatterlist *sg;
+ struct skcipher_async_req *sreq;
+ struct ablkcipher_request *req;
+ struct skcipher_async_rsgl *last_rsgl = NULL;
+ unsigned int len = 0, tx_nents = skcipher_all_sg_nents(ctx);
+ int i = 0;
+ int err = -ENOMEM;
+
+ lock_sock(sk);
+ req = mempool_alloc(ctx->pool, GFP_KERNEL);
+ if (unlikely(!req))
+ goto unlock;
+
+ sreq = GET_SREQ(req, ctx);
+ sreq->iocb = iocb;
+ INIT_LIST_HEAD(&sreq->list);
+ memcpy(sreq->iv, ctx->iv, GET_IV_SIZE(ctx));
+ sreq->tsg = kcalloc(tx_nents, sizeof(*sg), GFP_KERNEL);
+
+ if (!sreq->tsg) {
+ mempool_free(req, ctx->pool);
+ goto unlock;
+ }
+ sg_init_table(sreq->tsg, tx_nents);
+
+ while (iov_iter_count(&msg->msg_iter)) {
+ struct skcipher_async_rsgl *rsgl;
+ unsigned long used;
+
+ if (!ctx->used) {
+ err = skcipher_wait_for_data(sk, flags);
+ if (err)
+ goto free;
+ }
+ sgl = list_first_entry(&ctx->tsgl,
+ struct skcipher_sg_list, list);
+ sg = sgl->sg;
+
+ while (!sg->length)
+ sg++;
+
+ used = min_t(unsigned long, ctx->used,
+ iov_iter_count(&msg->msg_iter));
+ used = min_t(unsigned long, used, sg->length);
+
+ if (i == tx_nents) {
+ struct scatterlist *tmp;
+ int x;
+ /* Ran out of tx slots in async request
+ * need to expand */
+ tmp = kcalloc(tx_nents * 2, sizeof(*tmp),
+ GFP_KERNEL);
+ if (!tmp)
+ goto free;
+
+ sg_init_table(tmp, tx_nents * 2);
+ for (x = 0; x < tx_nents; x++)
+ sg_set_page(&tmp[x], sg_page(&sreq->tsg[x]),
+ sreq->tsg[x].length,
+ sreq->tsg[x].offset);
+ kfree(sreq->tsg);
+ sreq->tsg = tmp;
+ tx_nents *= 2;
+ }
+ /* Need to take over the tx sgl from ctx
+ * to the asynch req - these sgls will be freed later */
+ sg_set_page(sreq->tsg + i++, sg_page(sg), sg->length,
+ sg->offset);
+
+ if (list_empty(&sreq->list)) {
+ rsgl = &sreq->first_sgl;
+ list_add_tail(&rsgl->list, &sreq->list);
+ } else {
+ rsgl = kzalloc(sizeof(*rsgl), GFP_KERNEL);
+ if (!rsgl) {
+ err = -ENOMEM;
+ goto free;
+ }
+ list_add_tail(&rsgl->list, &sreq->list);
+ }
+
+ used = af_alg_make_sg(&rsgl->sgl, &msg->msg_iter, used);
+ err = used;
+ if (used < 0)
+ goto free;
+ if (last_rsgl)
+ af_alg_link_sg(&last_rsgl->sgl, &rsgl->sgl);
+
+ last_rsgl = rsgl;
+ len += used;
+ skcipher_pull_sgl(sk, used, 0);
+ iov_iter_advance(&msg->msg_iter, used);
+ }
+
+ ablkcipher_request_set_crypt(req, sreq->tsg, sreq->first_sgl.sgl.sg,
+ len, sreq->iv);
+ err = ctx->enc ? crypto_ablkcipher_encrypt(req) :
+ crypto_ablkcipher_decrypt(req);
+ if (err == -EINPROGRESS) {
+ atomic_inc(&ctx->inflight);
+ err = -EIOCBQUEUED;
+ goto unlock;
+ }
+free:
+ skcipher_free_async_sgls(sreq);
+ mempool_free(req, ctx->pool);
+unlock:
+ skcipher_wmem_wakeup(sk);
+ release_sock(sk);
+ return err;
+}
+
+static int skcipher_recvmsg_sync(struct socket *sock, struct msghdr *msg,
+ int flags)
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
@@ -484,7 +750,7 @@ free:
goto unlock;
copied += used;
- skcipher_pull_sgl(sk, used);
+ skcipher_pull_sgl(sk, used, 1);
iov_iter_advance(&msg->msg_iter, used);
}
@@ -497,6 +763,13 @@ unlock:
return copied ?: err;
}
+static int skcipher_recvmsg(struct kiocb *iocb, struct socket *sock,
+ struct msghdr *msg, size_t ignored, int flags)
+{
+ return is_sync_kiocb(iocb) ?
+ skcipher_recvmsg_sync(sock, msg, flags) :
+ skcipher_recvmsg_async(iocb, sock, msg, flags);
+}
static unsigned int skcipher_poll(struct file *file, struct socket *sock,
poll_table *wait)
@@ -555,12 +828,26 @@ static int skcipher_setkey(void *private, const u8 *key, unsigned int keylen)
return crypto_ablkcipher_setkey(private, key, keylen);
}
+static void skcipher_wait(struct sock *sk)
+{
+ struct alg_sock *ask = alg_sk(sk);
+ struct skcipher_ctx *ctx = ask->private;
+ int ctr = 0;
+
+ while (atomic_read(&ctx->inflight) && ctr++ < 100)
+ msleep(100);
+}
+
static void skcipher_sock_destruct(struct sock *sk)
{
struct alg_sock *ask = alg_sk(sk);
struct skcipher_ctx *ctx = ask->private;
struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(&ctx->req);
+ if (atomic_read(&ctx->inflight))
+ skcipher_wait(sk);
+
+ skcipher_mempool_destroy(ctx);
skcipher_free_sgl(sk);
sock_kzfree_s(sk, ctx->iv, crypto_ablkcipher_ivsize(tfm));
sock_kfree_s(sk, ctx, ctx->len);
@@ -592,6 +879,7 @@ static int skcipher_accept_parent(void *private, struct sock *sk)
ctx->more = 0;
ctx->merge = 0;
ctx->enc = 0;
+ atomic_set(&ctx->inflight, 0);
af_alg_init_completion(&ctx->completion);
ask->private = ctx;
@@ -600,6 +888,12 @@ static int skcipher_accept_parent(void *private, struct sock *sk)
ablkcipher_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
af_alg_complete, &ctx->completion);
+ if (skcipher_mempool_create(sk)) {
+ sock_kzfree_s(sk, ctx->iv, crypto_ablkcipher_ivsize(private));
+ sock_kfree_s(sk, ctx, ctx->len);
+ return -ENOMEM;
+ }
+
sk->sk_destruct = skcipher_sock_destruct;
return 0;
* Re: [PATCH RESEND 2/2] crypto: algif - change algif_skcipher to be asynchronous
2015-02-27 19:35 ` [PATCH RESEND 2/2] crypto: algif - change algif_skcipher to be asynchronous Tadeusz Struk
@ 2015-03-06 11:44 ` Herbert Xu
2015-03-06 12:04 ` Tadeusz Struk
0 siblings, 1 reply; 9+ messages in thread
From: Herbert Xu @ 2015-03-06 11:44 UTC (permalink / raw)
To: Tadeusz Struk; +Cc: davem, linux-crypto, qat-linux
On Fri, Feb 27, 2015 at 11:35:49AM -0800, Tadeusz Struk wrote:
>
> +static int skcipher_mempool_create(struct sock *sk)
> +{
> + struct alg_sock *ask = alg_sk(sk);
> + struct skcipher_ctx *ctx = ask->private;
> + unsigned int len = sizeof(struct skcipher_async_req) +
> + GET_REQ_SIZE(ctx) + GET_IV_SIZE(ctx);
> + char buf[32];
> +
> + snprintf(buf, sizeof(buf), "skcipher_%p", ctx);
> + ctx->cache = kmem_cache_create(buf, len, 0, SLAB_HWCACHE_ALIGN |
> + SLAB_TEMPORARY,
> + skcipher_cache_constructor);
Are these separate caches really necessary? It looks like
overkill. What's wrong with just kmalloc?
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH RESEND 2/2] crypto: algif - change algif_skcipher to be asynchronous
2015-03-06 11:44 ` Herbert Xu
@ 2015-03-06 12:04 ` Tadeusz Struk
2015-03-06 12:09 ` Herbert Xu
0 siblings, 1 reply; 9+ messages in thread
From: Tadeusz Struk @ 2015-03-06 12:04 UTC (permalink / raw)
To: Herbert Xu; +Cc: davem, linux-crypto, qat-linux
On 03/06/2015 11:44 AM, Herbert Xu wrote:
> Are these separate caches really necessary? It looks like an
> overkill. What's wrong with just kmalloc?
Hi Herbert,
It helps make it faster.
This way I can do some of the request setup beforehand and minimize the
overhead on the data path.
Thanks,
Tadeusz
* Re: [PATCH RESEND 2/2] crypto: algif - change algif_skcipher to be asynchronous
2015-03-06 12:04 ` Tadeusz Struk
@ 2015-03-06 12:09 ` Herbert Xu
2015-03-09 20:06 ` Tadeusz Struk
0 siblings, 1 reply; 9+ messages in thread
From: Herbert Xu @ 2015-03-06 12:09 UTC (permalink / raw)
To: Tadeusz Struk; +Cc: davem, linux-crypto, qat-linux
On Fri, Mar 06, 2015 at 12:04:49PM +0000, Tadeusz Struk wrote:
> It helps to make it faster.
> This way I can do some of the request setup beforehand and minimize overhead on the data path.
Do you have numbers to back this up?
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH RESEND 0/2] crypto: algif - change algif_skcipher to be asynchronous
2015-02-27 19:35 [PATCH RESEND 0/2] crypto: algif - change algif_skcipher to be asynchronous Tadeusz Struk
2015-02-27 19:35 ` [PATCH RESEND 1/2] crypto: af_alg - Allow to link sgl Tadeusz Struk
2015-02-27 19:35 ` [PATCH RESEND 2/2] crypto: algif - change algif_skcipher to be asynchronous Tadeusz Struk
@ 2015-03-08 11:20 ` Stephan Mueller
2015-03-09 15:10 ` Tadeusz Struk
2 siblings, 1 reply; 9+ messages in thread
From: Stephan Mueller @ 2015-03-08 11:20 UTC (permalink / raw)
To: Tadeusz Struk; +Cc: herbert, davem, linux-crypto, qat-linux
On Friday, 27 February 2015, 11:35:38, Tadeusz Struk wrote:
Hi Tadeusz,
> Currently algif_skcipher works as follows: on sendmsg/sendpage it builds
> an sgl for the input data, and then on read/recvmsg it submits the job for
> encryption, putting the user to sleep until the data is processed.
> As a result it can only handle one job at a time.
> To fully utilize the potential of existing crypto hardware accelerators
> it is necessary to submit multiple jobs asynchronously.
> This series adds support for asynchronous operations to algif_skcipher.
> The first patch enables af_alg sgls to be linked.
> The second patch implements asynchronous reads for skcipher.
>
> Resend with updates for 4.0
As you may know, I am working on libkcapi [1] to cover the user space AF_ALG
interface and provide an easy-to-use interface for applications.
I am now trying to cover your interface proposal there and also to measure
its speed. What bothers me currently is the following: how would I be able
to detect whether the implemented support is available in the running kernel?
For the different AF_ALG interfaces (like the recently added AEAD), I can
detect missing support at the time I call the socket() syscall. Thus, the
application knows right from the start whether the requested support is there.
But for the AIO interface, how can I detect missing support at runtime, at a
point where an application can still reasonably handle it?
[1] http://www.chronox.de/libkcapi.html
>
> Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
> ---
> Tadeusz Struk (2):
> crypto: af_alg - Allow to link sgl
> crypto: algif - change algif_skcipher to be asynchronous
>
>
> crypto/af_alg.c | 18 ++-
> crypto/algif_skcipher.c | 308 ++++++++++++++++++++++++++++++++++++++++++++++-
> include/crypto/if_alg.h | 4 -
> 3 files changed, 317 insertions(+), 13 deletions(-)
--
Ciao
Stephan
* Re: [PATCH RESEND 0/2] crypto: algif - change algif_skcipher to be asynchronous
2015-03-08 11:20 ` [PATCH RESEND 0/2] " Stephan Mueller
@ 2015-03-09 15:10 ` Tadeusz Struk
0 siblings, 0 replies; 9+ messages in thread
From: Tadeusz Struk @ 2015-03-09 15:10 UTC (permalink / raw)
To: Stephan Mueller; +Cc: herbert, davem, linux-crypto, qat-linux
Hi Stephan,
On 03/08/2015 11:20 AM, Stephan Mueller wrote:
> As you may know, I am working on libkcapi [1] to cover the user space AF_ALG
> interface and provide an easy-to use interface for applications.
>
> I am now trying to cover your interface proposal there and also measure the
> speed of it. What bothers me currently is the following: how would I be able
> to detect whether the implemented support is available in the current kernel?
You don't need to detect that. You can just use the new interface; on older
kernels without AIO support in skcipher it will seamlessly fall back to the
synchronous interface.
The only problem is that the socket layer didn't like async operations before
commit 06539d3071067ff146a9bffd1c801fa56d290909.
The way this can be tackled for older kernels is that distros will need to
update their kernels with the mentioned commit. They should do that anyway,
because it looks like it was a bug.
One other thing to keep in mind: to see the benefit of AIO you need an
encryption offload engine to which you can asynchronously offload multiple
operations. With my dh895xxcc engine I see around an 8x throughput improvement.
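To make that concrete, a rough sketch of keeping several jobs in flight at
once (assuming the same opfd/aio_ctx setup as in the cover letter's example;
bufs and BUFLEN are hypothetical names):

/* Sketch only: queue 8 reads at once so an offload engine can work on
 * them in parallel, then reap all completions. */
struct iocb cbs[8], *cbps[8];
struct io_event evs[8];
int i;

for (i = 0; i < 8; i++) {
	io_prep_pread(&cbs[i], opfd, bufs[i], BUFLEN, 0);
	cbps[i] = &cbs[i];
}
io_submit(aio_ctx, 8, cbps);            /* 8 jobs in flight */
io_getevents(aio_ctx, 8, 8, evs, NULL); /* wait for all of them */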
Regards,
Tadeusz
* Re: [PATCH RESEND 2/2] crypto: algif - change algif_skcipher to be asynchronous
2015-03-06 12:09 ` Herbert Xu
@ 2015-03-09 20:06 ` Tadeusz Struk
0 siblings, 0 replies; 9+ messages in thread
From: Tadeusz Struk @ 2015-03-09 20:06 UTC (permalink / raw)
To: Herbert Xu; +Cc: davem, linux-crypto, qat-linux
On 03/06/2015 04:09 AM, Herbert Xu wrote:
>> It helps make it faster.
>> This way I can do some of the request setup beforehand and minimize the
>> overhead on the data path.
> Do you have numbers to back this up?
OK, you are right. It was implemented that way back when I used the qat type
socket, which was rejected. It made more sense to do it that way then, as
there was some DMA remapping that I could do up front.
Now the difference in performance is negligible. I'll send a v2 shortly.
Thanks for pointing this out.
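For reference, a rough sketch of what the kmalloc-based allocation might look
like in a v2 -- hypothetical code, not the actual follow-up patch; the helper
macros are reused from the posted patch:

/* Sketch only: allocate the combined request + sreq + IV area with
 * kmalloc and do the per-request setup inline, instead of keeping a
 * dedicated kmem_cache/mempool per context. */
struct ablkcipher_request *req;
struct skcipher_async_req *sreq;
unsigned int len = GET_REQ_SIZE(ctx) +
		   sizeof(struct skcipher_async_req) + GET_IV_SIZE(ctx);

req = kmalloc(len, GFP_KERNEL);
if (unlikely(!req))
	return -ENOMEM;
sreq = GET_SREQ(req, ctx);
ablkcipher_request_set_tfm(req, crypto_ablkcipher_reqtfm(&ctx->req));
ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				skcipher_async_cb, sk);
/* ...and kfree(req) wherever the mempool version called mempool_free(). */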