linux-crypto.vger.kernel.org archive mirror
* [PATCH v2 0/3] crypto: tcrypt enhancements
@ 2020-11-20 11:04 Ard Biesheuvel
  2020-11-20 11:04 ` [PATCH v2 1/3] crypto: tcrypt - don't initialize at subsys_initcall time Ard Biesheuvel
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2020-11-20 11:04 UTC (permalink / raw)
  To: linux-crypto; +Cc: herbert, ebiggers, Ard Biesheuvel

Some tcrypt enhancements that I have been using locally to test and
benchmark crypto algorithms on the command line using KVM:
- allow tcrypt.ko to be builtin and defer its initialization to late_initcall
- add 1420 byte blocks to the list of benchmarked block sizes for AEADs and
  skciphers, to get an estimate of the performance in the context of a VPN
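
For reference, this is roughly the kind of invocation these changes are
meant to support; the image path is illustrative, and tcrypt.mode=0
should run the full battery of self-tests (other mode values select
specific tests/benchmarks):

    # boot straight into tcrypt, no rootfs needed; once tcrypt is
    # builtin, its module parameters are passed on the kernel
    # command line with a "tcrypt." prefix
    qemu-system-x86_64 -enable-kvm -nographic \
        -kernel arch/x86/boot/bzImage \
        -append "console=ttyS0 tcrypt.mode=0"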

Changes since v1:
- use CONFIG_EXPERT not CONFIG_CRYPTO_MANAGER_EXTRA_TESTS to decide whether
  tcrypt.ko may be built in
- add Eric's ack to #1

Ard Biesheuvel (3):
  crypto: tcrypt - don't initialize at subsys_initcall time
  crypto: tcrypt - permit tcrypt.ko to be builtin
  crypto: tcrypt - include 1420 byte blocks in aead and skcipher
    benchmarks

 crypto/Kconfig  |  2 +-
 crypto/tcrypt.c | 83 +++++++++++---------
 2 files changed, 46 insertions(+), 39 deletions(-)

-- 
2.17.1



* [PATCH v2 1/3] crypto: tcrypt - don't initialize at subsys_initcall time
  2020-11-20 11:04 [PATCH v2 0/3] crypto: tcrypt enhancements Ard Biesheuvel
@ 2020-11-20 11:04 ` Ard Biesheuvel
  2020-11-20 11:04 ` [PATCH v2 2/3] crypto: tcrypt - permit tcrypt.ko to be builtin Ard Biesheuvel
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2020-11-20 11:04 UTC (permalink / raw)
  To: linux-crypto; +Cc: herbert, ebiggers, Ard Biesheuvel

Commit c4741b2305979 ("crypto: run initcalls for generic implementations
earlier") converted tcrypt.ko's module_init() to subsys_initcall(), but
this was unintentional: tcrypt.ko currently cannot be built into the core
kernel, and so the subsys_initcall() gets converted into module_init()
under the hood. Given that tcrypt.ko does not implement a generic version
of a crypto algorithm that has to be available early during boot, there
is no point in running the tcrypt init code earlier than implied by
module_init().

However, for crypto development purposes, we will lift the restriction
that tcrypt.ko must be built as a module, and when built in, it makes
sense for tcrypt.ko (which does all of its work in its module init
function) to run as late as possible. So let's switch to late_initcall()
instead.
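
For context, in a modular build include/linux/module.h collapses every
initcall level to module_init(), which is why the annotation made no
difference for tcrypt.ko. A simplified sketch (not the verbatim
header):

    #ifdef MODULE
    /* all initcall levels degenerate to module_init() in a module */
    #define subsys_initcall(fn)	module_init(fn)
    #define late_initcall(fn)	module_init(fn)
    #else
    /*
     * built-in: the level orders execution, and late_initcall()
     * runs after every other initcall level has completed
     */
    #endif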

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
---
 crypto/tcrypt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index eea0f453cfb6..fc1f3e516694 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -3066,7 +3066,7 @@ static int __init tcrypt_mod_init(void)
  */
 static void __exit tcrypt_mod_fini(void) { }
 
-subsys_initcall(tcrypt_mod_init);
+late_initcall(tcrypt_mod_init);
 module_exit(tcrypt_mod_fini);
 
 module_param(alg, charp, 0);
-- 
2.17.1



* [PATCH v2 2/3] crypto: tcrypt - permit tcrypt.ko to be builtin
  2020-11-20 11:04 [PATCH v2 0/3] crypto: tcrypt enhancements Ard Biesheuvel
  2020-11-20 11:04 ` [PATCH v2 1/3] crypto: tcrypt - don't initialize at subsys_initcall time Ard Biesheuvel
@ 2020-11-20 11:04 ` Ard Biesheuvel
  2020-11-20 11:04 ` [PATCH v2 3/3] crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks Ard Biesheuvel
  2020-11-27  6:26 ` [PATCH v2 0/3] crypto: tcrypt enhancements Herbert Xu
  3 siblings, 0 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2020-11-20 11:04 UTC (permalink / raw)
  To: linux-crypto; +Cc: herbert, ebiggers, Ard Biesheuvel

When working on crypto algorithms, being able to run tcrypt quickly
without booting an entire Linux installation can be very useful. For
instance, QEMU/KVM can boot a kernel straight from the command line,
and with tcrypt.ko builtin, benchmarks or tests for algorithms that
need to be instantiated from templates can be run without ever getting
as far as mounting a rootfs.

So let's relax the requirement that tcrypt be built as a module, making
the builtin option available when CONFIG_EXPERT is enabled.
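
As a sketch, with this patch applied the following config fragment
should be all that is needed to get a builtin tcrypt:

    # CONFIG_EXPERT satisfies the new "depends on m || EXPERT"
    CONFIG_EXPERT=y
    CONFIG_CRYPTO_TEST=y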

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 094ef56ab7b4..9014a0ec6a0e 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -201,7 +201,7 @@ config CRYPTO_AUTHENC
 
 config CRYPTO_TEST
 	tristate "Testing module"
-	depends on m
+	depends on m || EXPERT
 	select CRYPTO_MANAGER
 	help
 	  Quick & dirty crypto test module.
-- 
2.17.1



* [PATCH v2 3/3] crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks
  2020-11-20 11:04 [PATCH v2 0/3] crypto: tcrypt enhancements Ard Biesheuvel
  2020-11-20 11:04 ` [PATCH v2 1/3] crypto: tcrypt - don't initialize at subsys_initcall time Ard Biesheuvel
  2020-11-20 11:04 ` [PATCH v2 2/3] crypto: tcrypt - permit tcrypt.ko to be builtin Ard Biesheuvel
@ 2020-11-20 11:04 ` Ard Biesheuvel
  2020-11-27  6:26 ` [PATCH v2 0/3] crypto: tcrypt enhancements Herbert Xu
  3 siblings, 0 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2020-11-20 11:04 UTC (permalink / raw)
  To: linux-crypto; +Cc: herbert, ebiggers, Ard Biesheuvel

WireGuard and IPsec both typically operate on input blocks that are
~1420 bytes in size, given the default Ethernet MTU of 1500 bytes and
the overhead of the VPN metadata.

Many aead and skcipher implementations are optimized for power-of-2
block sizes, and whether they perform well when operating on 1420
byte blocks cannot easily be extrapolated from their performance on
power-of-2 block sizes. So let's add 1420 bytes explicitly, rounding
it up to the next multiple of the block size of the algorithm in
question if that algorithm does not support 1420 byte blocks.
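
As a worked example of the rounding, here is a userspace sketch of the
kernel's round_up() semantics (which assume a power-of-2 alignment, as
cipher block sizes are):

    #include <stdio.h>

    /* align must be a power of 2 */
    static unsigned int round_up_pow2(unsigned int x, unsigned int align)
    {
        return (x + align - 1) & ~(align - 1);
    }

    int main(void)
    {
        /* cbc(aes) has a 16 byte blocksize: 1420 is rounded to 1424 */
        printf("%u\n", round_up_pow2(1420, 16));
        /* gcm(aes) and chacha20poly1305 report a blocksize of 1,
         * so 1420 is benchmarked as-is
         */
        printf("%u\n", round_up_pow2(1420, 1));
        return 0;
    }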

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 crypto/tcrypt.c | 81 +++++++++++---------
 1 file changed, 44 insertions(+), 37 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index fc1f3e516694..a647bb298fbc 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -77,8 +77,8 @@ static const char *check[] = {
 	NULL
 };
 
-static u32 block_sizes[] = { 16, 64, 256, 1024, 1472, 8192, 0 };
-static u32 aead_sizes[] = { 16, 64, 256, 512, 1024, 2048, 4096, 8192, 0 };
+static const int block_sizes[] = { 16, 64, 256, 1024, 1420, 4096, 0 };
+static const int aead_sizes[] = { 16, 64, 256, 512, 1024, 1420, 4096, 8192, 0 };
 
 #define XBUFSIZE 8
 #define MAX_IVLEN 32
@@ -256,10 +256,10 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
 	struct test_mb_aead_data *data;
 	struct crypto_aead *tfm;
 	unsigned int i, j, iv_len;
+	const int *b_size;
 	const char *key;
 	const char *e;
 	void *assoc;
-	u32 *b_size;
 	char *iv;
 	int ret;
 
@@ -337,15 +337,17 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
 	do {
 		b_size = aead_sizes;
 		do {
-			if (*b_size + authsize > XBUFSIZE * PAGE_SIZE) {
+			int bs = round_up(*b_size, crypto_aead_blocksize(tfm));
+
+			if (bs + authsize > XBUFSIZE * PAGE_SIZE) {
 				pr_err("template (%u) too big for buffer (%lu)\n",
-				       authsize + *b_size,
+				       authsize + bs,
 				       XBUFSIZE * PAGE_SIZE);
 				goto out;
 			}
 
 			pr_info("test %u (%d bit key, %d byte blocks): ", i,
-				*keysize * 8, *b_size);
+				*keysize * 8, bs);
 
 			/* Set up tfm global state, i.e. the key */
 
@@ -380,11 +382,11 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
 				memset(assoc, 0xff, aad_size);
 
 				sg_init_aead(cur->sg, cur->xbuf,
-					     *b_size + (enc ? 0 : authsize),
+					     bs + (enc ? 0 : authsize),
 					     assoc, aad_size);
 
 				sg_init_aead(cur->sgout, cur->xoutbuf,
-					     *b_size + (enc ? authsize : 0),
+					     bs + (enc ? authsize : 0),
 					     assoc, aad_size);
 
 				aead_request_set_ad(cur->req, aad_size);
@@ -394,7 +396,7 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
 					aead_request_set_crypt(cur->req,
 							       cur->sgout,
 							       cur->sg,
-							       *b_size, iv);
+							       bs, iv);
 					ret = crypto_aead_encrypt(cur->req);
 					ret = do_one_aead_op(cur->req, ret);
 
@@ -406,18 +408,18 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
 				}
 
 				aead_request_set_crypt(cur->req, cur->sg,
-						       cur->sgout, *b_size +
+						       cur->sgout, bs +
 						       (enc ? 0 : authsize),
 						       iv);
 
 			}
 
 			if (secs) {
-				ret = test_mb_aead_jiffies(data, enc, *b_size,
+				ret = test_mb_aead_jiffies(data, enc, bs,
 							   secs, num_mb);
 				cond_resched();
 			} else {
-				ret = test_mb_aead_cycles(data, enc, *b_size,
+				ret = test_mb_aead_cycles(data, enc, bs,
 							  num_mb);
 			}
 
@@ -534,7 +536,7 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
 	char *xbuf[XBUFSIZE];
 	char *xoutbuf[XBUFSIZE];
 	char *axbuf[XBUFSIZE];
-	unsigned int *b_size;
+	const int *b_size;
 	unsigned int iv_len;
 	struct crypto_wait wait;
 
@@ -590,12 +592,14 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
 	do {
 		b_size = aead_sizes;
 		do {
+			u32 bs = round_up(*b_size, crypto_aead_blocksize(tfm));
+
 			assoc = axbuf[0];
 			memset(assoc, 0xff, aad_size);
 
-			if ((*keysize + *b_size) > TVMEMSIZE * PAGE_SIZE) {
+			if ((*keysize + bs) > TVMEMSIZE * PAGE_SIZE) {
 				pr_err("template (%u) too big for tvmem (%lu)\n",
-				       *keysize + *b_size,
+				       *keysize + bs,
 					TVMEMSIZE * PAGE_SIZE);
 				goto out;
 			}
@@ -616,7 +620,7 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
 
 			crypto_aead_clear_flags(tfm, ~0);
 			printk(KERN_INFO "test %u (%d bit key, %d byte blocks): ",
-					i, *keysize * 8, *b_size);
+					i, *keysize * 8, bs);
 
 
 			memset(tvmem[0], 0xff, PAGE_SIZE);
@@ -627,11 +631,11 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
 				goto out;
 			}
 
-			sg_init_aead(sg, xbuf, *b_size + (enc ? 0 : authsize),
+			sg_init_aead(sg, xbuf, bs + (enc ? 0 : authsize),
 				     assoc, aad_size);
 
 			sg_init_aead(sgout, xoutbuf,
-				     *b_size + (enc ? authsize : 0), assoc,
+				     bs + (enc ? authsize : 0), assoc,
 				     aad_size);
 
 			aead_request_set_ad(req, aad_size);
@@ -644,7 +648,7 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
 				 * reversed (input <-> output) to calculate it
 				 */
 				aead_request_set_crypt(req, sgout, sg,
-						       *b_size, iv);
+						       bs, iv);
 				ret = do_one_aead_op(req,
 						     crypto_aead_encrypt(req));
 
@@ -656,15 +660,15 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
 			}
 
 			aead_request_set_crypt(req, sg, sgout,
-					       *b_size + (enc ? 0 : authsize),
+					       bs + (enc ? 0 : authsize),
 					       iv);
 
 			if (secs) {
-				ret = test_aead_jiffies(req, enc, *b_size,
+				ret = test_aead_jiffies(req, enc, bs,
 							secs);
 				cond_resched();
 			} else {
-				ret = test_aead_cycles(req, enc, *b_size);
+				ret = test_aead_cycles(req, enc, bs);
 			}
 
 			if (ret) {
@@ -1253,9 +1257,9 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
 	struct test_mb_skcipher_data *data;
 	struct crypto_skcipher *tfm;
 	unsigned int i, j, iv_len;
+	const int *b_size;
 	const char *key;
 	const char *e;
-	u32 *b_size;
 	char iv[128];
 	int ret;
 
@@ -1316,14 +1320,16 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
 	do {
 		b_size = block_sizes;
 		do {
-			if (*b_size > XBUFSIZE * PAGE_SIZE) {
+			u32 bs = round_up(*b_size, crypto_skcipher_blocksize(tfm));
+
+			if (bs > XBUFSIZE * PAGE_SIZE) {
 				pr_err("template (%u) too big for buffer (%lu)\n",
 				       *b_size, XBUFSIZE * PAGE_SIZE);
 				goto out;
 			}
 
 			pr_info("test %u (%d bit key, %d byte blocks): ", i,
-				*keysize * 8, *b_size);
+				*keysize * 8, bs);
 
 			/* Set up tfm global state, i.e. the key */
 
@@ -1353,7 +1359,7 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
 
 			for (j = 0; j < num_mb; ++j) {
 				struct test_mb_skcipher_data *cur = &data[j];
-				unsigned int k = *b_size;
+				unsigned int k = bs;
 				unsigned int pages = DIV_ROUND_UP(k, PAGE_SIZE);
 				unsigned int p = 0;
 
@@ -1377,12 +1383,12 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
 
 			if (secs) {
 				ret = test_mb_acipher_jiffies(data, enc,
-							      *b_size, secs,
+							      bs, secs,
 							      num_mb);
 				cond_resched();
 			} else {
 				ret = test_mb_acipher_cycles(data, enc,
-							     *b_size, num_mb);
+							     bs, num_mb);
 			}
 
 			if (ret) {
@@ -1497,8 +1503,8 @@ static void test_skcipher_speed(const char *algo, int enc, unsigned int secs,
 	char iv[128];
 	struct skcipher_request *req;
 	struct crypto_skcipher *tfm;
+	const int *b_size;
 	const char *e;
-	u32 *b_size;
 
 	if (enc == ENCRYPT)
 		e = "encryption";
@@ -1533,17 +1539,18 @@ static void test_skcipher_speed(const char *algo, int enc, unsigned int secs,
 		b_size = block_sizes;
 
 		do {
+			u32 bs = round_up(*b_size, crypto_skcipher_blocksize(tfm));
 			struct scatterlist sg[TVMEMSIZE];
 
-			if ((*keysize + *b_size) > TVMEMSIZE * PAGE_SIZE) {
+			if ((*keysize + bs) > TVMEMSIZE * PAGE_SIZE) {
 				pr_err("template (%u) too big for "
-				       "tvmem (%lu)\n", *keysize + *b_size,
+				       "tvmem (%lu)\n", *keysize + bs,
 				       TVMEMSIZE * PAGE_SIZE);
 				goto out_free_req;
 			}
 
 			pr_info("test %u (%d bit key, %d byte blocks): ", i,
-				*keysize * 8, *b_size);
+				*keysize * 8, bs);
 
 			memset(tvmem[0], 0xff, PAGE_SIZE);
 
@@ -1565,7 +1572,7 @@ static void test_skcipher_speed(const char *algo, int enc, unsigned int secs,
 				goto out_free_req;
 			}
 
-			k = *keysize + *b_size;
+			k = *keysize + bs;
 			sg_init_table(sg, DIV_ROUND_UP(k, PAGE_SIZE));
 
 			if (k > PAGE_SIZE) {
@@ -1582,22 +1589,22 @@ static void test_skcipher_speed(const char *algo, int enc, unsigned int secs,
 				sg_set_buf(sg + j, tvmem[j], k);
 				memset(tvmem[j], 0xff, k);
 			} else {
-				sg_set_buf(sg, tvmem[0] + *keysize, *b_size);
+				sg_set_buf(sg, tvmem[0] + *keysize, bs);
 			}
 
 			iv_len = crypto_skcipher_ivsize(tfm);
 			if (iv_len)
 				memset(&iv, 0xff, iv_len);
 
-			skcipher_request_set_crypt(req, sg, sg, *b_size, iv);
+			skcipher_request_set_crypt(req, sg, sg, bs, iv);
 
 			if (secs) {
 				ret = test_acipher_jiffies(req, enc,
-							   *b_size, secs);
+							   bs, secs);
 				cond_resched();
 			} else {
 				ret = test_acipher_cycles(req, enc,
-							  *b_size);
+							  bs);
 			}
 
 			if (ret) {
-- 
2.17.1



* Re: [PATCH v2 0/3] crypto: tcrypt enhancements
  2020-11-20 11:04 [PATCH v2 0/3] crypto: tcrypt enhancements Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2020-11-20 11:04 ` [PATCH v2 3/3] crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks Ard Biesheuvel
@ 2020-11-27  6:26 ` Herbert Xu
  3 siblings, 0 replies; 5+ messages in thread
From: Herbert Xu @ 2020-11-27  6:26 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: linux-crypto, ebiggers

On Fri, Nov 20, 2020 at 12:04:30PM +0100, Ard Biesheuvel wrote:
> Some tcrypt enhancements that I have been using locally to test and
> benchmark crypto algorithms on the command line using KVM:
> - allow tcrypt.ko to be builtin and defer its initialization to late_initcall
> - add 1420 byte blocks to the list of benchmarked block sizes for AEADs and
>   skciphers, to get an estimate of the performance in the context of a VPN
> 
> Changes since v1:
> - use CONFIG_EXPERT not CONFIG_CRYPTO_MANAGER_EXTRA_TESTS to decide whether
>   tcrypt.ko may be built in
> - add Eric's ack to #1
> 
> Ard Biesheuvel (3):
>   crypto: tcrypt - don't initialize at subsys_initcall time
>   crypto: tcrypt - permit tcrypt.ko to be builtin
>   crypto: tcrypt - include 1420 byte blocks in aead and skcipher
>     benchmarks
> 
>  crypto/Kconfig  |  2 +-
>  crypto/tcrypt.c | 83 +++++++++++---------
>  2 files changed, 46 insertions(+), 39 deletions(-)

All applied.  Thanks.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


end of thread

Thread overview: 5+ messages
2020-11-20 11:04 [PATCH v2 0/3] crypto: tcrypt enhancements Ard Biesheuvel
2020-11-20 11:04 ` [PATCH v2 1/3] crypto: tcrypt - don't initialize at subsys_initcall time Ard Biesheuvel
2020-11-20 11:04 ` [PATCH v2 2/3] crypto: tcrypt - permit tcrypt.ko to be builtin Ard Biesheuvel
2020-11-20 11:04 ` [PATCH v2 3/3] crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks Ard Biesheuvel
2020-11-27  6:26 ` [PATCH v2 0/3] crypto: tcrypt enhancements Herbert Xu
