From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Cc: Paul Crowley <paulcrowley@google.com>,
Martin Willi <martin@strongswan.org>,
Milan Broz <gmazyland@gmail.com>,
"Jason A . Donenfeld" <Jason@zx2c4.com>,
linux-kernel@vger.kernel.org
Subject: [PATCH v3 6/6] crypto: x86/chacha - yield the FPU occasionally
Date: Tue, 4 Dec 2018 22:20:05 -0800
Message-ID: <20181205062005.27727-7-ebiggers@kernel.org>
In-Reply-To: <20181205062005.27727-1-ebiggers@kernel.org>
From: Eric Biggers <ebiggers@google.com>
To improve responsiveness, yield the FPU (temporarily re-enabling
preemption) every 4 KiB encrypted/decrypted, rather than keeping
preemption disabled during the entire encryption/decryption operation.
Alternatively we could do this for every skcipher_walk step, but steps
may be small in some cases, and yielding the FPU is expensive on x86.
Suggested-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
arch/x86/crypto/chacha_glue.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/x86/crypto/chacha_glue.c b/arch/x86/crypto/chacha_glue.c
index d19c2908be90..9b1d3fac4943 100644
--- a/arch/x86/crypto/chacha_glue.c
+++ b/arch/x86/crypto/chacha_glue.c
@@ -132,6 +132,7 @@ static int chacha_simd_stream_xor(struct skcipher_request *req,
 {
 	u32 *state, state_buf[16 + 2] __aligned(8);
 	struct skcipher_walk walk;
+	int next_yield = 4096; /* bytes until next FPU yield */
 	int err;
 
 	BUILD_BUG_ON(CHACHA_STATE_ALIGN != 16);
@@ -144,12 +145,21 @@ static int chacha_simd_stream_xor(struct skcipher_request *req,
 
 	while (walk.nbytes > 0) {
 		unsigned int nbytes = walk.nbytes;
 
-		if (nbytes < walk.total)
+		if (nbytes < walk.total) {
 			nbytes = round_down(nbytes, walk.stride);
+			next_yield -= nbytes;
+		}
 
 		chacha_dosimd(state, walk.dst.virt.addr, walk.src.virt.addr,
 			      nbytes, ctx->nrounds);
 
+		if (next_yield <= 0) {
+			/* temporarily allow preemption */
+			kernel_fpu_end();
+			kernel_fpu_begin();
+			next_yield = 4096;
+		}
+
 		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}
--
2.19.2
Thread overview: 8+ messages
2018-12-05 6:19 [PATCH v3 0/6] crypto: x86_64 optimized XChaCha and NHPoly1305 (for Adiantum) Eric Biggers
2018-12-05 6:20 ` [PATCH v3 1/6] crypto: x86/nhpoly1305 - add SSE2 accelerated NHPoly1305 Eric Biggers
2018-12-05 6:20 ` [PATCH v3 2/6] crypto: x86/nhpoly1305 - add AVX2 " Eric Biggers
2018-12-05 6:20 ` [PATCH v3 3/6] crypto: x86/chacha20 - add XChaCha20 support Eric Biggers
2018-12-05 6:20 ` [PATCH v3 4/6] crypto: x86/chacha20 - refactor to allow varying number of rounds Eric Biggers
2018-12-05 6:20 ` [PATCH v3 5/6] crypto: x86/chacha - add XChaCha12 support Eric Biggers
2018-12-05 6:20 ` Eric Biggers [this message]
2018-12-13 10:32 ` [PATCH v3 0/6] crypto: x86_64 optimized XChaCha and NHPoly1305 (for Adiantum) Herbert Xu