From: Eric Biggers <ebiggers@kernel.org>
To: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>,
linux-fscrypt@vger.kernel.org,
Gilad Ben-Yossef <gilad@benyossef.com>,
device-mapper development <dm-devel@redhat.com>,
"open list:HARDWARE RANDOM NUMBER GENERATOR CORE"
<linux-crypto@vger.kernel.org>, Milan Broz <gmazyland@gmail.com>
Subject: Re: [dm-devel] [PATCH v4 0/6] crypto: switch to crypto API for ESSIV generation
Date: Tue, 25 Jun 2019 21:49:42 -0700
Message-ID: <20190626044942.GB23471@sol.localdomain>
In-Reply-To: <CAKv+Gu-ZO9Fnfx06XYJ-tp+6nrk0D8TBGa2chmxFW-kjPMmLCw@mail.gmail.com>
On Sun, Jun 23, 2019 at 11:30:41AM +0200, Ard Biesheuvel wrote:
> On Fri, 21 Jun 2019 at 10:09, Ard Biesheuvel <ard.biesheuvel@arm.com> wrote:
> >
> > From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >
> ...
> >
> > - given that hardware already exists that can perform en/decryption including
> > ESSIV generation of a range of blocks, it would be useful to encapsulate
> > this in the ESSIV template, and teach at least dm-crypt how to use it
> > (given that it often processes 8 512-byte sectors at a time)
>
> I thought about this a bit more, and it occurred to me that the
> capability of issuing several sectors at a time and letting the lower
> layers increment the IV between sectors is orthogonal to whether ESSIV
> is being used or not, and so it probably belongs in another wrapper.
>
> I.e., if we define a skcipher template like dmplain64le(), which is
> defined as taking a sector size as part of the key, and which
> increments a 64-bit LE counter between sectors if multiple are passed, it
> can be used not only for ESSIV but also for XTS, which I assume can be
> h/w accelerated in the same way.
>
> So with that in mind, I think we should decouple the multi-sector
> discussion and leave it for a followup series, preferably proposed by
> someone who also has access to some hardware to prototype it on.
>
This makes sense, but if we're going to leave that functionality out of the
essiv template, I think we should revisit whether the essiv template should
take a __le64 sector number or just an IV matching the cipher block size.  To
me, defining the IV to be a __le64 seems like a layering violation.  Also,
dm-crypt and fscrypt already know how to zero-pad the sector number to form
the full 16-byte IV, and your patch just duplicates that logic in the essiv
template too, which makes it more complicated than necessary.
E.g., the following incremental patch for the skcipher case would simplify it:
(You'd have to do it for the AEAD case too.)
diff --git a/crypto/essiv.c b/crypto/essiv.c
index 8e80814ec7d6..737e92ebcbd8 100644
--- a/crypto/essiv.c
+++ b/crypto/essiv.c
@@ -57,11 +57,6 @@ struct essiv_tfm_ctx {
struct crypto_shash *hash;
};
-struct essiv_skcipher_request_ctx {
- u8 iv[MAX_INNER_IV_SIZE];
- struct skcipher_request skcipher_req;
-};
-
struct essiv_aead_request_ctx {
u8 iv[2][MAX_INNER_IV_SIZE];
struct scatterlist src[4], dst[4];
@@ -161,39 +156,32 @@ static void essiv_skcipher_done(struct crypto_async_request *areq, int err)
skcipher_request_complete(req, err);
}
-static void essiv_skcipher_prepare_subreq(struct skcipher_request *req)
+static int essiv_skcipher_crypt(struct skcipher_request *req, bool enc)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
const struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
- struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
- struct skcipher_request *subreq = &rctx->skcipher_req;
-
- memset(rctx->iv, 0, crypto_cipher_blocksize(tctx->essiv_cipher));
- memcpy(rctx->iv, req->iv, crypto_skcipher_ivsize(tfm));
+ struct skcipher_request *subreq = skcipher_request_ctx(req);
- crypto_cipher_encrypt_one(tctx->essiv_cipher, rctx->iv, rctx->iv);
+ crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv);
skcipher_request_set_tfm(subreq, tctx->u.skcipher);
skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
- rctx->iv);
+ req->iv);
skcipher_request_set_callback(subreq, skcipher_request_flags(req),
essiv_skcipher_done, req);
+
+ return enc ? crypto_skcipher_encrypt(subreq) :
+ crypto_skcipher_decrypt(subreq);
}
static int essiv_skcipher_encrypt(struct skcipher_request *req)
{
- struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-
- essiv_skcipher_prepare_subreq(req);
- return crypto_skcipher_encrypt(&rctx->skcipher_req);
+ return essiv_skcipher_crypt(req, true);
}
static int essiv_skcipher_decrypt(struct skcipher_request *req)
{
- struct essiv_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-
- essiv_skcipher_prepare_subreq(req);
- return crypto_skcipher_decrypt(&rctx->skcipher_req);
+ return essiv_skcipher_crypt(req, false);
}
static void essiv_aead_done(struct crypto_async_request *areq, int err)
@@ -300,24 +288,14 @@ static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm)
struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
struct crypto_skcipher *skcipher;
- unsigned int subreq_size;
int err;
- BUILD_BUG_ON(offsetofend(struct essiv_skcipher_request_ctx,
- skcipher_req) !=
- sizeof(struct essiv_skcipher_request_ctx));
-
skcipher = crypto_spawn_skcipher(&ictx->u.skcipher_spawn);
if (IS_ERR(skcipher))
return PTR_ERR(skcipher);
- subreq_size = FIELD_SIZEOF(struct essiv_skcipher_request_ctx,
- skcipher_req) +
- crypto_skcipher_reqsize(skcipher);
-
- crypto_skcipher_set_reqsize(tfm,
- offsetof(struct essiv_skcipher_request_ctx,
- skcipher_req) + subreq_size);
+ crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
+ crypto_skcipher_reqsize(skcipher));
err = essiv_init_tfm(ictx, tctx);
if (err) {
@@ -567,9 +545,9 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
skcipher_inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(skcipher_alg);
skcipher_inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(skcipher_alg);
- skcipher_inst->alg.ivsize = ESSIV_IV_SIZE;
- skcipher_inst->alg.chunksize = skcipher_alg->chunksize;
- skcipher_inst->alg.walksize = skcipher_alg->walksize;
+ skcipher_inst->alg.ivsize = crypto_skcipher_alg_ivsize(skcipher_alg);
+ skcipher_inst->alg.chunksize = crypto_skcipher_alg_chunksize(skcipher_alg);
+ skcipher_inst->alg.walksize = crypto_skcipher_alg_walksize(skcipher_alg);
skcipher_inst->free = essiv_skcipher_free_instance;