* [PATCH v2] SP800-38F / RFC3394 key wrapping
@ 2015-04-25 22:07 Stephan Mueller
2015-04-25 22:08 ` [PATCH v2] crypto: add key wrapping block chaining mode Stephan Mueller
2015-04-28 1:09 ` [PATCH v2] SP800-38F / RFC3394 key wrapping Herbert Xu
0 siblings, 2 replies; 20+ messages in thread
From: Stephan Mueller @ 2015-04-25 22:07 UTC (permalink / raw)
To: herbert; +Cc: linux-crypto
Hi,
Please note that this patch will conflict with the DRBG patch for
additional seeding sent earlier today. Both add test vectors in
testmgr.c between the existing hmac() and lrw() due to the ordering
requirements of testmgr.c.
Changes v2:
* Turn kw() into a blkcipher as suggested by Herbert Xu.
* Drop support for a user-provided IV to initialize encryption or to
perform the verification step during decryption.
Stephan Mueller (1):
crypto: add key wrapping block chaining mode
crypto/Kconfig | 7 +
crypto/Makefile | 1 +
crypto/keywrap.c | 502 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
crypto/testmgr.c | 25 +++
crypto/testmgr.h | 41 +++++
5 files changed, 576 insertions(+)
create mode 100644 crypto/keywrap.c
--
2.1.0
* [PATCH v2] crypto: add key wrapping block chaining mode
2015-04-25 22:07 [PATCH v2] SP800-38F / RFC3394 key wrapping Stephan Mueller
@ 2015-04-25 22:08 ` Stephan Mueller
2015-04-27 8:26 ` Herbert Xu
2015-04-27 8:29 ` Herbert Xu
2015-04-28 1:09 ` [PATCH v2] SP800-38F / RFC3394 key wrapping Herbert Xu
1 sibling, 2 replies; 20+ messages in thread
From: Stephan Mueller @ 2015-04-25 22:08 UTC (permalink / raw)
To: herbert; +Cc: linux-crypto
This patch implements the AES key wrapping as specified in
NIST SP800-38F and RFC3394.
The implementation covers key wrapping without padding. The caller may
provide an IV. If no IV is provided, the default IV defined in SP800-38F
is used for key wrapping and unwrapping.
Key wrapping is an authenticated encryption operation without
associated data. Setting an AAD is therefore permissible, but that data
is not used by the cipher implementation.
Although the standards define key wrapping for AES only, the template
can be used with any other block cipher that has a block size of 16
bytes.
Testing with CAVS test vectors for AES-128, AES-192 and AES-256, in both
encryption and decryption with plaintexts of up to 4096 bytes, completed
successfully.
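For reference, the canonical AES-128 vector from RFC 3394, Section 4.1,
maps onto this interface as follows: wrapping the key data
00112233445566778899AABBCCDDEEFF under the KEK
000102030405060708090A0B0C0D0E0F yields
1FA68B0A8112B447 AEF34BD8FB5A7B82 9D3E862371D2CFE5, where the first
semiblock is returned as the IV and the remaining two semiblocks as the
ciphertext.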
Signed-off-by: Stephan Mueller <smueller@chronox.de>
---
crypto/Kconfig | 7 +
crypto/Makefile | 1 +
crypto/keywrap.c | 502 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
crypto/testmgr.c | 25 +++
crypto/testmgr.h | 41 +++++
5 files changed, 576 insertions(+)
create mode 100644 crypto/keywrap.c
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 8aaf298..3d62d8a 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -295,6 +295,13 @@ config CRYPTO_XTS
key size 256, 384 or 512 bits. This implementation currently
can't handle a sectorsize which is not a multiple of 16 bytes.
+config CRYPTO_KEYWRAP
+ tristate "Key wrapping support"
+ select CRYPTO_BLKCIPHER
+ help
+ Support for key wrapping (NIST SP800-38F / RFC3394) without
+ padding.
+
comment "Hash modes"
config CRYPTO_CMAC
diff --git a/crypto/Makefile b/crypto/Makefile
index 97b7d3a..d2f4b69 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -56,6 +56,7 @@ obj-$(CONFIG_CRYPTO_CTS) += cts.o
obj-$(CONFIG_CRYPTO_LRW) += lrw.o
obj-$(CONFIG_CRYPTO_XTS) += xts.o
obj-$(CONFIG_CRYPTO_CTR) += ctr.o
+obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o
obj-$(CONFIG_CRYPTO_GCM) += gcm.o
obj-$(CONFIG_CRYPTO_CCM) += ccm.o
obj-$(CONFIG_CRYPTO_PCRYPT) += pcrypt.o
diff --git a/crypto/keywrap.c b/crypto/keywrap.c
new file mode 100644
index 0000000..d70b0b3
--- /dev/null
+++ b/crypto/keywrap.c
@@ -0,0 +1,502 @@
+/*
+ * Key Wrapping: RFC3394 / NIST SP800-38F
+ *
+ * Implemented modes as defined in NIST SP800-38F: Kw
+ *
+ * Copyright (C) 2015, Stephan Mueller <smueller@chronox.de>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, and the entire permission notice in its entirety,
+ * including the disclaimer of warranties.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote
+ * products derived from this software without specific prior
+ * written permission.
+ *
+ * ALTERNATIVELY, this product may be distributed under the terms of
+ * the GNU General Public License, in which case the provisions of the GPL2
+ * are required INSTEAD OF the above restrictions. (This clause is
+ * necessary due to a potential bad interaction between the GPL and
+ * the restrictions contained in a BSD-style copyright.)
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
+ * WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+ * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ */
+
+/*
+ * Note for using key wrapping:
+ *
+ * * The result of the encryption operation is the ciphertext starting
+ * with the 2nd semiblock. The first semiblock is provided as the IV.
+ * The IV used to start the encryption operation is the default IV.
+ *
+ * * The input for the decryption is the first semiblock handed in as an
+ * IV. The ciphertext is the data starting with the 2nd semiblock. The
+ * return code of the decryption operation will be EBADMSG in case an
+ * integrity error occurs.
+ *
+ * To obtain the full result of an encryption as expected by SP800-38F, the
+ * caller must allocate a buffer of plaintext + 8 bytes:
+ *
+ * unsigned int datalen = ptlen + crypto_ablkcipher_ivsize(tfm);
+ * u8 data[datalen];
+ * u8 *iv = data;
+ * u8 *pt = data + crypto_ablkcipher_ivsize(tfm);
+ * <ensure that pt contains the plaintext of size ptlen>
+ * sg_init_one(&sg, pt, ptlen);
+ * ablkcipher_request_set_crypt(req, &sg, &sg, ptlen, iv);
+ *
+ * ==> After encryption, data now contains full KW result as per SP800-38F.
+ *
+ * In case of decryption, ciphertext now already has the expected length
+ * and must be segmented appropriately:
+ *
+ * unsigned int datalen = CTLEN;
+ * u8 data[datalen];
+ * <ensure that data contains full ciphertext>
+ * u8 *iv = data;
+ * u8 *ct = data + crypto_ablkcipher_ivsize(tfm);
+ * unsigned int ctlen = datalen - crypto_ablkcipher_ivsize(tfm);
+ * sg_init_one(&sg, ct, ctlen);
+ * ablkcipher_request_set_crypt(req, &sg, &sg, ctlen, iv);
+ *
+ * ==> After decryption (which hopefully does not return EBADMSG), the ct
+ * pointer now points to the plaintext of size ctlen.
+ */
+
+#include <linux/module.h>
+#include <linux/crypto.h>
+#include <linux/scatterlist.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/internal/skcipher.h>
+
+struct crypto_kw_ctx {
+ struct crypto_cipher *child;
+};
+
+struct crypto_rfc3394_ctx {
+ struct crypto_ablkcipher *child;
+};
+
+struct crypto_kw_block {
+#define SEMIBSIZE 8
+ u8 A[SEMIBSIZE];
+ u8 R[SEMIBSIZE];
+};
+
+/* convert a 64 bit integer into its big-endian representation */
+static inline void crypto_kw_cpu_to_be64(u64 val, u8 *buf)
+{
+ struct s {
+ __be64 conv;
+ };
+ struct s *conversion = (struct s *) buf;
+
+ conversion->conv = cpu_to_be64(val);
+}
+
+static inline void crypto_kw_copy_scatterlist(struct scatterlist *src,
+ struct scatterlist *dst)
+{
+ memcpy(dst, src, sizeof(struct scatterlist));
+}
+
+/* find the next memory block in scatter_walk of given size */
+static inline bool crypto_kw_scatterwalk_find(struct scatter_walk *walk,
+ unsigned int size)
+{
+ int n = scatterwalk_clamp(walk, size);
+
+ if (!n) {
+ scatterwalk_start(walk, sg_next(walk->sg));
+ n = scatterwalk_clamp(walk, size);
+ }
+ if (n != size)
+ return false;
+ return true;
+}
+
+/*
+ * Copy out the memory block from or to scatter_walk of requested size
+ * before the walk->offset pointer. The scatter_walk is processed in reverse.
+ */
+static bool crypto_kw_scatterwalk_memcpy_rev(struct scatter_walk *walk,
+ unsigned int *walklen,
+ u8 *buf, unsigned int bufsize,
+ bool out)
+{
+ u8 *ptr = NULL;
+
+ walk->offset -= bufsize;
+ if (!crypto_kw_scatterwalk_find(walk, bufsize))
+ return false;
+
+ ptr = scatterwalk_map(walk);
+ if (out)
+ memcpy(ptr, buf, bufsize);
+ else
+ memcpy(buf, ptr, bufsize);
+ *walklen -= bufsize;
+ scatterwalk_unmap(ptr);
+ scatterwalk_done(walk, 0, *walklen);
+
+ return true;
+}
+
+/*
+ * Copy the memory block from or to scatter_walk of requested size
+ * at the walk->offset pointer. The scatter_walk is processed forward.
+ */
+static bool crypto_kw_scatterwalk_memcpy(struct scatter_walk *walk,
+ unsigned int *walklen,
+ u8 *buf, unsigned int bufsize,
+ bool out)
+{
+ u8 *ptr = NULL;
+
+ if (!crypto_kw_scatterwalk_find(walk, bufsize))
+ return false;
+
+ ptr = scatterwalk_map(walk);
+ if (out)
+ memcpy(ptr, buf, bufsize);
+ else
+ memcpy(buf, ptr, bufsize);
+ *walklen -= bufsize;
+ scatterwalk_unmap(ptr);
+ scatterwalk_advance(walk, bufsize);
+ scatterwalk_done(walk, 0, *walklen);
+
+ return true;
+}
+
+static int crypto_kw_decrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_kw_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+
+ unsigned long alignmask = max_t(unsigned long, 3,
+ crypto_cipher_alignmask(child));
+ unsigned int src_nbytes, dst_nbytes, i;
+ struct scatter_walk src_walk, dst_walk;
+
+ u8 blockbuf[sizeof(struct crypto_kw_block) + alignmask];
+ struct crypto_kw_block *block = (struct crypto_kw_block *)
+ PTR_ALIGN(blockbuf + 0, alignmask + 1);
+
+ u8 tmpblock[SEMIBSIZE];
+ u64 t = 6 * ((nbytes) >> 3);
+ int ret = 0;
+ struct scatterlist lsrc, ldst;
+
+ /*
+ * Require at least 2 semiblocks (note, the 3rd semiblock that is
+ * required by SP800-38F is the IV obtained from desc->info).
+ */
+ if (nbytes < (2 * SEMIBSIZE) || nbytes % SEMIBSIZE)
+ return -EINVAL;
+ memcpy(block->A, desc->info, SEMIBSIZE);
+ /*
+ * src scatterlist is read only. dst scatterlist is r/w. During the
+ * first loop, src points to req->src and dst to req->dst. For any
+ * subsequent round, the code operates on req->dst only.
+ */
+ crypto_kw_copy_scatterlist(src, &lsrc);
+ crypto_kw_copy_scatterlist(dst, &ldst);
+
+ for (i = 0; i < 6; i++) {
+ u8 tbe_buffer[SEMIBSIZE + alignmask];
+ /* alignment for the crypto_xor operation */
+ u8 *tbe = PTR_ALIGN(tbe_buffer + 0, alignmask + 1);
+ bool first_loop = true;
+
+ scatterwalk_start(&src_walk, &lsrc);
+ scatterwalk_start(&dst_walk, &ldst);
+ src_nbytes = dst_nbytes = nbytes;
+
+ /*
+ * Point to the end of the scatterlists to walk them backwards.
+ */
+ src_walk.offset += src_nbytes;
+ dst_walk.offset += dst_nbytes;
+
+ while (src_nbytes) {
+ if (!crypto_kw_scatterwalk_memcpy_rev(&src_walk,
+ &src_nbytes, block->R, SEMIBSIZE, false))
+ goto out;
+ crypto_kw_cpu_to_be64(t, tbe);
+ crypto_xor(block->A, tbe, SEMIBSIZE);
+ t--;
+ crypto_cipher_decrypt_one(child, (u8 *)block,
+ (u8 *)block);
+ if (!first_loop) {
+ /*
+ * Copy block->R from last round into
+ * place.
+ */
+ if (!crypto_kw_scatterwalk_memcpy_rev(&dst_walk,
+ &dst_nbytes, tmpblock, SEMIBSIZE, true))
+ goto out;
+ } else {
+ first_loop = false;
+ }
+
+ /*
+ * Store current block->R in temp buffer to
+ * copy it in place in the next round.
+ */
+ memcpy(tmpblock, block->R, SEMIBSIZE);
+ }
+
+ /* process the final block->R */
+ if (!crypto_kw_scatterwalk_memcpy_rev(&dst_walk, &dst_nbytes,
+ tmpblock, SEMIBSIZE, true))
+ goto out;
+
+ /* we now start to operate on the dst buffers only */
+ crypto_kw_copy_scatterlist(dst, &lsrc);
+ crypto_kw_copy_scatterlist(dst, &ldst);
+ }
+
+ if (crypto_memneq("\xA6\xA6\xA6\xA6\xA6\xA6\xA6\xA6", block->A,
+ SEMIBSIZE))
+ ret = -EBADMSG;
+
+out:
+ memzero_explicit(block, sizeof(struct crypto_kw_block));
+ memzero_explicit(tmpblock, sizeof(tmpblock));
+
+ return ret;
+}
+
+static int crypto_kw_encrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct crypto_blkcipher *tfm = desc->tfm;
+ struct crypto_kw_ctx *ctx = crypto_blkcipher_ctx(tfm);
+ struct crypto_cipher *child = ctx->child;
+
+ unsigned long alignmask = max_t(unsigned long, 3,
+ crypto_cipher_alignmask(child));
+ unsigned int src_nbytes, dst_nbytes, i;
+ struct scatter_walk src_walk, dst_walk;
+
+ u8 blockbuf[sizeof(struct crypto_kw_block) + alignmask];
+ struct crypto_kw_block *block = (struct crypto_kw_block *)
+ PTR_ALIGN(blockbuf + 0, alignmask + 1);
+
+ u8 tmpblock[SEMIBSIZE];
+ u64 t = 1;
+ struct scatterlist lsrc, ldst;
+ int ret = -EAGAIN;
+
+ /*
+ * Require at least 2 semiblocks (note, the 3rd semiblock that is
+ * required by SP800-38F is the IV that is returned to the caller
+ * via desc->info, so src and dst may have the same size here).
+ * Also ensure that the given data is aligned to a semiblock.
+ */
+ if (nbytes < (2 * SEMIBSIZE) || nbytes % SEMIBSIZE)
+ return -EINVAL;
+
+ memcpy(block->A, "\xA6\xA6\xA6\xA6\xA6\xA6\xA6\xA6", SEMIBSIZE);
+
+ /*
+ * src scatterlist is read only. dst scatterlist is r/w. During the
+ * first loop, src points to req->src and dst to req->dst. For any
+ * subsequent round, the code operates on req->dst only.
+ */
+ crypto_kw_copy_scatterlist(src, &lsrc);
+ crypto_kw_copy_scatterlist(dst, &ldst);
+
+ for (i = 0; i < 6; i++) {
+ u8 tbe_buffer[SEMIBSIZE + alignmask];
+ u8 *tbe = PTR_ALIGN(tbe_buffer + 0, alignmask + 1);
+ bool first_loop = true;
+
+ scatterwalk_start(&src_walk, &lsrc);
+ scatterwalk_start(&dst_walk, &ldst);
+ src_nbytes = dst_nbytes = nbytes;
+
+ while (src_nbytes) {
+ if (!crypto_kw_scatterwalk_memcpy(&src_walk,
+ &src_nbytes, block->R, SEMIBSIZE, false))
+ goto out;
+ crypto_cipher_encrypt_one(child, (u8 *)block,
+ (u8 *)block);
+ crypto_kw_cpu_to_be64(t, tbe);
+ crypto_xor(block->A, tbe, SEMIBSIZE);
+ t++;
+ if (!first_loop) {
+ /*
+ * Copy block->R from last round into
+ * place.
+ */
+ if (!crypto_kw_scatterwalk_memcpy(&dst_walk,
+ &dst_nbytes, tmpblock, SEMIBSIZE, true))
+ goto out;
+ } else {
+ first_loop = false;
+ }
+
+ /*
+ * Store current block->R in temp buffer to
+ * copy it in place in the next round.
+ */
+ memcpy(tmpblock, block->R, SEMIBSIZE);
+ }
+
+ /* process the final block->R */
+ if (!crypto_kw_scatterwalk_memcpy(&dst_walk, &dst_nbytes,
+ tmpblock, SEMIBSIZE, true))
+ goto out;
+
+ /* we now start to operate on the dst buffers only */
+ crypto_kw_copy_scatterlist(dst, &lsrc);
+ crypto_kw_copy_scatterlist(dst, &ldst);
+ }
+
+ /* establish the final IV */
+ memcpy(desc->info, block->A, SEMIBSIZE);
+
+ ret = 0;
+out:
+ memzero_explicit(block, sizeof(struct crypto_kw_block));
+ memzero_explicit(tmpblock, sizeof(tmpblock));
+ return ret;
+}
+
+static int crypto_kw_setkey(struct crypto_tfm *parent, const u8 *key,
+ unsigned int keylen)
+{
+ struct crypto_kw_ctx *ctx = crypto_tfm_ctx(parent);
+ struct crypto_cipher *child = ctx->child;
+ int err;
+
+ crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+ crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_cipher_setkey(child, key, keylen);
+ crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
+ CRYPTO_TFM_RES_MASK);
+ return err;
+}
+
+static int crypto_kw_init_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
+ struct crypto_spawn *spawn = crypto_instance_ctx(inst);
+ struct crypto_kw_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_cipher *cipher;
+
+ cipher = crypto_spawn_cipher(spawn);
+ if (IS_ERR(cipher))
+ return PTR_ERR(cipher);
+
+ ctx->child = cipher;
+ return 0;
+}
+
+static void crypto_kw_exit_tfm(struct crypto_tfm *tfm)
+{
+ struct crypto_kw_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ crypto_free_cipher(ctx->child);
+}
+
+static struct crypto_instance *crypto_kw_alloc(struct rtattr **tb)
+{
+ struct crypto_instance *inst = NULL;
+ struct crypto_alg *alg = NULL;
+ int err;
+
+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
+ if (err)
+ return ERR_PTR(err);
+
+ alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER,
+ CRYPTO_ALG_TYPE_MASK);
+ if (IS_ERR(alg))
+ return ERR_CAST(alg);
+
+ inst = ERR_PTR(-EINVAL);
+ /* Section 5.1 requirement for KW and KWP */
+ if (alg->cra_blocksize != 2 * SEMIBSIZE)
+ goto err;
+
+ inst = crypto_alloc_instance("kw", alg);
+ if (IS_ERR(inst))
+ goto err;
+
+ inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
+ inst->alg.cra_priority = alg->cra_priority;
+ inst->alg.cra_blocksize = SEMIBSIZE;
+ inst->alg.cra_alignmask = 0;
+ inst->alg.cra_type = &crypto_blkcipher_type;
+ inst->alg.cra_blkcipher.ivsize = SEMIBSIZE;
+ inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
+ inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
+
+ inst->alg.cra_ctxsize = sizeof(struct crypto_kw_ctx);
+
+ inst->alg.cra_init = crypto_kw_init_tfm;
+ inst->alg.cra_exit = crypto_kw_exit_tfm;
+
+ inst->alg.cra_blkcipher.setkey = crypto_kw_setkey;
+ inst->alg.cra_blkcipher.encrypt = crypto_kw_encrypt;
+ inst->alg.cra_blkcipher.decrypt = crypto_kw_decrypt;
+
+err:
+ crypto_mod_put(alg);
+ return inst;
+}
+
+static void crypto_kw_free(struct crypto_instance *inst)
+{
+ crypto_drop_spawn(crypto_instance_ctx(inst));
+ kfree(inst);
+}
+
+static struct crypto_template crypto_kw_tmpl = {
+ .name = "kw",
+ .alloc = crypto_kw_alloc,
+ .free = crypto_kw_free,
+ .module = THIS_MODULE,
+};
+
+static int __init crypto_kw_init(void)
+{
+ return crypto_register_template(&crypto_kw_tmpl);
+}
+
+static void __exit crypto_kw_exit(void)
+{
+ crypto_unregister_template(&crypto_kw_tmpl);
+}
+
+module_init(crypto_kw_init);
+module_exit(crypto_kw_exit);
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Stephan Mueller <smueller@chronox.de>");
+MODULE_DESCRIPTION("Key Wrapping (RFC3394 / NIST SP800-38F)");
+MODULE_ALIAS_CRYPTO("kw");
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index d463978..4744437 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1021,6 +1021,15 @@ static int __test_skcipher(struct crypto_ablkcipher *tfm, int enc,
ret = -EINVAL;
goto out;
}
+ if (template[i].ivout &&
+ memcmp(req->info, template[i].ivout,
+ crypto_ablkcipher_ivsize(tfm))) {
+ pr_err("alg: skcipher%s: IV-test %d failed on %s for %s\n",
+ d, j, e, algo);
+ hexdump(req->info, crypto_ablkcipher_ivsize(tfm));
+ ret = -EINVAL;
+ goto out;
+ }
}
j = 0;
@@ -3097,6 +3106,22 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
+ .alg = "kw(aes)",
+ .test = alg_test_skcipher,
+ .fips_allowed = 1,
+ .suite = {
+ .cipher = {
+ .enc = {
+ .vecs = aes_kw_enc_tv_template,
+ .count = ARRAY_SIZE(aes_kw_enc_tv_template)
+ },
+ .dec = {
+ .vecs = aes_kw_dec_tv_template,
+ .count = ARRAY_SIZE(aes_kw_dec_tv_template)
+ }
+ }
+ }
+ }, {
.alg = "lrw(aes)",
.test = alg_test_skcipher,
.suite = {
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 62e2485..a9845fc 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -49,6 +49,7 @@ struct hash_testvec {
struct cipher_testvec {
char *key;
char *iv;
+ char *ivout;
char *input;
char *result;
unsigned short tap[MAX_TAP];
@@ -20704,6 +20705,46 @@ static struct aead_testvec aes_ccm_rfc4309_dec_tv_template[] = {
};
/*
+ * All key wrapping test vectors taken from
+ * http://csrc.nist.gov/groups/STM/cavp/documents/mac/kwtestvectors.zip
+ *
+ * Note: as documented in keywrap.c, the ivout for encryption is the first
+ * semiblock of the ciphertext from the test vector. For decryption, iv is
+ * the first semiblock of the ciphertext.
+ */
+static struct cipher_testvec aes_kw_enc_tv_template[] = {
+ {
+ .key = "\x75\x75\xda\x3a\x93\x60\x7c\xc2"
+ "\xbf\xd8\xce\xc7\xaa\xdf\xd9\xa6",
+ .klen = 16,
+ .input = "\x42\x13\x6d\x3c\x38\x4a\x3e\xea"
+ "\xc9\x5a\x06\x6f\xd2\x8f\xed\x3f",
+ .ilen = 16,
+ .result = "\xf6\x85\x94\x81\x6f\x64\xca\xa3"
+ "\xf5\x6f\xab\xea\x25\x48\xf5\xfb",
+ .rlen = 16,
+ .ivout = "\x03\x1f\x6b\xd7\xe6\x1e\x64\x3d",
+ },
+};
+
+static struct cipher_testvec aes_kw_dec_tv_template[] = {
+ {
+ .key = "\x80\xaa\x99\x73\x27\xa4\x80\x6b"
+ "\x6a\x7a\x41\xa5\x2b\x86\xc3\x71"
+ "\x03\x86\xf9\x32\x78\x6e\xf7\x96"
+ "\x76\xfa\xfb\x90\xb8\x26\x3c\x5f",
+ .klen = 32,
+ .input = "\xd3\x3d\x3d\x97\x7b\xf0\xa9\x15"
+ "\x59\xf9\x9c\x8a\xcd\x29\x3d\x43",
+ .ilen = 16,
+ .result = "\x0a\x25\x6b\xa7\x5c\xfa\x03\xaa"
+ "\xa0\x2b\xa9\x42\x03\xf1\x5b\xaa",
+ .rlen = 16,
+ .iv = "\x42\x3c\x96\x0d\x8a\x2a\xc4\xc1",
+ },
+};
+
+/*
* ANSI X9.31 Continuous Pseudo-Random Number Generator (AES mode)
* test vectors, taken from Appendix B.2.9 and B.2.10:
* http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf
--
2.1.0
* Re: [PATCH v2] crypto: add key wrapping block chaining mode
2015-04-25 22:08 ` [PATCH v2] crypto: add key wrapping block chaining mode Stephan Mueller
@ 2015-04-27 8:26 ` Herbert Xu
2015-04-27 14:34 ` Stephan Mueller
2015-04-27 8:29 ` Herbert Xu
1 sibling, 1 reply; 20+ messages in thread
From: Herbert Xu @ 2015-04-27 8:26 UTC (permalink / raw)
To: Stephan Mueller; +Cc: linux-crypto
On Sun, Apr 26, 2015 at 12:08:20AM +0200, Stephan Mueller wrote:
>
> + /*
> + * Point to the end of the scatterlists to walk them backwards.
> + */
> + src_walk.offset += src_nbytes;
> + dst_walk.offset += dst_nbytes;
This doesn't work. Our primitives don't support walking backwards
over an SG list and what you have simply doesn't work except for the
trivial case of a completely linear buffer.
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v2] crypto: add key wrapping block chaining mode
2015-04-25 22:08 ` [PATCH v2] crypto: add key wrapping block chaining mode Stephan Mueller
2015-04-27 8:26 ` Herbert Xu
@ 2015-04-27 8:29 ` Herbert Xu
2015-04-27 14:58 ` Stephan Mueller
1 sibling, 1 reply; 20+ messages in thread
From: Herbert Xu @ 2015-04-27 8:29 UTC (permalink / raw)
To: Stephan Mueller; +Cc: linux-crypto
On Sun, Apr 26, 2015 at 12:08:20AM +0200, Stephan Mueller wrote:
> This patch implements the AES key wrapping as specified in
> NIST SP800-38F and RFC3394.
This is my attempt at turning kw into a givcipher. The encrypt
part is complete but untested as I gave up after finding the
reverse SG problem with your decrypt code.
/*
* Key Wrapping: RFC3394 / NIST SP800-38F
*
* Implemented modes as defined in NIST SP800-38F: Kw
*
* Copyright (C) 2015, Stephan Mueller <smueller@chronox.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, and the entire permission notice in its entirety,
* including the disclaimer of warranties.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The name of the author may not be used to endorse or promote
* products derived from this software without specific prior
* written permission.
*
* ALTERNATIVELY, this product may be distributed under the terms of
* the GNU General Public License, in which case the provisions of the GPL2
* are required INSTEAD OF the above restrictions. (This clause is
* necessary due to a potential bad interaction between the GPL and
* the restrictions contained in a BSD-style copyright.)
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
* WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
* OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
* BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
* USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
* DAMAGE.
*/
/*
* Note for using key wrapping:
*
* * The result of the encryption operation is the ciphertext starting
* with the 2nd semiblock. The first semiblock is provided as the IV.
* The IV used to start the encryption operation is the default IV.
*
* * The input for the decryption is the first semiblock handed in as an
* IV. The ciphertext is the data starting with the 2nd semiblock. The
* return code of the decryption operation will be EBADMSG in case an
* integrity error occurs.
*
* To obtain the full result of an encryption as expected by SP800-38F, the
* caller must allocate a buffer of plaintext + 8 bytes:
*
* unsigned int datalen = ptlen + crypto_ablkcipher_ivsize(tfm);
* u8 data[datalen];
* u8 *iv = data;
* u8 *pt = data + crypto_ablkcipher_ivsize(tfm);
* <ensure that pt contains the plaintext of size ptlen>
* sg_init_one(&sg, pt, ptlen);
* ablkcipher_request_set_crypt(req, &sg, &sg, ptlen, iv);
*
* ==> After encryption, data now contains full KW result as per SP800-38F.
*
* In case of decryption, ciphertext now already has the expected length
* and must be segmented appropriately:
*
* unsigned int datalen = CTLEN;
* u8 data[datalen];
* <ensure that data contains full ciphertext>
* u8 *iv = data;
* u8 *ct = data + crypto_ablkcipher_ivsize(tfm);
* unsigned int ctlen = datalen - crypto_ablkcipher_ivsize(tfm);
* sg_init_one(&sg, ct, ctlen);
* ablkcipher_request_set_crypt(req, &sg, &sg, ctlen, iv);
*
* ==> After decryption (which hopefully does not return EBADMSG), the ct
* pointer now points to the plaintext of size ctlen.
*/
#include <crypto/internal/skcipher.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
struct crypto_kw_ctx {
struct crypto_cipher *child;
};
struct crypto_rfc3394_ctx {
struct crypto_ablkcipher *child;
};
struct crypto_kw_block {
#define SEMIBSIZE sizeof(__be64)
union {
struct {
__be64 A;
__be64 R;
};
u8 V[2 * SEMIBSIZE];
};
};
/* convert a 64 bit integer into its big-endian representation */
static inline void crypto_kw_cpu_to_be64(u64 val, u8 *buf)
{
struct s {
__be64 conv;
};
struct s *conversion = (struct s *) buf;
conversion->conv = cpu_to_be64(val);
}
static inline void crypto_kw_copy_scatterlist(struct scatterlist *src,
struct scatterlist *dst)
{
memcpy(dst, src, sizeof(struct scatterlist));
}
/* find the next memory block in scatter_walk of given size */
static inline bool crypto_kw_scatterwalk_find(struct scatter_walk *walk,
unsigned int size)
{
int n = scatterwalk_clamp(walk, size);
if (!n) {
scatterwalk_start(walk, sg_next(walk->sg));
n = scatterwalk_clamp(walk, size);
}
if (n != size)
return false;
return true;
}
/*
* Copy out the memory block from or to scatter_walk of requested size
* before the walk->offset pointer. The scatter_walk is processed in reverse.
*/
static bool crypto_kw_scatterwalk_memcpy_rev(struct scatter_walk *walk,
unsigned int *walklen,
u8 *buf, unsigned int bufsize,
bool out)
{
u8 *ptr = NULL;
walk->offset -= bufsize;
if (!crypto_kw_scatterwalk_find(walk, bufsize))
return false;
ptr = scatterwalk_map(walk);
if (out)
memcpy(ptr, buf, bufsize);
else
memcpy(buf, ptr, bufsize);
*walklen -= bufsize;
scatterwalk_unmap(ptr);
scatterwalk_done(walk, 0, *walklen);
return true;
}
/*
* Copy the memory block from or to scatter_walk of requested size
* at the walk->offset pointer. The scatter_walk is processed forward.
*/
static bool crypto_kw_scatterwalk_memcpy(struct scatter_walk *walk,
unsigned int *walklen,
u8 *buf, unsigned int bufsize,
bool out)
{
u8 *ptr = NULL;
if (!crypto_kw_scatterwalk_find(walk, bufsize))
return false;
ptr = scatterwalk_map(walk);
if (out)
memcpy(ptr, buf, bufsize);
else
memcpy(buf, ptr, bufsize);
*walklen -= bufsize;
scatterwalk_unmap(ptr);
scatterwalk_advance(walk, bufsize);
scatterwalk_done(walk, 0, *walklen);
return true;
}
static int crypto_kw_decrypt(struct ablkcipher_request *req)
{
struct scatterlist *src = req->src;
struct scatterlist *dst = req->dst;
unsigned int nbytes = req->nbytes;
struct crypto_ablkcipher *tfm = crypto_ablkcipher_tfm(req);
struct crypto_kw_ctx *ctx = crypto_ablkcipher_ctx(tfm);
struct crypto_cipher *child = ctx->child;
unsigned long alignmask = crypto_cipher_alignmask(child) | 7;
unsigned int i;
struct blkcipher_walk walk;
u8 blockbuf[sizeof(struct crypto_kw_block) + alignmask];
struct crypto_kw_block *block = (struct crypto_kw_block *)
PTR_ALIGN(blockbuf + 0, alignmask + 1);
u64 t = 6 * ((nbytes) >> 3);
int ret;
/*
* Require at least 2 semiblocks (note, the 3rd semiblock that is
* required by SP800-38F is the IV).
*/
if (nbytes < (2 * SEMIBSIZE) || nbytes % SEMIBSIZE)
return -EINVAL;
/*
* src scatterlist is read only. dst scatterlist is r/w. During the
* first loop, src points to req->src and dst to req->dst. For any
* subsequent round, the code operates on req->dst only.
*/
for (i = 0; i < 6; i++) {
be64 tbe;
blkcipher_walk_init(&walk, dst, src, nbytes);
ret = blkcipher_walk_virt_ablkcipher(req, &walk);
if (ret)
goto out;
/*
* Point to the end of the scatterlists to walk them backwards.
*/
src_walk.offset += src_nbytes;
dst_walk.offset += dst_nbytes;
while (src_nbytes) {
if (!crypto_kw_scatterwalk_memcpy_rev(&src_walk,
&src_nbytes, block->R, SEMIBSIZE, false))
goto out;
crypto_kw_cpu_to_be64(t, tbe);
crypto_xor(block->A, tbe, SEMIBSIZE);
t--;
crypto_cipher_decrypt_one(child, (u8*)block,
(u8*)block);
if (!first_loop) {
/*
* Copy block->R from last round into
* place.
*/
if (!crypto_kw_scatterwalk_memcpy_rev(&dst_walk,
&dst_nbytes, tmpblock, SEMIBSIZE, true))
goto out;
} else {
first_loop = false;
}
/*
* Store current block->R in temp buffer to
* copy it in place in the next round.
*/
memcpy(&tmpblock, block->R, SEMIBSIZE);
}
/* process the final block->R */
if (!crypto_kw_scatterwalk_memcpy_rev(&dst_walk, &dst_nbytes,
tmpblock, SEMIBSIZE, true))
goto out;
/* we now start to operate on the dst buffers only */
crypto_kw_copy_scatterlist(dst, &lsrc);
crypto_kw_copy_scatterlist(dst, &ldst);
}
if (crypto_memneq("\xA6\xA6\xA6\xA6\xA6\xA6\xA6\xA6", block->A,
SEMIBSIZE))
ret = -EBADMSG;
out:
memzero_explicit(&block, sizeof(struct crypto_kw_block));
memzero_explicit(tmpblock, sizeof(tmpblock));
return ret;
}
static int crypto_kw_encrypt(struct ablkcipher_request *req)
{
struct scatterlist *src = req->src;
struct scatterlist *dst = req->dst;
unsigned int nbytes = req->nbytes;
struct crypto_ablkcipher *tfm = crypto_ablkcipher_tfm(req);
struct crypto_kw_ctx *ctx = crypto_ablkcipher_ctx(tfm);
struct crypto_cipher *child = ctx->child;
unsigned long alignmask = crypto_cipher_alignmask(child) | 7;
unsigned int i;
struct blkcipher_walk walk;
u8 blockbuf[sizeof(struct crypto_kw_block) + alignmask];
struct crypto_kw_block *block = (struct crypto_kw_block *)
PTR_ALIGN(blockbuf + 0, alignmask + 1);
u64 t = 1;
int ret;
/*
* Require at least 2 semiblocks (note, the 3rd semiblock that is
* required by SP800-38F is the IV that is returned via the request's
* IV buffer, so src and dst may have the same size here).
* Also ensure that the given data is aligned to a semiblock.
*/
if (nbytes < (2 * SEMIBSIZE) || nbytes % SEMIBSIZE)
return -EINVAL;
/*
* src scatterlist is read only. dst scatterlist is r/w. During the
* first loop, src points to req->src and dst to req->dst. For any
* subsequent round, the code operates on req->dst only.
*/
for (i = 0; i < 6; i++) {
__be64 tbe;
blkcipher_walk_init(&walk, dst, src, nbytes);
ret = blkcipher_walk_virt_ablkcipher(req, &walk);
if (ret)
goto out;
while (walk.nbytes) {
unsigned int leftover = walk.nbytes;
__be64 *vsrc = (__be64 *)walk.src.virt.addr;
__be64 *vdst = (__be64 *)walk.dst.virt.addr;
block->A = *(__be64 *)walk.iv;
do {
block->R = *vsrc++;
crypto_cipher_encrypt_one(child, block->V,
block->V);
*vdst++ = block->R;
tbe = cpu_to_be64(t++);
crypto_xor((u8 *)&block->A, (u8 *)&tbe, SEMIBSIZE);
} while ((leftover -= SEMIBSIZE) >= SEMIBSIZE);
*(__be64 *)walk.iv = block->A;
ret = blkcipher_walk_done(desc, &walk, leftover);
if (ret)
goto out;
}
/* we now start to operate on the dst buffers only */
src = dst;
}
ret = 0;
out:
memzero_explicit(block, sizeof(struct crypto_kw_block));
return ret;
}
static int crypto_kw_givencrypt(struct skcipher_givcrypt_request *req)
{
memcpy(req->giv, "\xA6\xA6\xA6\xA6\xA6\xA6\xA6\xA6", SEMIBSIZE);
memcpy(req->creq.info, req->giv, SEMIBSIZE);
return crypto_kw_encrypt(&req->creq);
}
static int crypto_kw_givdecrypt(struct skcipher_givcrypt_request *req)
{
int err = crypto_kw_decrypt(&req->creq);
if (err)
return err;
return memcmp(req->creq.info, "\xA6\xA6\xA6\xA6\xA6\xA6\xA6\xA6",
SEMIBSIZE) ? -EBADMSG : 0;
}
static int crypto_kw_setkey(struct crypto_tfm *parent, const u8 *key,
unsigned int keylen)
{
struct crypto_kw_ctx *ctx = crypto_tfm_ctx(parent);
struct crypto_cipher *child = ctx->child;
int err;
crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
CRYPTO_TFM_REQ_MASK);
err = crypto_cipher_setkey(child, key, keylen);
crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
CRYPTO_TFM_RES_MASK);
return err;
}
static int crypto_kw_init_tfm(struct crypto_tfm *tfm)
{
struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
struct crypto_spawn *spawn = crypto_instance_ctx(inst);
struct crypto_kw_ctx *ctx = crypto_tfm_ctx(tfm);
struct crypto_cipher *cipher;
cipher = crypto_spawn_cipher(spawn);
if (IS_ERR(cipher))
return PTR_ERR(cipher);
ctx->child = cipher;
return 0;
}
static void crypto_kw_exit_tfm(struct crypto_tfm *tfm)
{
struct crypto_kw_ctx *ctx = crypto_tfm_ctx(tfm);
crypto_free_cipher(ctx->child);
}
static struct crypto_instance *crypto_kw_alloc(struct rtattr **tb)
{
struct crypto_instance *inst = NULL;
struct crypto_alg *alg = NULL;
int err;
err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_GIVCIPHER |
CRYPTO_ALG_GENIV);
if (err)
return ERR_PTR(err);
alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER,
CRYPTO_ALG_TYPE_MASK);
if (IS_ERR(alg))
return ERR_CAST(alg);
inst = ERR_PTR(-EINVAL);
/* Section 5.1 requirement for KW and KWP */
if (alg->cra_blocksize != 2 * SEMIBSIZE)
goto err;
inst = crypto_alloc_instance("kw", alg);
if (IS_ERR(inst))
goto err;
inst->alg.cra_flags = CRYPTO_ALG_TYPE_GIVCIPHER | CRYPTO_ALG_GENIV;
inst->alg.cra_priority = alg->cra_priority;
inst->alg.cra_blocksize = SEMIBSIZE;
inst->alg.cra_alignmask = 7;
inst->alg.cra_type = &crypto_givcipher_type;
inst->alg.cra_ablkcipher.ivsize = SEMIBSIZE;
inst->alg.cra_ablkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
inst->alg.cra_ablkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
inst->alg.cra_ctxsize = sizeof(struct crypto_kw_ctx);
inst->alg.cra_init = crypto_kw_init_tfm;
inst->alg.cra_exit = crypto_kw_exit_tfm;
inst->alg.cra_ablkcipher.setkey = crypto_kw_setkey;
inst->alg.cra_ablkcipher.encrypt = crypto_kw_encrypt;
inst->alg.cra_ablkcipher.decrypt = crypto_kw_decrypt;
inst->alg.cra_ablkcipher.givencrypt = crypto_kw_givencrypt;
inst->alg.cra_ablkcipher.givdecrypt = crypto_kw_givdecrypt;
err:
crypto_mod_put(alg);
return inst;
}
static void crypto_kw_free(struct crypto_instance *inst)
{
crypto_drop_spawn(crypto_instance_ctx(inst));
kfree(inst);
}
static struct crypto_template crypto_kw_tmpl = {
.name = "kw",
.alloc = crypto_kw_alloc,
.free = crypto_kw_free,
.module = THIS_MODULE,
};
static int __init crypto_kw_init(void)
{
return crypto_register_template(&crypto_kw_tmpl);
}
static void __exit crypto_kw_exit(void)
{
crypto_unregister_template(&crypto_kw_tmpl);
}
module_init(crypto_kw_init);
module_exit(crypto_kw_exit);
MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Stephan Mueller <smueller@chronox.de>");
MODULE_DESCRIPTION("Key Wrapping (RFC3394 / NIST SP800-38F)");
MODULE_ALIAS_CRYPTO("kw");
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v2] crypto: add key wrapping block chaining mode
2015-04-27 8:26 ` Herbert Xu
@ 2015-04-27 14:34 ` Stephan Mueller
2015-04-28 1:10 ` Herbert Xu
0 siblings, 1 reply; 20+ messages in thread
From: Stephan Mueller @ 2015-04-27 14:34 UTC (permalink / raw)
To: Herbert Xu; +Cc: linux-crypto
On Monday, 27 April 2015 16:26:07 Herbert Xu wrote:
Hi Herbert,
>On Sun, Apr 26, 2015 at 12:08:20AM +0200, Stephan Mueller wrote:
>> + /*
>> + * Point to the end of the scatterlists to walk them
backwards.
>> + */
>> + src_walk.offset += src_nbytes;
>> + dst_walk.offset += dst_nbytes;
>
>This doesn't work. Our primitives don't support walking backwards
>over an SG list and what you have simply doesn't work except for the
>trivial case of a completely linear buffer.
Why do you think that will not work? I thought that the code works when the
non-linear scatterlists are at least broken at an 8-byte boundary.
>
>Cheers,
Ciao
Stephan
* Re: [PATCH v2] crypto: add key wrapping block chaining mode
2015-04-27 8:29 ` Herbert Xu
@ 2015-04-27 14:58 ` Stephan Mueller
2015-04-28 1:12 ` Herbert Xu
0 siblings, 1 reply; 20+ messages in thread
From: Stephan Mueller @ 2015-04-27 14:58 UTC (permalink / raw)
To: Herbert Xu; +Cc: linux-crypto
On Monday, 27 April 2015 16:29:35 Herbert Xu wrote:
Hi Herbert,
>On Sun, Apr 26, 2015 at 12:08:20AM +0200, Stephan Mueller wrote:
>> This patch implements the AES key wrapping as specified in
>> NIST SP800-38F and RFC3394.
>
>This is my attempt at turning kw into a givcipher. The encrypt
>part is complete but untested as I gave up after finding the
>reverse SG problem with your decrypt code.
Is it that easy? I was struggling to understand what to do in the alloc
function.
Thank you very much for that hint.
>static int crypto_kw_givdecrypt(struct skcipher_givcrypt_request *req)
>{
> int err = crypto_kw_decrypt(&req->creq);
>
> if (err)
> return err;
>
> return memcmp(req->creq.info, "\xA6\xA6\xA6\xA6\xA6\xA6\xA6\xA6",
> SEMIBSIZE) ? -EBADMSG : 0;
This memcmp implies that the final block->A from the decrypt is memcpy'ed to
req->creq.info. I wanted to avoid any additional memcpy calls to not hurt
performance even more.
Ciao
Stephan
* Re: [PATCH v2] SP800-38F / RFC3394 key wrapping
2015-04-25 22:07 [PATCH v2] SP800-38F / RFC3394 key wrapping Stephan Mueller
2015-04-25 22:08 ` [PATCH v2] crypto: add key wrapping block chaining mode Stephan Mueller
@ 2015-04-28 1:09 ` Herbert Xu
2015-04-28 2:45 ` Stephan Mueller
1 sibling, 1 reply; 20+ messages in thread
From: Herbert Xu @ 2015-04-28 1:09 UTC (permalink / raw)
To: Stephan Mueller; +Cc: linux-crypto
On Sun, Apr 26, 2015 at 12:07:31AM +0200, Stephan Mueller wrote:
> Hi,
>
> Please note that this patch will conflict with the DRBG patch for
> additional seeding sent earlier today. Both add test vectors in
> testmgr.c between the existing hmac() and lrw() due to the ordering
> requirements of testmgr.c.
Can you clarify the use case of this algorithm? In particular,
who is going to use it in the kernel? This doesn't seem to be
a candidate for use via algif since there aren't any, and aren't
likely to be any, hardware implementations.
If we can narrow down who is going to use it perhaps we can then
figure out the appropriate interface for this.
Thanks,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v2] crypto: add key wrapping block chaining mode
2015-04-27 14:34 ` Stephan Mueller
@ 2015-04-28 1:10 ` Herbert Xu
2015-04-28 2:35 ` Stephan Mueller
0 siblings, 1 reply; 20+ messages in thread
From: Herbert Xu @ 2015-04-28 1:10 UTC (permalink / raw)
To: Stephan Mueller; +Cc: linux-crypto
On Mon, Apr 27, 2015 at 04:34:19PM +0200, Stephan Mueller wrote:
>
> Why do you think that will not work? I thought that the code works when the
> non-linear scatterlists are at least broken at an 8-byte boundary.
There is no guarantee that SG lists are set at 8-byte boundaries.
In fact, you need to be able to handle any SG list, including the
worst-case 1-byte per-entry SG lists.
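As an illustration (a hypothetical test setup, nothing from your patch),
an SG list of this shape defeats any walker that needs each semiblock to
be contiguous, which is what crypto_kw_scatterwalk_find() requires:

/* two semiblocks of key data, spread over one SG entry per byte */
struct scatterlist sg[16];
u8 buf[16];
int i;

sg_init_table(sg, ARRAY_SIZE(sg));
for (i = 0; i < ARRAY_SIZE(sg); i++)
	sg_set_buf(&sg[i], &buf[i], 1);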
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v2] crypto: add key wrapping block chaining mode
2015-04-27 14:58 ` Stephan Mueller
@ 2015-04-28 1:12 ` Herbert Xu
0 siblings, 0 replies; 20+ messages in thread
From: Herbert Xu @ 2015-04-28 1:12 UTC (permalink / raw)
To: Stephan Mueller; +Cc: linux-crypto
On Mon, Apr 27, 2015 at 04:58:51PM +0200, Stephan Mueller wrote:
>
> This memcmp implies that the final block->A from the decrypt is memcpy'ed to
> req->creq.info. I wanted to avoid any additional memcpy calls to not hurt
> performance even more.
I was hoping to directly use req->creq.info in the calculation.
The blkcipher_walk code would handle the alignment for it. But
obviously the backward walking issue threw a spanner in the works.
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v2] crypto: add key wrapping block chaining mode
2015-04-28 1:10 ` Herbert Xu
@ 2015-04-28 2:35 ` Stephan Mueller
2015-04-28 2:50 ` Herbert Xu
0 siblings, 1 reply; 20+ messages in thread
From: Stephan Mueller @ 2015-04-28 2:35 UTC (permalink / raw)
To: Herbert Xu; +Cc: linux-crypto
On Tuesday, 28 April 2015 09:10:47 Herbert Xu wrote:
Hi Herbert,
> On Mon, Apr 27, 2015 at 04:34:19PM +0200, Stephan Mueller wrote:
> > Why do you think that will not work? I thought that the code works when
> > the
> > non-linear scatterlists are at least broken at an 8 byte boundary.
>
> There is no guarantee that SG lists are set at 8-byte boundaries.
> In fact, you need to be able to handle any SG list, including the
> worst-case 1-byte per-entry SG lists.
In this case, shouldn't we just have a loop where:
1. from the given endpoint, we go one semiblock back
2. we see how many bytes we get when fetching from the SG list up to the end,
3a. if the answer from 2 is a semiblock or larger -> fetch it and exit
3b. if the answer from 2 is less than a semiblock, fetch the available data,
advance to the next SGL and go to step 2 to fetch the remaining (semiblock
minus already obtained) bytes (see the untested sketch below).
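Something along these lines (an untested sketch; kw_fetch_semiblock is a
made-up helper) could hide steps 2/3a/3b, since scatterwalk_map_and_copy()
internally restarts from the head of the list each time:

/*
 * Fetch the semiblock that ends at absolute offset 'end' into buf by
 * letting the scatterwalk code start over from the list head.
 */
static void kw_fetch_semiblock(struct scatterlist *sgl, unsigned int end,
			       u8 *buf)
{
	scatterwalk_map_and_copy(buf, sgl, end - SEMIBSIZE, SEMIBSIZE, 0);
}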
--
Ciao
Stephan
* Re: [PATCH v2] SP800-38F / RFC3394 key wrapping
2015-04-28 1:09 ` [PATCH v2] SP800-38F / RFC3394 key wrapping Herbert Xu
@ 2015-04-28 2:45 ` Stephan Mueller
2015-04-28 2:54 ` Herbert Xu
0 siblings, 1 reply; 20+ messages in thread
From: Stephan Mueller @ 2015-04-28 2:45 UTC (permalink / raw)
To: Herbert Xu; +Cc: linux-crypto
On Tuesday, 28 April 2015 09:09:41 Herbert Xu wrote:
Hi Herbert,
> On Sun, Apr 26, 2015 at 12:07:31AM +0200, Stephan Mueller wrote:
> > Hi,
> >
> > Please note that this patch will conflict with the DRBG patch for
> > additional seeding sent earlier today. Both add test vectors in
> > testmgr.c between the existing hmac() and lrw() due to the ordering
> > requirements of testmgr.c.
>
> Can you clarify the use case of this algorithm? In particular,
> who is going to use it in the kernel? This doesn't seem to be
> a candidate for use via algif since there aren't any or aren't
> likely going to be any hardware implementations.
>
> If we can narrow down who is going to use it perhaps we can then
> figure out the appropriate interface for this.
The use case I see goes along the lines of dm-crypt and Ext4 crypto, or
ecryptfs:
For the key wrapping they all do, I am thinking about suggesting KW as it has
one advantage no other cipher currently has: it is an authenticated decryption
where I still only need one symmetric key. Yes, KW is inefficient compared to
other ciphers, but for handling small data blobs, it should be just fine.
For example, dm-crypt: dm-crypt currently uses the same cipher used for the
bulk encryption to wrap the LUKS header. Obviously we miss the authentication
check of the data blob. So, we could use other authenticated schemas, like GCM
or authenc(). But they all need either two keys or AAD for which the common
mechanisms typically have no provisions. Therefore, KW is a drop-in
replacement for standard symmetric ciphers where one wants authentication as
well.
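To make that concrete, unwrapping a wrapped key with the kw(aes) blkcipher
from this patch could look roughly like this (untested sketch, error
handling omitted; 'wrapped' holds the full SP800-38F ciphertext, i.e. the
IV semiblock followed by the wrapped key):

struct crypto_blkcipher *tfm = crypto_alloc_blkcipher("kw(aes)", 0, 0);
struct blkcipher_desc desc = { .tfm = tfm, .info = wrapped };
struct scatterlist sg;
int err;

crypto_blkcipher_setkey(tfm, kek, keklen);
/* skip the 8 byte IV semiblock, decrypt the rest in place */
sg_init_one(&sg, wrapped + 8, wrappedlen - 8);
err = crypto_blkcipher_decrypt_iv(&desc, &sg, &sg, wrappedlen - 8);
/* err == -EBADMSG signals a failed authentication check */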
--
Ciao
Stephan
* Re: [PATCH v2] crypto: add key wrapping block chaining mode
2015-04-28 2:35 ` Stephan Mueller
@ 2015-04-28 2:50 ` Herbert Xu
0 siblings, 0 replies; 20+ messages in thread
From: Herbert Xu @ 2015-04-28 2:50 UTC (permalink / raw)
To: Stephan Mueller; +Cc: linux-crypto
On Tue, Apr 28, 2015 at 04:35:57AM +0200, Stephan Mueller wrote:
>
> In this case, shouldn't we just have a loop where:
>
> 1. from the given endpoint, we go a semiblock back
>
> 2. now we see how many bytes we get when fetching the SG list till the end,
>
> 3a. if answer from 2 is semiblock or larger -> fetch it and exit
>
> 3b. if the answer from 2 is less than a semiblock, fetch the available data,
> advance to the next SGL and go to step 2 to fetch the remaining (semiblock
> minus already obtained) bytes.
The problem is that the SG list is not designed to be walked over
backwards. So you always have to start from the beginning and go
to the end, for every block. There is no easy way of saying give
me the previous SG entry; you have to go back to the beginning and find it.
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v2] SP800-38F / RFC3394 key wrapping
2015-04-28 2:45 ` Stephan Mueller
@ 2015-04-28 2:54 ` Herbert Xu
2015-04-28 2:58 ` Stephan Mueller
0 siblings, 1 reply; 20+ messages in thread
From: Herbert Xu @ 2015-04-28 2:54 UTC (permalink / raw)
To: Stephan Mueller; +Cc: linux-crypto
On Tue, Apr 28, 2015 at 04:45:17AM +0200, Stephan Mueller wrote:
>
> The use case I see goes along the lines of dm-crypt and Ext4 crypto, or
> ecryptfs:
>
> For the key wrapping they all do, I am thinking about suggesting KW as it has
> one advantage no other cipher currently has: it is an authenticated decryption
> where I still only need one symmetric key. Yes, KW is inefficient compared to
> other ciphers, but for handling small data blobs, it should be just fine.
>
> For example, dm-crypt: dm-crypt currently uses the same cipher used for the
> bulk encryption to wrap the LUKS header. Obviously we miss the authentication
> check of the data blob. So, we could use other authenticated schemas, like GCM
> or authenc(). But they all need either two keys or AAD for which the common
> mechanisms typically have no provisions. Therefore, KW is a drop-in
> replacement for standard symmetric ciphers where one wants authentication as
> well.
If it's for cases where the data is always linear, we could always
do this outside the crypto API. You can still use AES from the crypto
API to do the actual crypto of course.
By keeping it out of the crypto API you wouldn't have to worry about
SG lists and can simply require the input to be linear u8 * buffers.
However, because this is an algorithm that is not otherwise useful
you'll need to ensure that at least one user is going to be accepted
into the kernel.
The implementation could go into lib.
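For illustration, a minimal sketch of such a linear-buffer wrap (untested;
crypto_kw_wrap is a made-up name, error handling trimmed; needs
<linux/crypto.h> and <crypto/algapi.h> for crypto_xor):

int crypto_kw_wrap(struct crypto_cipher *tfm, u8 *dst, const u8 *src,
		   unsigned int srclen)
{
	unsigned int n = srclen / 8, i, j;
	u64 t = 1;
	u8 block[16];

	if (srclen < 16 || srclen % 8)
		return -EINVAL;

	/* dst = default IV || P[1..n], then wrap in place */
	memcpy(dst, "\xA6\xA6\xA6\xA6\xA6\xA6\xA6\xA6", 8);
	memcpy(dst + 8, src, srclen);

	for (j = 0; j < 6; j++) {
		for (i = 1; i <= n; i++) {
			__be64 tbe = cpu_to_be64(t++);

			/* B = E(K, A || R[i]) */
			memcpy(block, dst, 8);
			memcpy(block + 8, dst + i * 8, 8);
			crypto_cipher_encrypt_one(tfm, block, block);
			/* A = MSB64(B) ^ t, R[i] = LSB64(B) */
			crypto_xor(block, (u8 *)&tbe, 8);
			memcpy(dst, block, 8);
			memcpy(dst + i * 8, block + 8, 8);
		}
	}

	return 0;
}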
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v2] SP800-38F / RFC3394 key wrapping
2015-04-28 2:54 ` Herbert Xu
@ 2015-04-28 2:58 ` Stephan Mueller
2015-05-01 3:20 ` Herbert Xu
0 siblings, 1 reply; 20+ messages in thread
From: Stephan Mueller @ 2015-04-28 2:58 UTC (permalink / raw)
To: Herbert Xu; +Cc: linux-crypto
On Tuesday, 28 April 2015 10:54:02 Herbert Xu wrote:
Hi Herbert,
> On Tue, Apr 28, 2015 at 04:45:17AM +0200, Stephan Mueller wrote:
> > The use case I see goes along the lines of dm-crypt and Ext4 crypto, or
> > ecryptfs:
> >
> > For the key wrapping they all do, I am thinking about suggesting KW as it
> > has one advantage no other cipher currently has: it is an authenticated
> > decryption where I still only need one symmetric key. Yes, KW is
> > inefficient compared to other ciphers, but for handling small data blobs,
> > it should be just fine.
> >
> > For example, dm-crypt: dm-crypt currently uses the same cipher used for
> > the
> > bulk encryption to wrap the LUKS header. Obviously we miss the
> > authentication check of the data blob. So, we could use other
> > authenticated schemas, like GCM or authenc(). But they all need either
> > two keys or AAD for which the common mechanisms typically have no
> > provisions. Therefore, KW is a drop-in replacement for standard symmetric
> > ciphers where one wants authentication as well.
>
> If it's for cases where the data is always linear, we could always
> do this outside the crypto API. You can still use AES from the crypto
> API to do the actual crypto of course.
>
> By keeping it out of the crypto API you wouldn't have to worry about
> SG lists and can simply require the input to be linear u8 * buffers.
>
> However, because this is an algorithm that is not otherwise useful
> you'll need to ensure that at least one user is going to be accepted
> into the kernel.
>
> The implementation could go into lib.
Hm, in case of dm-crypt, that is not really possible, because this is fully
driven by user space: libcryptsetup sets up a temporary dm-crypt container for
the LUKS header space. Then user space accesses the data it needs and re-
injects it into the kernel for the bulk encryption dm-crypt component.
--
Ciao
Stephan
* Re: [PATCH v2] SP800-38F / RFC3394 key wrapping
2015-04-28 2:58 ` Stephan Mueller
@ 2015-05-01 3:20 ` Herbert Xu
2015-05-01 7:27 ` Stephan Mueller
0 siblings, 1 reply; 20+ messages in thread
From: Herbert Xu @ 2015-05-01 3:20 UTC (permalink / raw)
To: Stephan Mueller; +Cc: linux-crypto
On Tue, Apr 28, 2015 at 04:58:31AM +0200, Stephan Mueller wrote:
>
> Hm, in case of dm-crypt, that is not really possible, because this is fully
> driven by user space: libcryptsetup sets up a temporary dm-crypt container for
> the LUKS header space. Then user space accesses the data it needs and re-
> injects it into the kernel for the bulk encryption dm-crypt component.
If both user-space and the kernel implement the same algorithm
correctly, why wouldn't it work?
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v2] SP800-38F / RFC3394 key wrapping
2015-05-01 3:20 ` Herbert Xu
@ 2015-05-01 7:27 ` Stephan Mueller
2015-05-01 7:30 ` Herbert Xu
0 siblings, 1 reply; 20+ messages in thread
From: Stephan Mueller @ 2015-05-01 7:27 UTC (permalink / raw)
To: Herbert Xu; +Cc: linux-crypto
On Friday, 1 May 2015 11:20:35 Herbert Xu wrote:
Hi Herbert,
>On Tue, Apr 28, 2015 at 04:58:31AM +0200, Stephan Mueller wrote:
>> Hm, in case of dm-crypt, that is not really possible, because this is fully
>> driven by user space: libcryptsetup sets up a temporary dm-crypt container
>> for the LUKS header space. Then user space accesses the data it needs and
>> re- injects it into the kernel for the bulk encryption dm-crypt component.
>If both user-space and the kernel implement the same algorithm
>correctly, why wouldn't it work?
User space does not use any ciphers to protect the key, that is the
interesting part. The LUKS header will be mapped by a dm-crypt mapping and
then read from user space to access the key. So, userspace does not en/decrypt
the data.
Ciao
Stephan
* Re: [PATCH v2] SP800-38F / RFC3394 key wrapping
2015-05-01 7:27 ` Stephan Mueller
@ 2015-05-01 7:30 ` Herbert Xu
2015-05-01 13:21 ` Stephan Mueller
0 siblings, 1 reply; 20+ messages in thread
From: Herbert Xu @ 2015-05-01 7:30 UTC (permalink / raw)
To: Stephan Mueller; +Cc: linux-crypto
On Fri, May 01, 2015 at 09:27:07AM +0200, Stephan Mueller wrote:
>
> User space does not use any ciphers to protect the key, that is the
> interesting part. The LUKS header will be mapped by a dm-crypt mapping and
> then read from user space to access the key. So, userspace does not en/decrypt
> the data.
So who is doing the encrypting/decrypting in this case?
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v2] SP800-38F / RFC3394 key wrapping
2015-05-01 7:30 ` Herbert Xu
@ 2015-05-01 13:21 ` Stephan Mueller
2015-05-11 9:42 ` Herbert Xu
0 siblings, 1 reply; 20+ messages in thread
From: Stephan Mueller @ 2015-05-01 13:21 UTC (permalink / raw)
To: Herbert Xu; +Cc: linux-crypto
On Friday, 1 May 2015 15:30:36 Herbert Xu wrote:
Hi Herbert,
>
>So who is doing the encrypting/decrypting in this case?
The steps from entering the password until having the full dm-crypt partition
mounted are, assuming that in my example, we use AES256-CBC as cipher:
1. libcryptsetup: asks for the user's password
2. libcryptsetup/libgcrypt perform PBKDF to obtain key P
3. libcryptsetup: create a dm-crypt mapping of the LUKS header with AES256-
CBC(P)
4. libcryptsetup: mount the dm-crypt mapping and read out the master volume
key M
4a. kernel: perform en/decryption of LUKS header with AES256-CBC for the
read/write operations of libcryptsetup
5. libcryptsetup: unmount of dm-crypt mapping
6. libcryptsetup: destroy dm-crypt mapping and forget P
7. libcryptsetup: create dm-crypt mapping of the disk encryption container
holding the user data using AES256-CBC(M) -- this starts at the offset where
the LUKS header ends
8. somebody calls mount to mount the created dm-crypt mapping
9: kernel: perform AES256-CBC operation for subsequent operations on mounted
dm-crypt mapping
My idea would be to use keywrap in step 3.
Ciao
Stephan
* Re: [PATCH v2] SP800-38F / RFC3394 key wrapping
2015-05-01 13:21 ` Stephan Mueller
@ 2015-05-11 9:42 ` Herbert Xu
2015-05-11 10:15 ` Stephan Mueller
0 siblings, 1 reply; 20+ messages in thread
From: Herbert Xu @ 2015-05-11 9:42 UTC (permalink / raw)
To: Stephan Mueller; +Cc: linux-crypto
On Fri, May 01, 2015 at 03:21:19PM +0200, Stephan Mueller wrote:
>
> My idea would be to use keywrap in step 3.
How is dm-crypt going to cope with the increase in ciphertext size?
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v2] SP800-38F / RFC3394 key wrapping
2015-05-11 9:42 ` Herbert Xu
@ 2015-05-11 10:15 ` Stephan Mueller
0 siblings, 0 replies; 20+ messages in thread
From: Stephan Mueller @ 2015-05-11 10:15 UTC (permalink / raw)
To: Herbert Xu; +Cc: linux-crypto
On Monday, 11 May 2015 17:42:12 Herbert Xu wrote:
Hi Herbert,
>On Fri, May 01, 2015 at 03:21:19PM +0200, Stephan Mueller wrote:
>> My idea would be to use keywrap in step 3.
>
>How is dm-crypt going to cope with the increase in ciphertext size?
The LUKS header is not fixed-size, so it would be able to handle the
increased ciphertext size.
But I think I should rather go back and write up the ideas I have for key
handling. Currently it seems that too many components in kernel and user
space handle plaintext keys.
After writing that up, I'd like to present it together with an associated
discussion of how to handle key wrapping, considering that adding it to the
kernel crypto API is not possible at this point.
Thanks for your help.
Ciao
Stephan