* [RFC PATCH v2 0/2] dm-inlinecrypt: add target for inline block device encryption
@ 2024-10-16 23:27 Eric Biggers
2024-10-16 23:27 ` [RFC PATCH v2 1/2] block: export blk-crypto symbols required by dm-inlinecrypt Eric Biggers
2024-10-16 23:27 ` [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption Eric Biggers
0 siblings, 2 replies; 10+ messages in thread
From: Eric Biggers @ 2024-10-16 23:27 UTC (permalink / raw)
To: dm-devel
Cc: linux-block, linux-kernel, Md Sadre Alam, Israel Rukshin,
Milan Broz, Mikulas Patocka
This is yet another proposal for dm-inlinecrypt, this one resulting from
the conversation at
https://lore.kernel.org/linux-block/20240916085741.1636554-2-quic_mdalam@quicinc.com/T/#u.
This brings in the work that was already done in Android's
dm-default-key but drops the passthrough support, as it doesn't seem
like that will go anywhere upstream anytime soon. This makes the
proposal suitable as a replacement for dm-crypt only.
Compared to the other patch linked above, this patch addresses a large
number of issues, the main ones being mentioned at
https://lore.kernel.org/linux-block/20240921185519.GA2187@quark.localdomain/
Changed in v2:
- Split block exports into a separate patch.
- Return the key from STATUSTYPE_TABLE. This is a misfeature, but it's
needed to comply with the device-mapper UAPI.
- Don't pass uninitialized key to blk_crypto_evict_key().
- Use {} instead of {0}.
- Simplify inlinecrypt_prepare_ioctl().
Eric Biggers (2):
block: export blk-crypto symbols required by dm-inlinecrypt
dm-inlinecrypt: add target for inline block device encryption
block/blk-crypto.c | 3 +
drivers/md/Kconfig | 10 +
drivers/md/Makefile | 1 +
drivers/md/dm-inlinecrypt.c | 417 ++++++++++++++++++++++++++++++++++++
4 files changed, 431 insertions(+)
create mode 100644 drivers/md/dm-inlinecrypt.c
base-commit: c964ced7726294d40913f2127c3f185a92cb4a41
--
2.47.0
* [RFC PATCH v2 1/2] block: export blk-crypto symbols required by dm-inlinecrypt
2024-10-16 23:27 [RFC PATCH v2 0/2] dm-inlinecrypt: add target for inline block device encryption Eric Biggers
@ 2024-10-16 23:27 ` Eric Biggers
2024-10-16 23:27 ` [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption Eric Biggers
1 sibling, 0 replies; 10+ messages in thread
From: Eric Biggers @ 2024-10-16 23:27 UTC (permalink / raw)
To: dm-devel
Cc: linux-block, linux-kernel, Md Sadre Alam, Israel Rukshin,
Milan Broz, Mikulas Patocka
From: Eric Biggers <ebiggers@google.com>
bio_crypt_set_ctx(), blk_crypto_init_key(), and
blk_crypto_start_using_key() are needed to use inline encryption; see
Documentation/block/inline-encryption.rst. Export them so that
dm-inlinecrypt can use them. The only reason these weren't exported
before was that inline encryption was previously used only by fs/crypto/
which is built-in code.
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
block/blk-crypto.c | 3 +++
1 file changed, 3 insertions(+)
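For context, the call sequence for a consumer of these exports looks roughly
like the sketch below. It is illustrative only, mirroring the calls made by
dm-inlinecrypt in patch 2 and the flow in Documentation/block/inline-encryption.rst;
raw_key, dun_bytes, data_unit_size, bdev, and bio are assumed to be provided by
the caller:

	struct blk_crypto_key key;
	u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = {};
	int err;

	/* One-time setup: describe the key, then declare its use on the bdev. */
	err = blk_crypto_init_key(&key, raw_key, BLK_ENCRYPTION_MODE_AES_256_XTS,
				  dun_bytes, data_unit_size);
	if (err)
		return err;
	err = blk_crypto_start_using_key(bdev, &key);
	if (err)
		return err;

	/* Per bio: attach an encryption context before submitting. */
	bio_crypt_set_ctx(bio, &key, dun, GFP_NOIO);
	submit_bio(bio);

	/* Teardown: evict the key from the device's keyslots. */
	blk_crypto_evict_key(bdev, &key);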
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 4d760b092deb9..5a121b1292f9a 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -104,10 +104,11 @@ void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key,
bc->bc_key = key;
memcpy(bc->bc_dun, dun, sizeof(bc->bc_dun));
bio->bi_crypt_context = bc;
}
+EXPORT_SYMBOL_GPL(bio_crypt_set_ctx);
void __bio_crypt_free_ctx(struct bio *bio)
{
mempool_free(bio->bi_crypt_context, bio_crypt_ctx_pool);
bio->bi_crypt_context = NULL;
@@ -354,10 +355,11 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
blk_key->size = mode->keysize;
memcpy(blk_key->raw, raw_key, mode->keysize);
return 0;
}
+EXPORT_SYMBOL_GPL(blk_crypto_init_key);
bool blk_crypto_config_supported_natively(struct block_device *bdev,
const struct blk_crypto_config *cfg)
{
return __blk_crypto_cfg_supported(bdev_get_queue(bdev)->crypto_profile,
@@ -396,10 +398,11 @@ int blk_crypto_start_using_key(struct block_device *bdev,
{
if (blk_crypto_config_supported_natively(bdev, &key->crypto_cfg))
return 0;
return blk_crypto_fallback_start_using_mode(key->crypto_cfg.crypto_mode);
}
+EXPORT_SYMBOL_GPL(blk_crypto_start_using_key);
/**
* blk_crypto_evict_key() - Evict a blk_crypto_key from a block_device
* @bdev: a block_device on which I/O using the key may have been done
* @key: the key to evict
--
2.47.0
* [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption
2024-10-16 23:27 [RFC PATCH v2 0/2] dm-inlinecrypt: add target for inline block device encryption Eric Biggers
2024-10-16 23:27 ` [RFC PATCH v2 1/2] block: export blk-crypto symbols required by dm-inlinecrypt Eric Biggers
@ 2024-10-16 23:27 ` Eric Biggers
2024-10-17 19:44 ` Eric Biggers
2024-10-18 2:52 ` Adrian Vovk
1 sibling, 2 replies; 10+ messages in thread
From: Eric Biggers @ 2024-10-16 23:27 UTC (permalink / raw)
To: dm-devel
Cc: linux-block, linux-kernel, Md Sadre Alam, Israel Rukshin,
Milan Broz, Mikulas Patocka
From: Eric Biggers <ebiggers@google.com>
Add a new device-mapper target "dm-inlinecrypt" that is similar to
dm-crypt but uses the blk-crypto API instead of the regular crypto API.
This allows it to take advantage of inline encryption hardware such as
that commonly built into UFS host controllers.
The table syntax matches dm-crypt's, but for now only a stripped-down
set of parameters is supported. For example, for now AES-256-XTS is the
only supported cipher.
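For illustration, a table line for a hypothetical 4 GiB device using a
4096-byte crypto sector size would look something like this (the key field is
a placeholder for 128 hex characters, and the device path is made up):

    0 8388608 inlinecrypt aes-xts-plain64 <128 hex chars> 0 /dev/sdb 0 3 allow_discards sector_size:4096 iv_large_sectors

It is loaded with dmsetup in the same way as a dm-crypt table.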
dm-inlinecrypt is based on Android's dm-default-key with the
controversial passthrough support removed. Note that due to the removal
of passthrough support, use of dm-inlinecrypt in combination with
fscrypt causes double encryption of file contents (similar to dm-crypt +
fscrypt), with the fscrypt layer not being able to use the inline
encryption hardware. This makes dm-inlinecrypt unusable on systems such
as Android that use fscrypt and where a more optimized approach is
needed. It is however suitable as a replacement for dm-crypt.
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
drivers/md/Kconfig | 10 +
drivers/md/Makefile | 1 +
drivers/md/dm-inlinecrypt.c | 417 ++++++++++++++++++++++++++++++++++++
3 files changed, 428 insertions(+)
create mode 100644 drivers/md/dm-inlinecrypt.c
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 1e9db8e4acdf6..caed60097a216 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -268,10 +268,20 @@ config DM_CRYPT
To compile this code as a module, choose M here: the module will
be called dm-crypt.
If unsure, say N.
+config DM_INLINECRYPT
+ tristate "Inline encryption target support"
+ depends on BLK_DEV_DM
+ depends on BLK_INLINE_ENCRYPTION
+ help
+ This device-mapper target is similar to dm-crypt, but it uses the
+ blk-crypto API instead of the regular crypto API. This allows it to
+ take advantage of inline encryption hardware such as that commonly
+ built into UFS host controllers.
+
config DM_SNAPSHOT
tristate "Snapshot target"
depends on BLK_DEV_DM
select DM_BUFIO
help
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index 476a214e4bdc2..6e44ff32b8af8 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -49,10 +49,11 @@ obj-$(CONFIG_BLK_DEV_DM) += dm-mod.o
obj-$(CONFIG_BLK_DEV_DM_BUILTIN) += dm-builtin.o
obj-$(CONFIG_DM_UNSTRIPED) += dm-unstripe.o
obj-$(CONFIG_DM_BUFIO) += dm-bufio.o
obj-$(CONFIG_DM_BIO_PRISON) += dm-bio-prison.o
obj-$(CONFIG_DM_CRYPT) += dm-crypt.o
+obj-$(CONFIG_DM_INLINECRYPT) += dm-inlinecrypt.o
obj-$(CONFIG_DM_DELAY) += dm-delay.o
obj-$(CONFIG_DM_DUST) += dm-dust.o
obj-$(CONFIG_DM_FLAKEY) += dm-flakey.o
obj-$(CONFIG_DM_MULTIPATH) += dm-multipath.o dm-round-robin.o
obj-$(CONFIG_DM_MULTIPATH_QL) += dm-queue-length.o
diff --git a/drivers/md/dm-inlinecrypt.c b/drivers/md/dm-inlinecrypt.c
new file mode 100644
index 0000000000000..d6c2e6e1fdbbb
--- /dev/null
+++ b/drivers/md/dm-inlinecrypt.c
@@ -0,0 +1,417 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2024 Google LLC
+ */
+
+#include <linux/blk-crypto.h>
+#include <linux/device-mapper.h>
+#include <linux/module.h>
+
+#define DM_MSG_PREFIX "inlinecrypt"
+
+static const struct dm_inlinecrypt_cipher {
+ const char *name;
+ enum blk_crypto_mode_num mode_num;
+ int key_size;
+} dm_inlinecrypt_ciphers[] = {
+ {
+ .name = "aes-xts-plain64",
+ .mode_num = BLK_ENCRYPTION_MODE_AES_256_XTS,
+ .key_size = 64,
+ },
+};
+
+/**
+ * struct inlinecrypt_ctx - private data of an inlinecrypt target
+ * @dev: the underlying device
+ * @start: starting sector of the range of @dev which this target actually maps.
+ * For this purpose a "sector" is 512 bytes.
+ * @cipher_string: the name of the encryption algorithm being used
+ * @iv_offset: starting offset for IVs. IVs are generated as if the target were
+ * preceded by @iv_offset 512-byte sectors.
+ * @sector_size: crypto sector size in bytes (usually 4096)
+ * @sector_bits: log2(sector_size)
+ * @key: the encryption key to use
+ * @max_dun: the maximum DUN that may be used (computed from other params)
+ */
+struct inlinecrypt_ctx {
+ struct dm_dev *dev;
+ sector_t start;
+ const char *cipher_string;
+ u64 iv_offset;
+ unsigned int sector_size;
+ unsigned int sector_bits;
+ struct blk_crypto_key key;
+ u64 max_dun;
+};
+
+static const struct dm_inlinecrypt_cipher *
+lookup_cipher(const char *cipher_string)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dm_inlinecrypt_ciphers); i++) {
+ if (strcmp(cipher_string, dm_inlinecrypt_ciphers[i].name) == 0)
+ return &dm_inlinecrypt_ciphers[i];
+ }
+ return NULL;
+}
+
+static void inlinecrypt_dtr(struct dm_target *ti)
+{
+ struct inlinecrypt_ctx *ctx = ti->private;
+
+ if (ctx->dev) {
+ if (ctx->key.size)
+ blk_crypto_evict_key(ctx->dev->bdev, &ctx->key);
+ dm_put_device(ti, ctx->dev);
+ }
+ kfree_sensitive(ctx->cipher_string);
+ kfree_sensitive(ctx);
+}
+
+static int inlinecrypt_ctr_optional(struct dm_target *ti,
+ unsigned int argc, char **argv)
+{
+ struct inlinecrypt_ctx *ctx = ti->private;
+ struct dm_arg_set as;
+ static const struct dm_arg _args[] = {
+ {0, 3, "Invalid number of feature args"},
+ };
+ unsigned int opt_params;
+ const char *opt_string;
+ bool iv_large_sectors = false;
+ char dummy;
+ int err;
+
+ as.argc = argc;
+ as.argv = argv;
+
+ err = dm_read_arg_group(_args, &as, &opt_params, &ti->error);
+ if (err)
+ return err;
+
+ while (opt_params--) {
+ opt_string = dm_shift_arg(&as);
+ if (!opt_string) {
+ ti->error = "Not enough feature arguments";
+ return -EINVAL;
+ }
+ if (!strcmp(opt_string, "allow_discards")) {
+ ti->num_discard_bios = 1;
+ } else if (sscanf(opt_string, "sector_size:%u%c",
+ &ctx->sector_size, &dummy) == 1) {
+ if (ctx->sector_size < SECTOR_SIZE ||
+ ctx->sector_size > 4096 ||
+ !is_power_of_2(ctx->sector_size)) {
+ ti->error = "Invalid sector_size";
+ return -EINVAL;
+ }
+ } else if (!strcmp(opt_string, "iv_large_sectors")) {
+ iv_large_sectors = true;
+ } else {
+ ti->error = "Invalid feature arguments";
+ return -EINVAL;
+ }
+ }
+
+ /* dm-inlinecrypt doesn't implement iv_large_sectors=false. */
+ if (ctx->sector_size != SECTOR_SIZE && !iv_large_sectors) {
+ ti->error = "iv_large_sectors must be specified";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/*
+ * Construct an inlinecrypt mapping:
+ * <cipher> <key> <iv_offset> <dev_path> <start>
+ *
+ * This syntax matches dm-crypt's, but the set of supported functionality has
+ * been stripped down.
+ */
+static int inlinecrypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+ struct inlinecrypt_ctx *ctx;
+ const struct dm_inlinecrypt_cipher *cipher;
+ u8 raw_key[BLK_CRYPTO_MAX_KEY_SIZE];
+ unsigned int dun_bytes;
+ unsigned long long tmpll;
+ char dummy;
+ int err;
+
+ if (argc < 5) {
+ ti->error = "Not enough arguments";
+ return -EINVAL;
+ }
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx) {
+ ti->error = "Out of memory";
+ return -ENOMEM;
+ }
+ ti->private = ctx;
+
+ /* <cipher> */
+ ctx->cipher_string = kstrdup(argv[0], GFP_KERNEL);
+ if (!ctx->cipher_string) {
+ ti->error = "Out of memory";
+ err = -ENOMEM;
+ goto bad;
+ }
+ cipher = lookup_cipher(ctx->cipher_string);
+ if (!cipher) {
+ ti->error = "Unsupported cipher";
+ err = -EINVAL;
+ goto bad;
+ }
+
+ /* <key> */
+ if (strlen(argv[1]) != 2 * cipher->key_size) {
+ ti->error = "Incorrect key size for cipher";
+ err = -EINVAL;
+ goto bad;
+ }
+ if (hex2bin(raw_key, argv[1], cipher->key_size) != 0) {
+ ti->error = "Malformed key string";
+ err = -EINVAL;
+ goto bad;
+ }
+
+ /* <iv_offset> */
+ if (sscanf(argv[2], "%llu%c", &ctx->iv_offset, &dummy) != 1) {
+ ti->error = "Invalid iv_offset sector";
+ err = -EINVAL;
+ goto bad;
+ }
+
+ /* <dev_path> */
+ err = dm_get_device(ti, argv[3], dm_table_get_mode(ti->table),
+ &ctx->dev);
+ if (err) {
+ ti->error = "Device lookup failed";
+ goto bad;
+ }
+
+ /* <start> */
+ if (sscanf(argv[4], "%llu%c", &tmpll, &dummy) != 1 ||
+ tmpll != (sector_t)tmpll) {
+ ti->error = "Invalid start sector";
+ err = -EINVAL;
+ goto bad;
+ }
+ ctx->start = tmpll;
+
+ /* optional arguments */
+ ctx->sector_size = SECTOR_SIZE;
+ if (argc > 5) {
+ err = inlinecrypt_ctr_optional(ti, argc - 5, &argv[5]);
+ if (err)
+ goto bad;
+ }
+ ctx->sector_bits = ilog2(ctx->sector_size);
+ if (ti->len & ((ctx->sector_size >> SECTOR_SHIFT) - 1)) {
+ ti->error = "Device size is not a multiple of sector_size";
+ err = -EINVAL;
+ goto bad;
+ }
+
+ ctx->max_dun = (ctx->iv_offset + ti->len - 1) >>
+ (ctx->sector_bits - SECTOR_SHIFT);
+ dun_bytes = DIV_ROUND_UP(fls64(ctx->max_dun), 8);
+
+ err = blk_crypto_init_key(&ctx->key, raw_key, cipher->mode_num,
+ dun_bytes, ctx->sector_size);
+ if (err) {
+ ti->error = "Error initializing blk-crypto key";
+ goto bad;
+ }
+
+ err = blk_crypto_start_using_key(ctx->dev->bdev, &ctx->key);
+ if (err) {
+ ti->error = "Error starting to use blk-crypto";
+ goto bad;
+ }
+
+ ti->num_flush_bios = 1;
+
+ err = 0;
+ goto out;
+
+bad:
+ inlinecrypt_dtr(ti);
+out:
+ memzero_explicit(raw_key, sizeof(raw_key));
+ return err;
+}
+
+static int inlinecrypt_map(struct dm_target *ti, struct bio *bio)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+ sector_t sector_in_target;
+ u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = {};
+
+ bio_set_dev(bio, ctx->dev->bdev);
+
+ /*
+ * If the bio is a device-level request which doesn't target a specific
+ * sector, there's nothing more to do.
+ */
+ if (bio_sectors(bio) == 0)
+ return DM_MAPIO_REMAPPED;
+
+ /*
+ * The bio should never have an encryption context already, since
+ * dm-inlinecrypt doesn't pass through any inline encryption
+ * capabilities to the layer above it.
+ */
+ if (WARN_ON_ONCE(bio_has_crypt_ctx(bio)))
+ return DM_MAPIO_KILL;
+
+ /* Map the bio's sector to the underlying device. (512-byte sectors) */
+ sector_in_target = dm_target_offset(ti, bio->bi_iter.bi_sector);
+ bio->bi_iter.bi_sector = ctx->start + sector_in_target;
+
+ /* Calculate the DUN and enforce data-unit (crypto sector) alignment. */
+ dun[0] = ctx->iv_offset + sector_in_target; /* 512-byte sectors */
+ if (dun[0] & ((ctx->sector_size >> SECTOR_SHIFT) - 1))
+ return DM_MAPIO_KILL;
+ dun[0] >>= ctx->sector_bits - SECTOR_SHIFT; /* crypto sectors */
+
+ /*
+ * This check isn't necessary as we should have calculated max_dun
+ * correctly, but be safe.
+ */
+ if (WARN_ON_ONCE(dun[0] > ctx->max_dun))
+ return DM_MAPIO_KILL;
+
+ bio_crypt_set_ctx(bio, &ctx->key, dun, GFP_NOIO);
+
+ return DM_MAPIO_REMAPPED;
+}
+
+static void inlinecrypt_status(struct dm_target *ti, status_type_t type,
+ unsigned int status_flags, char *result,
+ unsigned int maxlen)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+ unsigned int sz = 0;
+ int num_feature_args = 0;
+
+ switch (type) {
+ case STATUSTYPE_INFO:
+ case STATUSTYPE_IMA:
+ result[0] = '\0';
+ break;
+
+ case STATUSTYPE_TABLE:
+ /*
+ * Warning: like dm-crypt, dm-inlinecrypt includes the key in
+ * the returned table. Userspace is responsible for redacting
+ * the key when needed.
+ */
+ DMEMIT("%s %*phN %llu %s %llu", ctx->cipher_string,
+ ctx->key.size, ctx->key.raw, ctx->iv_offset,
+ ctx->dev->name, ctx->start);
+ num_feature_args += !!ti->num_discard_bios;
+ if (ctx->sector_size != SECTOR_SIZE)
+ num_feature_args += 2;
+ if (num_feature_args != 0) {
+ DMEMIT(" %d", num_feature_args);
+ if (ti->num_discard_bios)
+ DMEMIT(" allow_discards");
+ if (ctx->sector_size != SECTOR_SIZE) {
+ DMEMIT(" sector_size:%u", ctx->sector_size);
+ DMEMIT(" iv_large_sectors");
+ }
+ }
+ break;
+ }
+}
+
+static int inlinecrypt_prepare_ioctl(struct dm_target *ti,
+ struct block_device **bdev)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+ const struct dm_dev *dev = ctx->dev;
+
+ *bdev = dev->bdev;
+
+ /* Only pass ioctls through if the device sizes match exactly. */
+ return ctx->start != 0 || ti->len != bdev_nr_sectors(dev->bdev);
+}
+
+static int inlinecrypt_iterate_devices(struct dm_target *ti,
+ iterate_devices_callout_fn fn,
+ void *data)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+
+ return fn(ti, ctx->dev, ctx->start, ti->len, data);
+}
+
+#ifdef CONFIG_BLK_DEV_ZONED
+static int inlinecrypt_report_zones(struct dm_target *ti,
+ struct dm_report_zones_args *args,
+ unsigned int nr_zones)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+
+ return dm_report_zones(ctx->dev->bdev, ctx->start,
+ ctx->start + dm_target_offset(ti, args->next_sector),
+ args, nr_zones);
+}
+#else
+#define inlinecrypt_report_zones NULL
+#endif
+
+static void inlinecrypt_io_hints(struct dm_target *ti,
+ struct queue_limits *limits)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+ const unsigned int sector_size = ctx->sector_size;
+
+ limits->logical_block_size =
+ max_t(unsigned int, limits->logical_block_size, sector_size);
+ limits->physical_block_size =
+ max_t(unsigned int, limits->physical_block_size, sector_size);
+ limits->io_min = max_t(unsigned int, limits->io_min, sector_size);
+ limits->dma_alignment = limits->logical_block_size - 1;
+}
+
+static struct target_type inlinecrypt_target = {
+ .name = "inlinecrypt",
+ .version = {1, 0, 0},
+ /*
+ * Do not set DM_TARGET_PASSES_CRYPTO, since dm-inlinecrypt consumes the
+ * crypto capability itself.
+ */
+ .features = DM_TARGET_ZONED_HM,
+ .module = THIS_MODULE,
+ .ctr = inlinecrypt_ctr,
+ .dtr = inlinecrypt_dtr,
+ .map = inlinecrypt_map,
+ .status = inlinecrypt_status,
+ .prepare_ioctl = inlinecrypt_prepare_ioctl,
+ .iterate_devices = inlinecrypt_iterate_devices,
+ .report_zones = inlinecrypt_report_zones,
+ .io_hints = inlinecrypt_io_hints,
+};
+
+static int __init dm_inlinecrypt_init(void)
+{
+ return dm_register_target(&inlinecrypt_target);
+}
+
+static void __exit dm_inlinecrypt_exit(void)
+{
+ dm_unregister_target(&inlinecrypt_target);
+}
+
+module_init(dm_inlinecrypt_init);
+module_exit(dm_inlinecrypt_exit);
+
+MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+MODULE_DESCRIPTION(DM_NAME " target for inline encryption");
+MODULE_LICENSE("GPL");
--
2.47.0
* Re: [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption
2024-10-16 23:27 ` [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption Eric Biggers
@ 2024-10-17 19:44 ` Eric Biggers
2024-10-17 20:17 ` Milan Broz
2024-10-18 2:52 ` Adrian Vovk
1 sibling, 1 reply; 10+ messages in thread
From: Eric Biggers @ 2024-10-17 19:44 UTC (permalink / raw)
To: dm-devel
Cc: linux-block, linux-kernel, Md Sadre Alam, Israel Rukshin,
Milan Broz, Mikulas Patocka
On Wed, Oct 16, 2024 at 04:27:48PM -0700, Eric Biggers wrote:
> Add a new device-mapper target "dm-inlinecrypt" that is similar to
> dm-crypt but uses the blk-crypto API instead of the regular crypto API.
> This allows it to take advantage of inline encryption hardware such as
> that commonly built into UFS host controllers.
A slight difference in behavior vs. dm-crypt that I just became aware of:
dm-crypt allows XTS keys whose first half equals the second half, i.e.
cipher key == tweak key. dm-inlinecrypt typically will not allow this. Inline
encryption hardware typically rejects such keys, and blk-crypto-fallback rejects
them too because it uses CRYPTO_TFM_REQ_FORBID_WEAK_KEYS.
IMO, rejecting these weak keys is desirable, and the fact that dm-inlinecrypt
fixes this issue with dm-crypt will just need to be documented.
- Eric
* Re: [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption
2024-10-17 19:44 ` Eric Biggers
@ 2024-10-17 20:17 ` Milan Broz
2024-10-17 20:28 ` Eric Biggers
0 siblings, 1 reply; 10+ messages in thread
From: Milan Broz @ 2024-10-17 20:17 UTC (permalink / raw)
To: Eric Biggers, dm-devel
Cc: linux-block, linux-kernel, Md Sadre Alam, Israel Rukshin,
Mikulas Patocka
On 10/17/24 9:44 PM, Eric Biggers wrote:
> On Wed, Oct 16, 2024 at 04:27:48PM -0700, Eric Biggers wrote:
>> Add a new device-mapper target "dm-inlinecrypt" that is similar to
>> dm-crypt but uses the blk-crypto API instead of the regular crypto API.
>> This allows it to take advantage of inline encryption hardware such as
>> that commonly built into UFS host controllers.
>
> A slight difference in behavior vs. dm-crypt that I just became aware of:
> dm-crypt allows XTS keys whose first half equals the second half, i.e.
> cipher key == tweak key. dm-inlinecrypt typically will not allow this. Inline
> encryption hardware typically rejects such keys, and blk-crypto-fallback rejects
> them too because it uses CRYPTO_TFM_REQ_FORBID_WEAK_KEYS.
>
> IMO, rejecting these weak keys is desirable, and the fact that dm-inlinecrypt
> fixes this issue with dm-crypt will just need to be documented.
Hm, I thought this was already rejected in the crypto API (at least in FIPS mode)...
It should be rejected exactly as you described, even for dm-crypt;
the check should just (IMO) be part of the crypto API (setkey), not dm-crypt itself.
And here I think we should not be backward "compatible", as it is a security issue:
the two XTS keys simply must not be the same.
Milan
* Re: [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption
2024-10-17 20:17 ` Milan Broz
@ 2024-10-17 20:28 ` Eric Biggers
2024-10-17 21:03 ` Milan Broz
0 siblings, 1 reply; 10+ messages in thread
From: Eric Biggers @ 2024-10-17 20:28 UTC (permalink / raw)
To: Milan Broz
Cc: dm-devel, linux-block, linux-kernel, Md Sadre Alam,
Israel Rukshin, Mikulas Patocka
On Thu, Oct 17, 2024 at 10:17:04PM +0200, Milan Broz wrote:
> On 10/17/24 9:44 PM, Eric Biggers wrote:
> > On Wed, Oct 16, 2024 at 04:27:48PM -0700, Eric Biggers wrote:
> > > Add a new device-mapper target "dm-inlinecrypt" that is similar to
> > > dm-crypt but uses the blk-crypto API instead of the regular crypto API.
> > > This allows it to take advantage of inline encryption hardware such as
> > > that commonly built into UFS host controllers.
> >
> > A slight difference in behavior vs. dm-crypt that I just became aware of:
> > dm-crypt allows XTS keys whose first half equals the second half, i.e.
> > cipher key == tweak key. dm-inlinecrypt typically will not allow this. Inline
> > encryption hardware typically rejects such keys, and blk-crypto-fallback rejects
> > them too because it uses CRYPTO_TFM_REQ_FORBID_WEAK_KEYS.
> >
> > IMO, rejecting these weak keys is desirable, and the fact that dm-inlinecrypt
> > fixes this issue with dm-crypt will just need to be documented.
>
> Hm, I thought this was already rejected in the crypto API (at least in FIPS mode)...
>
> It should be rejected exactly as you described, even for dm-crypt;
> the check should just (IMO) be part of the crypto API (setkey), not dm-crypt itself.
>
> And here I think we should not be backward "compatible", as it is a security issue:
> the two XTS keys simply must not be the same.
>
In "FIPS mode" such keys are always rejected, but otherwise it is opt-in via the
flag CRYPTO_TFM_REQ_FORBID_WEAK_KEYS. dm-crypt doesn't use that flag.
We could certainly try to fix that in dm-crypt, though I expect that some
dm-crypt users have started relying on such keys. It is a common misconception
that XTS is secure when the two halves of the key are the same.
- Eric
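For reference, the opt-in mentioned above looks roughly like this for a kernel
skcipher user (a sketch only, not code from dm-crypt or this series; key is
assumed to hold the 64-byte XTS key):

	struct crypto_skcipher *tfm;
	int err;

	tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* Without this flag (and outside FIPS mode), xts(aes) accepts key1 == key2. */
	crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);

	/* Now a key whose two halves are identical is rejected with -EINVAL. */
	err = crypto_skcipher_setkey(tfm, key, 64);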
* Re: [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption
2024-10-17 20:28 ` Eric Biggers
@ 2024-10-17 21:03 ` Milan Broz
0 siblings, 0 replies; 10+ messages in thread
From: Milan Broz @ 2024-10-17 21:03 UTC (permalink / raw)
To: Eric Biggers
Cc: dm-devel, linux-block, linux-kernel, Md Sadre Alam,
Israel Rukshin, Mikulas Patocka
On 10/17/24 10:28 PM, Eric Biggers wrote:
> On Thu, Oct 17, 2024 at 10:17:04PM +0200, Milan Broz wrote:
>> On 10/17/24 9:44 PM, Eric Biggers wrote:
>>> On Wed, Oct 16, 2024 at 04:27:48PM -0700, Eric Biggers wrote:
>>>> Add a new device-mapper target "dm-inlinecrypt" that is similar to
>>>> dm-crypt but uses the blk-crypto API instead of the regular crypto API.
>>>> This allows it to take advantage of inline encryption hardware such as
>>>> that commonly built into UFS host controllers.
>>>
>>> A slight difference in behavior vs. dm-crypt that I just became aware of:
>>> dm-crypt allows XTS keys whose first half equals the second half, i.e.
>>> cipher key == tweak key. dm-inlinecrypt typically will not allow this. Inline
>>> encryption hardware typically rejects such keys, and blk-crypto-fallback rejects
>>> them too because it uses CRYPTO_TFM_REQ_FORBID_WEAK_KEYS.
>>>
>>> IMO, rejecting these weak keys is desirable, and the fact that dm-inlinecrypt
>>> fixes this issue with dm-crypt will just need to be documented.
>>
>> Hm, I thought this was already rejected in the crypto API (at least in FIPS mode)...
>>
>> It should be rejected exactly as you described, even for dm-crypt;
>> the check should just (IMO) be part of the crypto API (setkey), not dm-crypt itself.
>>
>> And here I think we should not be backward "compatible", as it is a security issue:
>> the two XTS keys simply must not be the same.
>>
>
> In "FIPS mode" such keys are always rejected, but otherwise it is opt-in via the
> flag CRYPTO_TFM_REQ_FORBID_WEAK_KEYS. dm-crypt doesn't use that flag.
>
> We could certainly try to fix that in dm-crypt, though I expect that some
> dm-crypt users have started relying on such keys. It is a common misconception
> that XTS is secure when the two halves of the key are the same.
Ah, ok, I missed that weak-keys flag.
We never did that in cryptsetup (including LUKS and plain mode with hashed passwords),
with the exception of the benchmark (where it was not a real key, just all zeroes,
and that was fixed years ago as it did not work in FIPS) -- and obviously the case where
the user sets the key explicitly.
The same check is now in VeraCrypt (which uses dm-crypt on Linux).
I know about several broken HW crypto implementations, but actually no dm-crypt users.
IMO we should set CRYPTO_TFM_REQ_FORBID_WEAK_KEYS by default. We can always introduce a
flag to disable it, but I would really like to know if there are any real users...
Milan
* Re: [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption
2024-10-16 23:27 ` [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption Eric Biggers
2024-10-17 19:44 ` Eric Biggers
@ 2024-10-18 2:52 ` Adrian Vovk
2024-10-18 17:33 ` Eric Biggers
2025-07-11 5:30 ` Md Sadre Alam
1 sibling, 2 replies; 10+ messages in thread
From: Adrian Vovk @ 2024-10-18 2:52 UTC (permalink / raw)
To: Eric Biggers, dm-devel
Cc: linux-block, linux-kernel, Md Sadre Alam, Israel Rukshin,
Milan Broz, Mikulas Patocka
Hello,
On 10/16/24 19:27, Eric Biggers wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> Add a new device-mapper target "dm-inlinecrypt" that is similar to
> dm-crypt but uses the blk-crypto API instead of the regular crypto API.
> This allows it to take advantage of inline encryption hardware such as
> that commonly built into UFS host controllers.
Wow! Thank you for this patch! This is something we really want in
systemd and in GNOME, and I came across this patchset by accident while
trying to find a way to get someone to work on it for us.
> The table syntax matches dm-crypt's, but for now only a stripped-down
> set of parameters is supported. For example, for now AES-256-XTS is the
> only supported cipher.
>
> dm-inlinecrypt is based on Android's dm-default-key with the
> controversial passthrough support removed.
That's quite unfortunate. I'd say this passthrough support is probably
the most powerful thing about dm-default-key.
Could you elaborate on why you removed it? I think it's a necessary
feature. Enabling multiple layers of encryption stacked on top of each
other, without paying the cost of double (or more...) encryption, is
something we really want over in userspace. It'll enable us to stack
full-disk encryption with encrypted /home directories for each user and
then stack encrypted per-app data dirs on top of that. I'd say that the
passthrough support is more important for us than the performance
benefit of using inline encryption hardware.
Without a solution to layer encryption we can't do a lot of good things
we want to do in userspace.
As far as I understand, the passthrough support was controversial only
because previous upstreaming attempts tried to punch holes into dm-crypt
to make this work. I don't know of an attempt to upstream dm-default-key
as-is, with passthrough support. As far as I can tell, people even
seemed open to that idea...
Is there some context I'm missing? Information would be appreciated.
> Note that due to the removal
> of passthrough support, use of dm-inlinecrypt in combination with
> fscrypt causes double encryption of file contents (similar to dm-crypt +
> fscrypt), with the fscrypt layer not being able to use the inline
> encryption hardware. This makes dm-inlinecrypt unusable on systems such
> as Android that use fscrypt and where a more optimized approach is
> needed.
Yeah, sadly you've removed the use-case we're very interested in.
Generic PC hardware doesn't have blk-crypto hardware as far as I can
tell, so we don't get the performance benefit of replacing dm-crypt with
dm-inlinecrypt. However, we do get the performance benefit of the
passthrough support (if we run this on top of the blk-crypto software
emulation mode, for instance).
Best,
Adrian Vovk
* Re: [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption
2024-10-18 2:52 ` Adrian Vovk
@ 2024-10-18 17:33 ` Eric Biggers
2025-07-11 5:30 ` Md Sadre Alam
1 sibling, 0 replies; 10+ messages in thread
From: Eric Biggers @ 2024-10-18 17:33 UTC (permalink / raw)
To: Adrian Vovk
Cc: dm-devel, linux-block, linux-kernel, Md Sadre Alam,
Israel Rukshin, Milan Broz, Mikulas Patocka
On Thu, Oct 17, 2024 at 10:52:42PM -0400, Adrian Vovk wrote:
> Hello,
>
> On 10/16/24 19:27, Eric Biggers wrote:
> > From: Eric Biggers <ebiggers@google.com>
> >
> > Add a new device-mapper target "dm-inlinecrypt" that is similar to
> > dm-crypt but uses the blk-crypto API instead of the regular crypto API.
> > This allows it to take advantage of inline encryption hardware such as
> > that commonly built into UFS host controllers.
>
> Wow! Thank you for this patch! This is something we really want in systemd
> and in GNOME, and I came across this patchset on accident while trying to
> find a way to get someone to work on it for us.
>
> > The table syntax matches dm-crypt's, but for now only a stripped-down
> > set of parameters is supported. For example, for now AES-256-XTS is the
> > only supported cipher.
> >
> > dm-inlinecrypt is based on Android's dm-default-key with the
> > controversial passthrough support removed.
>
> That's quite unfortunate. I'd say this passthrough support is probably the
> most powerful thing about dm-default-key.
>
> Could you elaborate on why you removed it?
Well, see the conversation you've been having with Christoph. It seems the
passthrough / "metadata encryption" support will need to be reworked into a
filesystem native feature in order to have a chance at upstream.
Meanwhile there are people who have been waiting for basically "dm-crypt with
blk-crypto", without the passthrough support. I hoped that I could help with
that in the meantime -- hence this patchset.
I'll go ahead and send out the full dm-default-key patches too as an RFC just so
that people can see what it involves exactly. But be warned -- it is a bit ugly
with changes in multiple layers (not just the dm target itself).
- Eric
* Re: [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption
2024-10-18 2:52 ` Adrian Vovk
2024-10-18 17:33 ` Eric Biggers
@ 2025-07-11 5:30 ` Md Sadre Alam
1 sibling, 0 replies; 10+ messages in thread
From: Md Sadre Alam @ 2025-07-11 5:30 UTC (permalink / raw)
To: Adrian Vovk, Eric Biggers, dm-devel
Cc: linux-block, linux-kernel, Israel Rukshin, Milan Broz,
Mikulas Patocka
Hi Eric,
On 10/18/2024 8:22 AM, Adrian Vovk wrote:
> Hello,
>
> On 10/16/24 19:27, Eric Biggers wrote:
>> From: Eric Biggers <ebiggers@google.com>
>>
>> Add a new device-mapper target "dm-inlinecrypt" that is similar to
>> dm-crypt but uses the blk-crypto API instead of the regular crypto API.
>> This allows it to take advantage of inline encryption hardware such as
>> that commonly built into UFS host controllers.
> [...]
Is there any plan to update this series? IPQ9574 needs this
dm-inlinecrypt driver for the secure eMMC feature.
Thanks,
Alam.
Thread overview (10 messages):
2024-10-16 23:27 [RFC PATCH v2 0/2] dm-inlinecrypt: add target for inline block device encryption Eric Biggers
2024-10-16 23:27 ` [RFC PATCH v2 1/2] block: export blk-crypto symbols required by dm-inlinecrypt Eric Biggers
2024-10-16 23:27 ` [RFC PATCH v2 2/2] dm-inlinecrypt: add target for inline block device encryption Eric Biggers
2024-10-17 19:44 ` Eric Biggers
2024-10-17 20:17 ` Milan Broz
2024-10-17 20:28 ` Eric Biggers
2024-10-17 21:03 ` Milan Broz
2024-10-18 2:52 ` Adrian Vovk
2024-10-18 17:33 ` Eric Biggers
2025-07-11 5:30 ` Md Sadre Alam