* [PATCH v2 0/3] dm-inlinecrypt: add target for inline block device encryption
@ 2026-04-10 13:40 Linlin Zhang
2026-04-10 13:40 ` [PATCH v2 1/3] block: export blk-crypto symbols required by dm-inlinecrypt Linlin Zhang
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Linlin Zhang @ 2026-04-10 13:40 UTC (permalink / raw)
To: linux-block, ebiggers, mpatocka, gmazyland
Cc: linux-kernel, adrianvovk, dm-devel, quic_mdalam, israelr, hch,
axboe
This patch series is based on Eric's work posted at:
https://lore.kernel.org/all/ecbb7ea8-11f6-30c1-ad77-bd984c52ca33@quicinc.com/
Eric's patches introduce a new dm target, dm-inlinecrypt, to support inline
block-device encryption. The implementation builds on the work previously done
in Android's dm-default-key, but intentionally drops passthrough support,
as that functionality does not appear likely to be accepted upstream in the
near future. With this limitation, dm-inlinecrypt is positioned as a
practical replacement for dm-crypt, rather than a general passthrough
mechanism.
On top of Eric's series, keyring key support is added to dm-inlinecrypt, so
both keyring keys and hex keys can be used. In addition, dm-inlinecrypt.rst
is added as the user guide for dm-inlinecrypt.
V1:
https://lore.kernel.org/all/20260304121729.1532469-1-linlin.zhang@oss.qualcomm.com/
Eric Biggers (2):
block: export blk-crypto symbols required by dm-inlinecrypt
dm-inlinecrypt: add target for inline block device encryption
Linlin Zhang (1):
dm: add documentation for dm-inlinecrypt target
.../device-mapper/dm-inlinecrypt.rst | 122 ++++
block/blk-crypto.c | 3 +
drivers/md/Kconfig | 10 +
drivers/md/Makefile | 1 +
drivers/md/dm-inlinecrypt.c | 559 ++++++++++++++++++
5 files changed, 695 insertions(+)
create mode 100644 Documentation/admin-guide/device-mapper/dm-inlinecrypt.rst
create mode 100644 drivers/md/dm-inlinecrypt.c
--
2.34.1
* [PATCH v2 1/3] block: export blk-crypto symbols required by dm-inlinecrypt
2026-04-10 13:40 [PATCH v2 0/3] dm-inlinecrypt: add target for inline block device encryption Linlin Zhang
@ 2026-04-10 13:40 ` Linlin Zhang
2026-04-10 13:40 ` [PATCH v2 2/3] dm-inlinecrypt: add target for inline block device encryption Linlin Zhang
2026-04-10 13:40 ` [PATCH v2 3/3] dm: add documentation for dm-inlinecrypt target Linlin Zhang
2 siblings, 0 replies; 9+ messages in thread
From: Linlin Zhang @ 2026-04-10 13:40 UTC (permalink / raw)
To: linux-block, ebiggers, mpatocka, gmazyland
Cc: linux-kernel, adrianvovk, dm-devel, quic_mdalam, israelr, hch,
axboe
From: Eric Biggers <ebiggers@google.com>
bio_crypt_set_ctx(), blk_crypto_init_key(), and
blk_crypto_start_using_key() are needed to use inline encryption; see
Documentation/block/inline-encryption.rst. Export them so that
dm-inlinecrypt can use them. The only reason these weren't exported
before was that inline encryption was previously used only by fs/crypto/
which is built-in code.
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
block/blk-crypto.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 856d3c5b1fa0..40a99a859748 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -116,6 +116,7 @@ void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key,
bio->bi_crypt_context = bc;
}
+EXPORT_SYMBOL_GPL(bio_crypt_set_ctx);
void __bio_crypt_free_ctx(struct bio *bio)
{
@@ -349,6 +350,7 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key,
return 0;
}
+EXPORT_SYMBOL_GPL(blk_crypto_init_key);
bool blk_crypto_config_supported_natively(struct block_device *bdev,
const struct blk_crypto_config *cfg)
@@ -399,6 +401,7 @@ int blk_crypto_start_using_key(struct block_device *bdev,
}
return blk_crypto_fallback_start_using_mode(key->crypto_cfg.crypto_mode);
}
+EXPORT_SYMBOL_GPL(blk_crypto_start_using_key);
/**
* blk_crypto_evict_key() - Evict a blk_crypto_key from a block_device
--
2.34.1
* [PATCH v2 2/3] dm-inlinecrypt: add target for inline block device encryption
2026-04-10 13:40 [PATCH v2 0/3] dm-inlinecrypt: add target for inline block device encryption Linlin Zhang
2026-04-10 13:40 ` [PATCH v2 1/3] block: export blk-crypto symbols required by dm-inlinecrypt Linlin Zhang
@ 2026-04-10 13:40 ` Linlin Zhang
2026-04-27 1:19 ` Benjamin Marzinski
2026-04-27 5:23 ` Benjamin Marzinski
2026-04-10 13:40 ` [PATCH v2 3/3] dm: add documentation for dm-inlinecrypt target Linlin Zhang
2 siblings, 2 replies; 9+ messages in thread
From: Linlin Zhang @ 2026-04-10 13:40 UTC (permalink / raw)
To: linux-block, ebiggers, mpatocka, gmazyland
Cc: linux-kernel, adrianvovk, dm-devel, quic_mdalam, israelr, hch,
axboe
From: Eric Biggers <ebiggers@google.com>
Add a new device-mapper target "dm-inlinecrypt" that is similar to
dm-crypt but uses the blk-crypto API instead of the regular crypto API.
This allows it to take advantage of inline encryption hardware such as
that commonly built into UFS host controllers.
The table syntax matches dm-crypt's, but for now only a stripped-down
set of parameters is supported. For example, for now AES-256-XTS is the
only supported cipher.
dm-inlinecrypt is based on Android's dm-default-key with the
controversial passthrough support removed. Note that due to the removal
of passthrough support, use of dm-inlinecrypt in combination with
fscrypt causes double encryption of file contents (similar to dm-crypt +
fscrypt), with the fscrypt layer not being able to use the inline
encryption hardware. This makes dm-inlinecrypt unusable on systems such
as Android that use fscrypt and where a more optimized approach is
needed. It is however suitable as a replacement for dm-crypt.
dm-inlinecrypt supports both keyring keys and hex keys; the former avoids
exposing the key in the dm-table message. Similar to dm-default-key in
Android, it falls back to the software block crypto (blk-crypto-fallback)
when the inline crypto hardware cannot support the requested cipher.
Test:
dmsetup create inlinecrypt_logon --table "0 `blockdev --getsz $1` \
inlinecrypt aes-xts-plain64 :64:logon:fde:dminlinecrypt_test_key 0 $1 0"
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Linlin Zhang <linlin.zhang@oss.qualcomm.com>
---
drivers/md/Kconfig | 10 +
drivers/md/Makefile | 1 +
drivers/md/dm-inlinecrypt.c | 559 ++++++++++++++++++++++++++++++++++++
3 files changed, 570 insertions(+)
create mode 100644 drivers/md/dm-inlinecrypt.c
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index c58a9a8ea54e..aa541cc22ecc 100644
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -313,6 +313,16 @@ config DM_CRYPT
If unsure, say N.
+config DM_INLINECRYPT
+ tristate "Inline encryption target support"
+ depends on BLK_DEV_DM
+ depends on BLK_INLINE_ENCRYPTION
+ help
+ This device-mapper target is similar to dm-crypt, but it uses the
+ blk-crypto API instead of the regular crypto API. This allows it to
+ take advantage of inline encryption hardware such as that commonly
+ built into UFS host controllers.
+
config DM_SNAPSHOT
tristate "Snapshot target"
depends on BLK_DEV_DM
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index c338cc6fbe2e..517d1f7d8288 100644
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -55,6 +55,7 @@ obj-$(CONFIG_DM_UNSTRIPED) += dm-unstripe.o
obj-$(CONFIG_DM_BUFIO) += dm-bufio.o
obj-$(CONFIG_DM_BIO_PRISON) += dm-bio-prison.o
obj-$(CONFIG_DM_CRYPT) += dm-crypt.o
+obj-$(CONFIG_DM_INLINECRYPT) += dm-inlinecrypt.o
obj-$(CONFIG_DM_DELAY) += dm-delay.o
obj-$(CONFIG_DM_DUST) += dm-dust.o
obj-$(CONFIG_DM_FLAKEY) += dm-flakey.o
diff --git a/drivers/md/dm-inlinecrypt.c b/drivers/md/dm-inlinecrypt.c
new file mode 100644
index 000000000000..b6e98fdf8af1
--- /dev/null
+++ b/drivers/md/dm-inlinecrypt.c
@@ -0,0 +1,559 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2024 Google LLC
+ */
+
+#include <linux/blk-crypto.h>
+#include <linux/ctype.h>
+#include <linux/device-mapper.h>
+#include <linux/hex.h>
+#include <linux/module.h>
+#include <keys/user-type.h>
+
+#define DM_MSG_PREFIX "inlinecrypt"
+
+static const struct dm_inlinecrypt_cipher {
+ const char *name;
+ enum blk_crypto_mode_num mode_num;
+} dm_inlinecrypt_ciphers[] = {
+ {
+ .name = "aes-xts-plain64",
+ .mode_num = BLK_ENCRYPTION_MODE_AES_256_XTS,
+ },
+};
+
+/**
+ * struct inlinecrypt_ctx - private data of an inlinecrypt target
+ * @dev: the underlying device
+ * @start: starting sector of the range of @dev which this target actually maps.
+ * For this purpose a "sector" is 512 bytes.
+ * @cipher_string: the name of the encryption algorithm being used
+ * @iv_offset: starting offset for IVs. IVs are generated as if the target were
+ * preceded by @iv_offset 512-byte sectors.
+ * @sector_size: crypto sector size in bytes (usually 4096)
+ * @sector_bits: log2(sector_size)
+ * @key: the encryption key to use
+ * @max_dun: the maximum DUN that may be used (computed from other params)
+ */
+struct inlinecrypt_ctx {
+ struct dm_dev *dev;
+ sector_t start;
+ const char *cipher_string;
+ unsigned int key_size;
+ u64 iv_offset;
+ unsigned int sector_size;
+ unsigned int sector_bits;
+ struct blk_crypto_key key;
+ u64 max_dun;
+};
+
+static const struct dm_inlinecrypt_cipher *
+lookup_cipher(const char *cipher_string)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(dm_inlinecrypt_ciphers); i++) {
+ if (strcmp(cipher_string, dm_inlinecrypt_ciphers[i].name) == 0)
+ return &dm_inlinecrypt_ciphers[i];
+ }
+ return NULL;
+}
+
+static void inlinecrypt_dtr(struct dm_target *ti)
+{
+ struct inlinecrypt_ctx *ctx = ti->private;
+
+ if (ctx->dev) {
+ if (ctx->key.size)
+ blk_crypto_evict_key(ctx->dev->bdev, &ctx->key);
+ dm_put_device(ti, ctx->dev);
+ }
+ kfree_sensitive(ctx->cipher_string);
+ kfree_sensitive(ctx);
+}
+
+static bool contains_whitespace(const char *str)
+{
+ while (*str)
+ if (isspace(*str++))
+ return true;
+ return false;
+}
+
+static int set_key_user(struct key *key, char *bin_key,
+ const unsigned int bin_key_size)
+{
+ const struct user_key_payload *ukp;
+
+ ukp = user_key_payload_locked(key);
+ if (!ukp)
+ return -EKEYREVOKED;
+
+ if (bin_key_size != ukp->datalen)
+ return -EINVAL;
+
+ memcpy(bin_key, ukp->data, bin_key_size);
+
+ return 0;
+}
+
+static int inlinecrypt_get_keyring_key(const char *key_string, u8 *bin_key,
+ const unsigned int bin_key_size)
+{
+ char *key_desc;
+ int ret;
+ struct key_type *type;
+ struct key *key;
+ int (*set_key)(struct key *key, char *bin_key,
+ const unsigned int bin_key_size);
+
+ /*
+ * Reject key_string with whitespace. dm core currently lacks code for
+ * proper whitespace escaping in arguments on DM_TABLE_STATUS path.
+ */
+ if (contains_whitespace(key_string)) {
+ DMERR("whitespace chars not allowed in key string");
+ return -EINVAL;
+ }
+
+ /* look for next ':' separating key_type from key_description */
+ key_desc = strchr(key_string, ':');
+ if (!key_desc || key_desc == key_string || !strlen(key_desc + 1))
+ return -EINVAL;
+
+ if (!strncmp(key_string, "logon:", key_desc - key_string + 1)) {
+ type = &key_type_logon;
+ set_key = set_key_user;
+ } else {
+ return -EINVAL;
+ }
+
+ key = request_key(type, key_desc + 1, NULL);
+ if (IS_ERR(key))
+ return PTR_ERR(key);
+
+ down_read(&key->sem);
+
+ ret = set_key(key, (char *)bin_key, bin_key_size);
+ if (ret < 0) {
+ up_read(&key->sem);
+ key_put(key);
+ return ret;
+ }
+
+ up_read(&key->sem);
+ key_put(key);
+
+ return ret;
+}
+
+static int inlinecrypt_get_key(const char *key_string,
+ u8 key[BLK_CRYPTO_MAX_ANY_KEY_SIZE],
+ const unsigned int key_size)
+{
+ int ret = 0;
+
+ /* ':' means the key is in kernel keyring, short-circuit normal key processing */
+ if (key_string[0] == ':') {
+ if (key_size > BLK_CRYPTO_MAX_ANY_KEY_SIZE) {
+ DMERR("Invalid keysize");
+ return -EINVAL;
+ }
+ /* key string should be :<logon|user>:<key_desc> */
+ ret = inlinecrypt_get_keyring_key(key_string + 1, key, key_size);
+ goto out;
+ }
+
+	if (key_size > BLK_CRYPTO_MAX_ANY_KEY_SIZE
+ || key_size % 2
+ || !key_size) {
+ DMERR("Invalid keysize");
+ return -EINVAL;
+ }
+ if (hex2bin(key, key_string, key_size) != 0)
+ ret = -EINVAL;
+
+out:
+ return ret;
+}
+
+static int get_key_size(char **key_string)
+{
+ char *colon, dummy;
+ int ret;
+
+ if (*key_string[0] != ':')
+ return strlen(*key_string) >> 1;
+
+ /* look for next ':' in key string */
+ colon = strpbrk(*key_string + 1, ":");
+ if (!colon)
+ return -EINVAL;
+
+ if (sscanf(*key_string + 1, "%u%c", &ret, &dummy) != 2 || dummy != ':')
+ return -EINVAL;
+
+ /* remaining key string should be :<logon|user>:<key_desc> */
+ *key_string = colon;
+
+ return ret;
+}
+
+static int inlinecrypt_ctr_optional(struct dm_target *ti,
+ unsigned int argc, char **argv)
+{
+ struct inlinecrypt_ctx *ctx = ti->private;
+ struct dm_arg_set as;
+ static const struct dm_arg _args[] = {
+ {0, 3, "Invalid number of feature args"},
+ };
+ unsigned int opt_params;
+ const char *opt_string;
+ bool iv_large_sectors = false;
+ char dummy;
+ int err;
+
+ as.argc = argc;
+ as.argv = argv;
+
+ err = dm_read_arg_group(_args, &as, &opt_params, &ti->error);
+ if (err)
+ return err;
+
+ while (opt_params--) {
+ opt_string = dm_shift_arg(&as);
+ if (!opt_string) {
+ ti->error = "Not enough feature arguments";
+ return -EINVAL;
+ }
+ if (!strcmp(opt_string, "allow_discards")) {
+ ti->num_discard_bios = 1;
+ } else if (sscanf(opt_string, "sector_size:%u%c",
+ &ctx->sector_size, &dummy) == 1) {
+ if (ctx->sector_size < SECTOR_SIZE ||
+ ctx->sector_size > 4096 ||
+ !is_power_of_2(ctx->sector_size)) {
+ ti->error = "Invalid sector_size";
+ return -EINVAL;
+ }
+ } else if (!strcmp(opt_string, "iv_large_sectors")) {
+ iv_large_sectors = true;
+ } else {
+ ti->error = "Invalid feature arguments";
+ return -EINVAL;
+ }
+ }
+
+ /* dm-inlinecrypt doesn't implement iv_large_sectors=false. */
+ if (ctx->sector_size != SECTOR_SIZE && !iv_large_sectors) {
+ ti->error = "iv_large_sectors must be specified";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/*
+ * Construct an inlinecrypt mapping:
+ * <cipher> [<key>|:<key_size>:<logon>:<key_description>] <iv_offset> <dev_path> <start>
+ *
+ * This syntax matches dm-crypt's, but the set of supported functionality has
+ * been stripped down.
+ */
+static int inlinecrypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+ struct inlinecrypt_ctx *ctx;
+ const struct dm_inlinecrypt_cipher *cipher;
+ u8 raw_key[BLK_CRYPTO_MAX_ANY_KEY_SIZE];
+ unsigned int dun_bytes;
+ unsigned long long tmpll;
+ char dummy;
+ int err;
+
+ if (argc < 5) {
+ ti->error = "Not enough arguments";
+ return -EINVAL;
+ }
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx) {
+ ti->error = "Out of memory";
+ return -ENOMEM;
+ }
+ ti->private = ctx;
+
+ /* <cipher> */
+ ctx->cipher_string = kstrdup(argv[0], GFP_KERNEL);
+ if (!ctx->cipher_string) {
+ ti->error = "Out of memory";
+ err = -ENOMEM;
+ goto bad;
+ }
+ cipher = lookup_cipher(ctx->cipher_string);
+ if (!cipher) {
+ ti->error = "Unsupported cipher";
+ err = -EINVAL;
+ goto bad;
+ }
+
+ /* <key> */
+	err = get_key_size(&argv[1]);
+	if (err < 0) {
+		ti->error = "Cannot parse key size";
+		goto bad;
+	}
+	ctx->key_size = err;
+ err = inlinecrypt_get_key(argv[1], raw_key, ctx->key_size);
+ if (err) {
+ ti->error = "Malformed key string";
+ goto bad;
+ }
+
+ /* <iv_offset> */
+ if (sscanf(argv[2], "%llu%c", &ctx->iv_offset, &dummy) != 1) {
+ ti->error = "Invalid iv_offset sector";
+ err = -EINVAL;
+ goto bad;
+ }
+
+ /* <dev_path> */
+ err = dm_get_device(ti, argv[3], dm_table_get_mode(ti->table),
+ &ctx->dev);
+ if (err) {
+ ti->error = "Device lookup failed";
+ goto bad;
+ }
+
+ /* <start> */
+ if (sscanf(argv[4], "%llu%c", &tmpll, &dummy) != 1 ||
+ tmpll != (sector_t)tmpll) {
+ ti->error = "Invalid start sector";
+ err = -EINVAL;
+ goto bad;
+ }
+ ctx->start = tmpll;
+
+ /* optional arguments */
+ ctx->sector_size = SECTOR_SIZE;
+ if (argc > 5) {
+ err = inlinecrypt_ctr_optional(ti, argc - 5, &argv[5]);
+ if (err)
+ goto bad;
+ }
+ ctx->sector_bits = ilog2(ctx->sector_size);
+ if (ti->len & ((ctx->sector_size >> SECTOR_SHIFT) - 1)) {
+ ti->error = "Device size is not a multiple of sector_size";
+ err = -EINVAL;
+ goto bad;
+ }
+
+ ctx->max_dun = (ctx->iv_offset + ti->len - 1) >>
+ (ctx->sector_bits - SECTOR_SHIFT);
+ dun_bytes = DIV_ROUND_UP(fls64(ctx->max_dun), 8);
+
+ err = blk_crypto_init_key(&ctx->key, raw_key, ctx->key_size,
+ BLK_CRYPTO_KEY_TYPE_RAW,
+ cipher->mode_num, dun_bytes,
+ ctx->sector_size);
+ if (err) {
+ ti->error = "Error initializing blk-crypto key";
+ goto bad;
+ }
+
+ err = blk_crypto_start_using_key(ctx->dev->bdev, &ctx->key);
+ if (err) {
+ ti->error = "Error starting to use blk-crypto";
+ goto bad;
+ }
+
+ ti->num_flush_bios = 1;
+
+ err = 0;
+ goto out;
+
+bad:
+ inlinecrypt_dtr(ti);
+out:
+ memzero_explicit(raw_key, sizeof(raw_key));
+ return err;
+}
+
+static int inlinecrypt_map(struct dm_target *ti, struct bio *bio)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+ sector_t sector_in_target;
+ u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = {};
+
+ bio_set_dev(bio, ctx->dev->bdev);
+
+ /*
+ * If the bio is a device-level request which doesn't target a specific
+ * sector, there's nothing more to do.
+ */
+ if (bio_sectors(bio) == 0)
+ return DM_MAPIO_REMAPPED;
+
+ /*
+ * The bio should never have an encryption context already, since
+ * dm-inlinecrypt doesn't pass through any inline encryption
+ * capabilities to the layer above it.
+ */
+ if (WARN_ON_ONCE(bio_has_crypt_ctx(bio)))
+ return DM_MAPIO_KILL;
+
+ /* Map the bio's sector to the underlying device. (512-byte sectors) */
+ sector_in_target = dm_target_offset(ti, bio->bi_iter.bi_sector);
+ bio->bi_iter.bi_sector = ctx->start + sector_in_target;
+ /*
+ * If the bio doesn't have any data (e.g. if it's a DISCARD request),
+ * there's nothing more to do.
+ */
+ if (!bio_has_data(bio))
+ return DM_MAPIO_REMAPPED;
+
+ /* Calculate the DUN and enforce data-unit (crypto sector) alignment. */
+ dun[0] = ctx->iv_offset + sector_in_target; /* 512-byte sectors */
+ if (dun[0] & ((ctx->sector_size >> SECTOR_SHIFT) - 1))
+ return DM_MAPIO_KILL;
+ dun[0] >>= ctx->sector_bits - SECTOR_SHIFT; /* crypto sectors */
+
+ /*
+ * This check isn't necessary as we should have calculated max_dun
+ * correctly, but be safe.
+ */
+ if (WARN_ON_ONCE(dun[0] > ctx->max_dun))
+ return DM_MAPIO_KILL;
+
+ bio_crypt_set_ctx(bio, &ctx->key, dun, GFP_NOIO);
+
+ /*
+ * Since we've added an encryption context to the bio and
+ * blk-crypto-fallback may be needed to process it, it's necessary to
+ * use the fallback-aware bio submission code rather than
+ * unconditionally returning DM_MAPIO_REMAPPED.
+ *
+ * To get the correct accounting for a dm target in the case where
+ * __blk_crypto_submit_bio() doesn't take ownership of the bio (returns
+ * true), call __blk_crypto_submit_bio() directly and return
+ * DM_MAPIO_REMAPPED in that case, rather than relying on
+ * blk_crypto_submit_bio() which calls submit_bio() in that case.
+ */
+ if (__blk_crypto_submit_bio(bio))
+ return DM_MAPIO_REMAPPED;
+ return DM_MAPIO_SUBMITTED;
+}
+
+static void inlinecrypt_status(struct dm_target *ti, status_type_t type,
+ unsigned int status_flags, char *result,
+ unsigned int maxlen)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+ unsigned int sz = 0;
+ int num_feature_args = 0;
+
+ switch (type) {
+ case STATUSTYPE_INFO:
+ case STATUSTYPE_IMA:
+ result[0] = '\0';
+ break;
+
+ case STATUSTYPE_TABLE:
+ /*
+ * Warning: like dm-crypt, dm-inlinecrypt includes the key in
+ * the returned table. Userspace is responsible for redacting
+ * the key when needed.
+ */
+ DMEMIT("%s %*phN %llu %s %llu", ctx->cipher_string,
+ ctx->key.size, ctx->key.bytes, ctx->iv_offset,
+ ctx->dev->name, ctx->start);
+ num_feature_args += !!ti->num_discard_bios;
+ if (ctx->sector_size != SECTOR_SIZE)
+ num_feature_args += 2;
+ if (num_feature_args != 0) {
+ DMEMIT(" %d", num_feature_args);
+ if (ti->num_discard_bios)
+ DMEMIT(" allow_discards");
+ if (ctx->sector_size != SECTOR_SIZE) {
+ DMEMIT(" sector_size:%u", ctx->sector_size);
+ DMEMIT(" iv_large_sectors");
+ }
+ }
+ break;
+ }
+}
+
+static int inlinecrypt_prepare_ioctl(struct dm_target *ti,
+ struct block_device **bdev, unsigned int cmd,
+ unsigned long arg, bool *forward)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+ const struct dm_dev *dev = ctx->dev;
+
+ *bdev = dev->bdev;
+
+ /* Only pass ioctls through if the device sizes match exactly. */
+ return ctx->start != 0 || ti->len != bdev_nr_sectors(dev->bdev);
+}
+
+static int inlinecrypt_iterate_devices(struct dm_target *ti,
+ iterate_devices_callout_fn fn,
+ void *data)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+
+ return fn(ti, ctx->dev, ctx->start, ti->len, data);
+}
+
+#ifdef CONFIG_BLK_DEV_ZONED
+static int inlinecrypt_report_zones(struct dm_target *ti,
+ struct dm_report_zones_args *args,
+ unsigned int nr_zones)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+
+ return dm_report_zones(ctx->dev->bdev, ctx->start,
+ ctx->start + dm_target_offset(ti, args->next_sector),
+ args, nr_zones);
+}
+#else
+#define inlinecrypt_report_zones NULL
+#endif
+
+static void inlinecrypt_io_hints(struct dm_target *ti,
+ struct queue_limits *limits)
+{
+ const struct inlinecrypt_ctx *ctx = ti->private;
+ const unsigned int sector_size = ctx->sector_size;
+
+ limits->logical_block_size =
+ max_t(unsigned int, limits->logical_block_size, sector_size);
+ limits->physical_block_size =
+ max_t(unsigned int, limits->physical_block_size, sector_size);
+ limits->io_min = max_t(unsigned int, limits->io_min, sector_size);
+ limits->dma_alignment = limits->logical_block_size - 1;
+}
+
+static struct target_type inlinecrypt_target = {
+ .name = "inlinecrypt",
+ .version = {1, 0, 0},
+ /*
+ * Do not set DM_TARGET_PASSES_CRYPTO, since dm-inlinecrypt consumes the
+ * crypto capability itself.
+ */
+ .features = DM_TARGET_ZONED_HM,
+ .module = THIS_MODULE,
+ .ctr = inlinecrypt_ctr,
+ .dtr = inlinecrypt_dtr,
+ .map = inlinecrypt_map,
+ .status = inlinecrypt_status,
+ .prepare_ioctl = inlinecrypt_prepare_ioctl,
+ .iterate_devices = inlinecrypt_iterate_devices,
+ .report_zones = inlinecrypt_report_zones,
+ .io_hints = inlinecrypt_io_hints,
+};
+
+module_dm(inlinecrypt);
+
+MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+MODULE_AUTHOR("Linlin Zhang <linlin.zhang@oss.qualcomm.com>");
+MODULE_DESCRIPTION(DM_NAME " target for inline encryption");
+MODULE_LICENSE("GPL");
--
2.34.1
* [PATCH v2 3/3] dm: add documentation for dm-inlinecrypt target
2026-04-10 13:40 [PATCH v2 0/3] dm-inlinecrypt: add target for inline block device encryption Linlin Zhang
2026-04-10 13:40 ` [PATCH v2 1/3] block: export blk-crypto symbols required by dm-inlinecrypt Linlin Zhang
2026-04-10 13:40 ` [PATCH v2 2/3] dm-inlinecrypt: add target for inline block device encryption Linlin Zhang
@ 2026-04-10 13:40 ` Linlin Zhang
2026-04-10 17:07 ` Milan Broz
2 siblings, 1 reply; 9+ messages in thread
From: Linlin Zhang @ 2026-04-10 13:40 UTC (permalink / raw)
To: linux-block, ebiggers, mpatocka, gmazyland
Cc: linux-kernel, adrianvovk, dm-devel, quic_mdalam, israelr, hch,
axboe
This adds the admin-guide documentation for dm-inlinecrypt.
dm-inlinecrypt.rst is the guide to using dm-inlinecrypt.
Signed-off-by: Linlin Zhang <linlin.zhang@oss.qualcomm.com>
---
.../device-mapper/dm-inlinecrypt.rst | 122 ++++++++++++++++++
1 file changed, 122 insertions(+)
create mode 100644 Documentation/admin-guide/device-mapper/dm-inlinecrypt.rst
diff --git a/Documentation/admin-guide/device-mapper/dm-inlinecrypt.rst b/Documentation/admin-guide/device-mapper/dm-inlinecrypt.rst
new file mode 100644
index 000000000000..c302ba73fc38
--- /dev/null
+++ b/Documentation/admin-guide/device-mapper/dm-inlinecrypt.rst
@@ -0,0 +1,122 @@
+==============
+dm-inlinecrypt
+==============
+
+Device-Mapper's "inlinecrypt" target provides transparent encryption of block
+devices using inline encryption hardware.
+
+For a more detailed description of inline encryption, see:
+https://docs.kernel.org/block/inline-encryption.html
+
+Parameters::
+
+ <cipher> <key> <iv_offset> <device path> \
+ <offset> [<#opt_params> <opt_params>]
+
+<cipher>
+ Encryption cipher type.
+
+ The cipher specifications format is::
+
+ cipher
+
+ Examples::
+
+ aes-xts-plain64
+
+ The cipher types correspond one-to-one with blk-crypto encryption
+ modes. For instance, the crypto mode corresponding to aes-xts-plain64
+ is BLK_ENCRYPTION_MODE_AES_256_XTS.
+
+<key>
+ Key used for encryption. It is encoded either as a hexadecimal number
+ or it can be passed as <key_string> prefixed with single colon
+ character (':') for keys residing in kernel keyring service.
+ You can only use key sizes that are valid for the selected cipher.
+ Note that the size in bytes of a valid key must be in the range below.
+
+ [BLK_CRYPTO_KEY_TYPE_RAW, BLK_CRYPTO_KEY_TYPE_HW_WRAPPED]
+
+<key_string>
+ The kernel keyring key is identified by string in following format:
+ <key_size>:<key_type>:<key_description>.
+
+<key_size>
+ The encryption key size in bytes. The kernel key payload size must match
+ the value passed in <key_size>.
+
+<key_type>
+ The kernel key type; currently only 'logon' is supported by this target.
+
+<key_description>
+ The kernel keyring key description that the inlinecrypt target should
+ look for when loading the key of <key_type>.
+
+<iv_offset>
+ The IV offset is a sector count that is added to the sector number
+ before creating the IV.
+
+<device path>
+ This is the device that is going to be used as backend and contains the
+ encrypted data. You can specify it as a path like /dev/xxx or a device
+ number <major>:<minor>.
+
+<offset>
+ Starting sector within the device where the encrypted data begins.
+
+<#opt_params>
+ Number of optional parameters. If there are no optional parameters,
+ the optional parameters section can be skipped or #opt_params can be zero.
+ Otherwise #opt_params is the number of following arguments.
+
+ Example of optional parameters section:
+ allow_discards sector_size:4096 iv_large_sectors
+
+allow_discards
+ Block discard requests (a.k.a. TRIM) are passed through the inlinecrypt
+ device. The default is to ignore discard requests.
+
+ WARNING: Assess the specific security risks carefully before enabling this
+ option. For example, allowing discards on encrypted devices may lead to
+ the leak of information about the ciphertext device (filesystem type,
+ used space etc.) if the discarded blocks can be located easily on the
+ device later.
+
+sector_size:<bytes>
+ Use <bytes> as the encryption unit instead of 512-byte sectors.
+ This option can be in the range 512 - 4096 bytes and must be a power of
+ two. The virtual device will announce this size as its minimal IO and
+ logical sector size.
+
+iv_large_sectors
+ IV generators will use the sector number counted in <sector_size>
+ units instead of the default 512-byte sectors.
+
+ For example, if <sector_size> is 4096 bytes, the plain64 IV for the
+ second sector will be 8 (without the flag) and 1 if iv_large_sectors
+ is present. The <iv_offset> must be a multiple of <sector_size> (in
+ 512-byte units) if this flag is specified.
+
+Example scripts
+===============
+LUKS (Linux Unified Key Setup) is now the preferred way to set up disk
+encryption with dm-inlinecrypt using the 'cryptsetup' utility, see
+https://gitlab.com/cryptsetup/cryptsetup
+
+::
+
+ #!/bin/sh
# Create an inlinecrypt device using dmsetup
+ dmsetup create inlinecrypt1 --table "0 `blockdev --getsz $1` inlinecrypt aes-xts-plain64 babebabebabebabebabebabebabebabebabebabebabebabebabebabebabebabe 0 $1 0"
+
+::
+
+ #!/bin/sh
# Create an inlinecrypt device using dmsetup when the encryption key is stored in the keyring service
+ dmsetup create inlinecrypt2 --table "0 `blockdev --getsz $1` inlinecrypt aes-xts-plain64 :64:logon:fde:dminlinecrypt_test_key 0 $1 0"
+
+::
+
+ #!/bin/sh
# Create an inlinecrypt device using cryptsetup and a LUKS header with the default cipher
+ cryptsetup luksFormat $1
+ cryptsetup luksOpen $1 inlinecrypt1
--
2.34.1
* Re: [PATCH v2 3/3] dm: add documentation for dm-inlinecrypt target
2026-04-10 13:40 ` [PATCH v2 3/3] dm: add documentation for dm-inlinecrypt target Linlin Zhang
@ 2026-04-10 17:07 ` Milan Broz
2026-04-24 13:53 ` Linlin Zhang
0 siblings, 1 reply; 9+ messages in thread
From: Milan Broz @ 2026-04-10 17:07 UTC (permalink / raw)
To: Linlin Zhang, linux-block, ebiggers, mpatocka
Cc: linux-kernel, adrianvovk, dm-devel, quic_mdalam, israelr, hch,
axboe
On 4/10/26 3:40 PM, Linlin Zhang wrote:
> This adds the admin-guide documentation for dm-inlinecrypt.
>
> dm-inlinecrypt.rst is the guide to using dm-inlinecrypt.
>
> Signed-off-by: Linlin Zhang <linlin.zhang@oss.qualcomm.com>
> ---
...
> +
> +<cipher>
> + Encryption cipher type.
> +
> + The cipher specifications format is::
> +
> + cipher
> +
> + Examples::
> +
> + aes-xts-plain64
> +
> + The cipher type is correspond one-to-one with encryption modes. For
... with encryption modes supported for inline crypto in block layer?
In your patch only BLK_ENCRYPTION_MODE_AES_256_XTS.
> + instance, the corresponding crypto mode of aes-xts-plain64 is
> + BLK_ENCRYPTION_MODE_AES_256_XTS.
...
> +iv_large_sectors
> + IV generators will use sector number counted in <sector_size> units
> + instead of default 512 bytes sectors.
> +
> + For example, if <sector_size> is 4096 bytes, plain64 IV for the second
> + sector will be 8 (without flag) and 1 if iv_large_sectors is present.
> + The <iv_offset> must be multiple of <sector_size> (in 512 bytes units)
> + if this flag is specified.
Is it true? I see this comment in the code:
/* dm-inlinecrypt doesn't implement iv_large_sectors=false. */
...
> +Example scripts
> +===============
> +LUKS (Linux Unified Key Setup) is now the preferred way to set up disk
> +encryption with dm-inlinecrypt using the 'cryptsetup' utility, see
> +https://gitlab.com/cryptsetup/cryptsetup
Cryptsetup has no support for inlinecrypt and it is question if it should have.
It would require additional options and maybe LUKS2 metadata flag to make it persistent.
How did you test it? Please remove this cryptsetup example.
It can be added later when userspace get this functionality.
...
> +
> + #!/bin/sh
> + # Create a inlinecrypt device using cryptsetup and LUKS header with default cipher
> + cryptsetup luksFormat $1
> + cryptsetup luksOpen $1 inlinecrypt1
ditto. This example will use dm-crypt, not dm-inlinecrypt.
Milan
* Re: [PATCH v2 3/3] dm: add documentation for dm-inlinecrypt target
2026-04-10 17:07 ` Milan Broz
@ 2026-04-24 13:53 ` Linlin Zhang
0 siblings, 0 replies; 9+ messages in thread
From: Linlin Zhang @ 2026-04-24 13:53 UTC (permalink / raw)
To: Milan Broz, linux-block, ebiggers, mpatocka
Cc: linux-kernel, adrianvovk, dm-devel, quic_mdalam, israelr, hch,
axboe
On 4/11/2026 1:07 AM, Milan Broz wrote:
> On 4/10/26 3:40 PM, Linlin Zhang wrote:
>> This adds the admin-guide documentation for dm-inlinecrypt.
>>
>> dm-inlinecrypt.rst is the guide to using dm-inlinecrypt.
>>
>> Signed-off-by: Linlin Zhang <linlin.zhang@oss.qualcomm.com>
>> ---
>
> ...
>
>> +
>> +<cipher>
>> + Encryption cipher type.
>> +
>> + The cipher specifications format is::
>> +
>> + cipher
>> +
>> + Examples::
>> +
>> + aes-xts-plain64
>> +
>> + The cipher type is correspond one-to-one with encryption modes. For
>
> ... with encryption modes supported for inline crypto in block layer?
>
> In your patch only BLK_ENCRYPTION_MODE_AES_256_XTS.
Thanks for your insights!
Yes, here the encryption modes refer to the inline crypto modes supported
by the block layer. Currently, this patch only supports
BLK_ENCRYPTION_MODE_AES_256_XTS.
I will reword it as:
The cipher type corresponds to the encryption modes supported by
inline crypto in the block layer. Currently, only
BLK_ENCRYPTION_MODE_AES_256_XTS (i.e. aes-xts-plain64) is supported.
Could you please let me know if you expect more than that?
>
>> + instance, the corresponding crypto mode of aes-xts-plain64 is
>> + BLK_ENCRYPTION_MODE_AES_256_XTS.
>
> ...
>
>> +iv_large_sectors
>> + IV generators will use sector number counted in <sector_size> units
>> + instead of default 512 bytes sectors.
>> +
>> + For example, if <sector_size> is 4096 bytes, plain64 IV for the second
>> + sector will be 8 (without flag) and 1 if iv_large_sectors is present.
>> + The <iv_offset> must be multiple of <sector_size> (in 512 bytes units)
>> + if this flag is specified.
>
> Is it true? I see this comment in the code:
>
> /* dm-inlinecrypt doesn't implement iv_large_sectors=false. */
Thanks for your comment!
The example is describing the general IV generation semantics of
iv_large_sectors versus the legacy behavior, i.e. how plain64 IVs
would be computed conceptually with and without the flag.
However, for dm-inlinecrypt, the comment you quoted is correct:
iv_large_sectors=false is not implemented. When a sector size
larger than 512 bytes is used, iv_large_sectors is mandatory, and
the legacy 512-byte-based IV behavior is intentionally unsupported.
In the code this is enforced by rejecting configurations where
sector_size != 512 and iv_large_sectors is not specified, so in
practice the “without flag” case is not usable for dm-inlinecrypt.
I reword it as:
iv_large_sectors
Use <sector_size>-based sector numbers for IV generation instead of
512-byte sectors.
For dm-inlinecrypt, this flag must be specified when <sector_size>
is larger than 512 bytes. The legacy 512-byte-based IV behavior is
not supported.
When specified, if <sector_size> is 4096 bytes, plain64 IV for the
second sector will be 1, and <iv_offset> must be a multiple of
<sector_size> (in 512-byte units).
Do you think that's enough?
>
> ...
>
>> +Example scripts
>> +===============
>> +LUKS (Linux Unified Key Setup) is now the preferred way to set up disk
>> +encryption with dm-inlinecrypt using the 'cryptsetup' utility, see
>> +https://gitlab.com/cryptsetup/cryptsetup
>
> Cryptsetup has no support for inlinecrypt, and it is a question whether it should.
> It would require additional options and maybe a LUKS2 metadata flag to make it persistent.
>
> How did you test it? Please remove this cryptsetup example.
> It can be added later when userspace gets this functionality.
You are right.
cryptsetup currently has no support for dm-inlinecrypt, and the example
would indeed create a dm-crypt device instead. Supporting dm-inlinecrypt
in cryptsetup would require explicit userspace changes and possibly
extensions to LUKS2 metadata to make it persistent.
I did the testing using dmsetup directly, not via cryptsetup/LUKS. I'll
remove the LUKS/cryptsetup references and examples from the
documentation and leave LUKS integration to be documented once
userspace support exists.
I reword it as:
Currently, dm-inlinecrypt devices must be set up directly using dmsetup.
There is no userspace support yet to integrate dm-inlinecrypt with LUKS
or cryptsetup. In particular, cryptsetup currently only supports
dm-crypt, and cannot be used to create dm-inlinecrypt mappings.
The following examples demonstrate how to create dm-inlinecrypt devices
using dmsetup.
>
> ...> +
>> + #!/bin/sh
>> + # Create a inlinecrypt device using cryptsetup and LUKS header with default cipher
>> + cryptsetup luksFormat $1
>> + cryptsetup luksOpen $1 inlinecrypt1
>
> ditto. This example will use dm-crypt, not dm-inlinecrypt.
ACK
>
> Milan
>
* Re: [PATCH v2 2/3] dm-inlinecrypt: add target for inline block device encryption
2026-04-10 13:40 ` [PATCH v2 2/3] dm-inlinecrypt: add target for inline block device encryption Linlin Zhang
@ 2026-04-27 1:19 ` Benjamin Marzinski
2026-04-27 12:20 ` Linlin Zhang
2026-04-27 5:23 ` Benjamin Marzinski
1 sibling, 1 reply; 9+ messages in thread
From: Benjamin Marzinski @ 2026-04-27 1:19 UTC (permalink / raw)
To: Linlin Zhang
Cc: linux-block, ebiggers, mpatocka, gmazyland, linux-kernel,
adrianvovk, dm-devel, quic_mdalam, israelr, hch, axboe
On Fri, Apr 10, 2026 at 06:40:30AM -0700, Linlin Zhang wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> Add a new device-mapper target "dm-inlinecrypt" that is similar to
> dm-crypt but uses the blk-crypto API instead of the regular crypto API.
> This allows it to take advantage of inline encryption hardware such as
> that commonly built into UFS host controllers.
>
> The table syntax matches dm-crypt's, but for now only a stripped-down
> set of parameters is supported. For example, for now AES-256-XTS is the
> only supported cipher.
>
> dm-inlinecrypt is based on Android's dm-default-key with the
> controversial passthrough support removed. Note that due to the removal
> of passthrough support, use of dm-inlinecrypt in combination with
> fscrypt causes double encryption of file contents (similar to dm-crypt +
> fscrypt), with the fscrypt layer not being able to use the inline
> encryption hardware. This makes dm-inlinecrypt unusable on systems such
> as Android that use fscrypt and where a more optimized approach is
> needed. It is however suitable as a replacement for dm-crypt.
>
> dm-inlinecrypt supports both keyring keys and hex keys; the former avoids
> exposing the key in the dm-table message. Similar to dm-default-key in
> Android, it will fall back to software block crypto when the inline
> crypto hardware cannot support the expected cipher.
>
> Test:
> dmsetup create inlinecrypt_logon --table "0 `blockdev --getsz $1` \
> inlinecrypt aes-xts-plain64 :64:logon:fde:dminlinecrypt_test_key 0 $1 0"
>
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> Signed-off-by: Linlin Zhang <linlin.zhang@oss.qualcomm.com>
> ---
> drivers/md/Kconfig | 10 +
> drivers/md/Makefile | 1 +
> drivers/md/dm-inlinecrypt.c | 559 ++++++++++++++++++++++++++++++++++++
> 3 files changed, 570 insertions(+)
> create mode 100644 drivers/md/dm-inlinecrypt.c
>
> diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
> index c58a9a8ea54e..aa541cc22ecc 100644
> --- a/drivers/md/Kconfig
> +++ b/drivers/md/Kconfig
> @@ -313,6 +313,16 @@ config DM_CRYPT
>
> If unsure, say N.
>
> +config DM_INLINECRYPT
> + tristate "Inline encryption target support"
> + depends on BLK_DEV_DM
> + depends on BLK_INLINE_ENCRYPTION
> + help
> + This device-mapper target is similar to dm-crypt, but it uses the
> + blk-crypto API instead of the regular crypto API. This allows it to
> + take advantage of inline encryption hardware such as that commonly
> + built into UFS host controllers.
> +
> config DM_SNAPSHOT
> tristate "Snapshot target"
> depends on BLK_DEV_DM
> diff --git a/drivers/md/Makefile b/drivers/md/Makefile
> index c338cc6fbe2e..517d1f7d8288 100644
> --- a/drivers/md/Makefile
> +++ b/drivers/md/Makefile
> @@ -55,6 +55,7 @@ obj-$(CONFIG_DM_UNSTRIPED) += dm-unstripe.o
> obj-$(CONFIG_DM_BUFIO) += dm-bufio.o
> obj-$(CONFIG_DM_BIO_PRISON) += dm-bio-prison.o
> obj-$(CONFIG_DM_CRYPT) += dm-crypt.o
> +obj-$(CONFIG_DM_INLINECRYPT) += dm-inlinecrypt.o
> obj-$(CONFIG_DM_DELAY) += dm-delay.o
> obj-$(CONFIG_DM_DUST) += dm-dust.o
> obj-$(CONFIG_DM_FLAKEY) += dm-flakey.o
> diff --git a/drivers/md/dm-inlinecrypt.c b/drivers/md/dm-inlinecrypt.c
> new file mode 100644
> index 000000000000..b6e98fdf8af1
> --- /dev/null
> +++ b/drivers/md/dm-inlinecrypt.c
> @@ -0,0 +1,559 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright 2024 Google LLC
> + */
> +
> +#include <linux/blk-crypto.h>
> +#include <linux/ctype.h>
> +#include <linux/device-mapper.h>
> +#include <linux/hex.h>
> +#include <linux/module.h>
> +#include <keys/user-type.h>
> +
> +#define DM_MSG_PREFIX "inlinecrypt"
> +
> +static const struct dm_inlinecrypt_cipher {
> + const char *name;
> + enum blk_crypto_mode_num mode_num;
> +} dm_inlinecrypt_ciphers[] = {
> + {
> + .name = "aes-xts-plain64",
> + .mode_num = BLK_ENCRYPTION_MODE_AES_256_XTS,
> + },
> +};
> +
> +/**
> + * struct inlinecrypt_ctx - private data of an inlinecrypt target
> + * @dev: the underlying device
> + * @start: starting sector of the range of @dev which this target actually maps.
> + * For this purpose a "sector" is 512 bytes.
> + * @cipher_string: the name of the encryption algorithm being used
> + * @iv_offset: starting offset for IVs. IVs are generated as if the target were
> + * preceded by @iv_offset 512-byte sectors.
> + * @sector_size: crypto sector size in bytes (usually 4096)
> + * @sector_bits: log2(sector_size)
> + * @key: the encryption key to use
> + * @max_dun: the maximum DUN that may be used (computed from other params)
> + */
> +struct inlinecrypt_ctx {
> + struct dm_dev *dev;
> + sector_t start;
> + const char *cipher_string;
> + unsigned int key_size;
> + u64 iv_offset;
> + unsigned int sector_size;
> + unsigned int sector_bits;
> + struct blk_crypto_key key;
> + u64 max_dun;
> +};
> +
> +static const struct dm_inlinecrypt_cipher *
> +lookup_cipher(const char *cipher_string)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(dm_inlinecrypt_ciphers); i++) {
> + if (strcmp(cipher_string, dm_inlinecrypt_ciphers[i].name) == 0)
> + return &dm_inlinecrypt_ciphers[i];
> + }
> + return NULL;
> +}
> +
> +static void inlinecrypt_dtr(struct dm_target *ti)
> +{
> + struct inlinecrypt_ctx *ctx = ti->private;
> +
> + if (ctx->dev) {
> + if (ctx->key.size)
> + blk_crypto_evict_key(ctx->dev->bdev, &ctx->key);
> + dm_put_device(ti, ctx->dev);
> + }
> + kfree_sensitive(ctx->cipher_string);
> + kfree_sensitive(ctx);
> +}
> +
> +static bool contains_whitespace(const char *str)
> +{
> + while (*str)
> + if (isspace(*str++))
> + return true;
> + return false;
> +}
> +
> +static int set_key_user(struct key *key, char *bin_key,
> + const unsigned int bin_key_size)
> +{
> + const struct user_key_payload *ukp;
> +
> + ukp = user_key_payload_locked(key);
> + if (!ukp)
> + return -EKEYREVOKED;
> +
> + if (bin_key_size != ukp->datalen)
> + return -EINVAL;
> +
> + memcpy(bin_key, ukp->data, bin_key_size);
> +
> + return 0;
> +}
> +
> +static int inlinecrypt_get_keyring_key(const char *key_string, u8 *bin_key,
> + const unsigned int bin_key_size)
> +{
> + char *key_desc;
> + int ret;
> + struct key_type *type;
> + struct key *key;
There's nothing forcing CONFIG_KEYS to be set when CONFIG_DM_INLINECRYPT
is, and without it, struct key won't be defined and this won't compile.
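One minimal way to address this could be an explicit dependency in the Kconfig entry quoted earlier. A sketch, assuming CONFIG_KEYS is the right symbol to require for request_key()/struct key:

```kconfig
config DM_INLINECRYPT
	tristate "Inline encryption target support"
	depends on BLK_DEV_DM
	depends on BLK_INLINE_ENCRYPTION
	depends on KEYS
```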
> + int (*set_key)(struct key *key, char *bin_key,
> + const unsigned int bin_key_size);
> +
> + /*
> + * Reject key_string with whitespace. dm core currently lacks code for
> + * proper whitespace escaping in arguments on DM_TABLE_STATUS path.
> + */
> + if (contains_whitespace(key_string)) {
> + DMERR("whitespace chars not allowed in key string");
> + return -EINVAL;
> + }
> +
> + /* look for next ':' separating key_type from key_description */
> + key_desc = strchr(key_string, ':');
> + if (!key_desc || key_desc == key_string || !strlen(key_desc + 1))
> + return -EINVAL;
> +
> + if (!strncmp(key_string, "logon:", key_desc - key_string + 1)) {
> + type = &key_type_logon;
> + set_key = set_key_user;
> + } else {
> + return -EINVAL;
> + }
> +
> + key = request_key(type, key_desc + 1, NULL);
> + if (IS_ERR(key))
> + return PTR_ERR(key);
> +
> + down_read(&key->sem);
> +
> + ret = set_key(key, (char *)bin_key, bin_key_size);
> + if (ret < 0) {
This performs the same cleanup (up_read() and key_put()) regardless of
whether it takes this branch, so the error path could be merged with the
success path.
> + up_read(&key->sem);
> + key_put(key);
> + return ret;
> + }
> +
> + up_read(&key->sem);
> + key_put(key);
> +
> + return ret;
> +}
> +
> +static int inlinecrypt_get_key(const char *key_string,
> + u8 key[BLK_CRYPTO_MAX_ANY_KEY_SIZE],
> + const unsigned int key_size)
> +{
> + int ret = 0;
> +
> + /* ':' means the key is in kernel keyring, short-circuit normal key processing */
> + if (key_string[0] == ':') {
> + if (key_size > BLK_CRYPTO_MAX_ANY_KEY_SIZE) {
> + DMERR("Invalid keysize");
> + return -EINVAL;
> + }
> + /* key string should be :<logon|user>:<key_desc> */
> + ret = inlinecrypt_get_keyring_key(key_string + 1, key, key_size);
> + goto out;
> + }
> +
> + if (key_size > 2 * BLK_CRYPTO_MAX_ANY_KEY_SIZE
get_key_size() returns the size of the binary key in this case, so
shouldn't this check for "key_size > BLK_CRYPTO_MAX_ANY_KEY_SIZE"? Also,
the check for an odd key_size would make more sense in get_key_size().
> + || key_size % 2
> + || !key_size) {
> + DMERR("Invalid keysize");
> + return -EINVAL;
> + }
> + if (hex2bin(key, key_string, key_size) != 0)
> + ret = -EINVAL;
> +
> +out:
> + return ret;
> +}
> +
> +static int get_key_size(char **key_string)
> +{
> + char *colon, dummy;
> + int ret;
> +
> + if (*key_string[0] != ':')
> + return strlen(*key_string) >> 1;
> +
> + /* look for next ':' in key string */
> + colon = strpbrk(*key_string + 1, ":");
> + if (!colon)
> + return -EINVAL;
> +
> + if (sscanf(*key_string + 1, "%u%c", &ret, &dummy) != 2 || dummy != ':')
> + return -EINVAL;
> +
> + /* remaining key string should be :<logon|user>:<key_desc> */
> + *key_string = colon;
> +
> + return ret;
> +}
> +
> +static int inlinecrypt_ctr_optional(struct dm_target *ti,
> + unsigned int argc, char **argv)
> +{
> + struct inlinecrypt_ctx *ctx = ti->private;
> + struct dm_arg_set as;
> + static const struct dm_arg _args[] = {
> + {0, 3, "Invalid number of feature args"},
> + };
> + unsigned int opt_params;
> + const char *opt_string;
> + bool iv_large_sectors = false;
> + char dummy;
> + int err;
> +
> + as.argc = argc;
> + as.argv = argv;
> +
> + err = dm_read_arg_group(_args, &as, &opt_params, &ti->error);
> + if (err)
> + return err;
> +
> + while (opt_params--) {
> + opt_string = dm_shift_arg(&as);
> + if (!opt_string) {
> + ti->error = "Not enough feature arguments";
> + return -EINVAL;
> + }
> + if (!strcmp(opt_string, "allow_discards")) {
> + ti->num_discard_bios = 1;
> + } else if (sscanf(opt_string, "sector_size:%u%c",
> + &ctx->sector_size, &dummy) == 1) {
> + if (ctx->sector_size < SECTOR_SIZE ||
> + ctx->sector_size > 4096 ||
> + !is_power_of_2(ctx->sector_size)) {
> + ti->error = "Invalid sector_size";
> + return -EINVAL;
> + }
> + } else if (!strcmp(opt_string, "iv_large_sectors")) {
> + iv_large_sectors = true;
> + } else {
> + ti->error = "Invalid feature arguments";
> + return -EINVAL;
> + }
> + }
> +
> + /* dm-inlinecrypt doesn't implement iv_large_sectors=false. */
> + if (ctx->sector_size != SECTOR_SIZE && !iv_large_sectors) {
> + ti->error = "iv_large_sectors must be specified";
Since setting sector_size forces setting iv_large_sectors, does it
really need to be a separate parameter? Can't it just be implied by
setting a non-512 sector_size? Is this here to futureproof the table
line?
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +/*
> + * Construct an inlinecrypt mapping:
> + * <cipher> [<key>|:<key_size>:<logon>:<key_description>] <iv_offset> <dev_path> <start>
> + *
> + * This syntax matches dm-crypt's, but the set of supported functionality has
> + * been stripped down.
> + */
> +static int inlinecrypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
> +{
> + struct inlinecrypt_ctx *ctx;
> + const struct dm_inlinecrypt_cipher *cipher;
> + u8 raw_key[BLK_CRYPTO_MAX_ANY_KEY_SIZE];
> + unsigned int dun_bytes;
> + unsigned long long tmpll;
> + char dummy;
> + int err;
> +
> + if (argc < 5) {
> + ti->error = "Not enough arguments";
> + return -EINVAL;
> + }
> +
> + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> + if (!ctx) {
> + ti->error = "Out of memory";
> + return -ENOMEM;
> + }
> + ti->private = ctx;
> +
> + /* <cipher> */
> + ctx->cipher_string = kstrdup(argv[0], GFP_KERNEL);
> + if (!ctx->cipher_string) {
> + ti->error = "Out of memory";
> + err = -ENOMEM;
> + goto bad;
> + }
> + cipher = lookup_cipher(ctx->cipher_string);
> + if (!cipher) {
> + ti->error = "Unsupported cipher";
> + err = -EINVAL;
> + goto bad;
> + }
> +
> + /* <key> */
> + ctx->key_size = get_key_size(&argv[1]);
> + if (ctx->key_size < 0) {
> + ti->error = "Cannot parse key size";
> + return -EINVAL;
> + }
> + err = inlinecrypt_get_key(argv[1], raw_key, ctx->key_size);
> + if (err) {
> + ti->error = "Malformed key string";
> + goto bad;
> + }
> +
> + /* <iv_offset> */
> + if (sscanf(argv[2], "%llu%c", &ctx->iv_offset, &dummy) != 1) {
> + ti->error = "Invalid iv_offset sector";
> + err = -EINVAL;
> + goto bad;
> + }
> +
> + /* <dev_path> */
> + err = dm_get_device(ti, argv[3], dm_table_get_mode(ti->table),
> + &ctx->dev);
> + if (err) {
> + ti->error = "Device lookup failed";
> + goto bad;
> + }
> +
> + /* <start> */
> + if (sscanf(argv[4], "%llu%c", &tmpll, &dummy) != 1 ||
> + tmpll != (sector_t)tmpll) {
> + ti->error = "Invalid start sector";
> + err = -EINVAL;
> + goto bad;
> + }
> + ctx->start = tmpll;
> +
> + /* optional arguments */
> + ctx->sector_size = SECTOR_SIZE;
> + if (argc > 5) {
> + err = inlinecrypt_ctr_optional(ti, argc - 5, &argv[5]);
> + if (err)
> + goto bad;
> + }
> + ctx->sector_bits = ilog2(ctx->sector_size);
> + if (ti->len & ((ctx->sector_size >> SECTOR_SHIFT) - 1)) {
> + ti->error = "Device size is not a multiple of sector_size";
> + err = -EINVAL;
> + goto bad;
> + }
> +
> + ctx->max_dun = (ctx->iv_offset + ti->len - 1) >>
> + (ctx->sector_bits - SECTOR_SHIFT);
> + dun_bytes = DIV_ROUND_UP(fls64(ctx->max_dun), 8);
> +
> + err = blk_crypto_init_key(&ctx->key, raw_key, ctx->key_size,
> + BLK_CRYPTO_KEY_TYPE_RAW,
> + cipher->mode_num, dun_bytes,
> + ctx->sector_size);
> + if (err) {
> + ti->error = "Error initializing blk-crypto key";
> + goto bad;
> + }
> +
> + err = blk_crypto_start_using_key(ctx->dev->bdev, &ctx->key);
> + if (err) {
> + ti->error = "Error starting to use blk-crypto";
> + goto bad;
> + }
> +
> + ti->num_flush_bios = 1;
> +
> + err = 0;
> + goto out;
> +
> +bad:
> + inlinecrypt_dtr(ti);
> +out:
> + memzero_explicit(raw_key, sizeof(raw_key));
> + return err;
> +}
> +
> +static int inlinecrypt_map(struct dm_target *ti, struct bio *bio)
> +{
> + const struct inlinecrypt_ctx *ctx = ti->private;
> + sector_t sector_in_target;
> + u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = {};
> +
> + bio_set_dev(bio, ctx->dev->bdev);
> +
> + /*
> + * If the bio is a device-level request which doesn't target a specific
> + * sector, there's nothing more to do.
> + */
> + if (bio_sectors(bio) == 0)
> + return DM_MAPIO_REMAPPED;
> +
> + /*
> + * The bio should never have an encryption context already, since
> + * dm-inlinecrypt doesn't pass through any inline encryption
> + * capabilities to the layer above it.
> + */
> + if (WARN_ON_ONCE(bio_has_crypt_ctx(bio)))
> + return DM_MAPIO_KILL;
> +
> + /* Map the bio's sector to the underlying device. (512-byte sectors) */
> + sector_in_target = dm_target_offset(ti, bio->bi_iter.bi_sector);
> + bio->bi_iter.bi_sector = ctx->start + sector_in_target;
> + /*
> + * If the bio doesn't have any data (e.g. if it's a DISCARD request),
> + * there's nothing more to do.
> + */
> + if (!bio_has_data(bio))
> + return DM_MAPIO_REMAPPED;
> +
> + /* Calculate the DUN and enforce data-unit (crypto sector) alignment. */
> + dun[0] = ctx->iv_offset + sector_in_target; /* 512-byte sectors */
> + if (dun[0] & ((ctx->sector_size >> SECTOR_SHIFT) - 1))
> + return DM_MAPIO_KILL;
If ctx->iv_offset is not a multiple of ctx->sector_size, this will
always fail. ctx->iv_offset should probably get validated in
inlinecrypt_ctr()
-Ben
> + dun[0] >>= ctx->sector_bits - SECTOR_SHIFT; /* crypto sectors */
> +
> + /*
> + * This check isn't necessary as we should have calculated max_dun
> + * correctly, but be safe.
> + */
> + if (WARN_ON_ONCE(dun[0] > ctx->max_dun))
> + return DM_MAPIO_KILL;
> +
> + bio_crypt_set_ctx(bio, &ctx->key, dun, GFP_NOIO);
> +
> + /*
> + * Since we've added an encryption context to the bio and
> + * blk-crypto-fallback may be needed to process it, it's necessary to
> + * use the fallback-aware bio submission code rather than
> + * unconditionally returning DM_MAPIO_REMAPPED.
> + *
> + * To get the correct accounting for a dm target in the case where
> + * __blk_crypto_submit_bio() doesn't take ownership of the bio (returns
> + * true), call __blk_crypto_submit_bio() directly and return
> + * DM_MAPIO_REMAPPED in that case, rather than relying on
> + * blk_crypto_submit_bio() which calls submit_bio() in that case.
> + */
> + if (__blk_crypto_submit_bio(bio))
> + return DM_MAPIO_REMAPPED;
> + return DM_MAPIO_SUBMITTED;
> +}
> +
> +static void inlinecrypt_status(struct dm_target *ti, status_type_t type,
> + unsigned int status_flags, char *result,
> + unsigned int maxlen)
> +{
> + const struct inlinecrypt_ctx *ctx = ti->private;
> + unsigned int sz = 0;
> + int num_feature_args = 0;
> +
> + switch (type) {
> + case STATUSTYPE_INFO:
> + case STATUSTYPE_IMA:
> + result[0] = '\0';
> + break;
> +
> + case STATUSTYPE_TABLE:
> + /*
> + * Warning: like dm-crypt, dm-inlinecrypt includes the key in
> + * the returned table. Userspace is responsible for redacting
> + * the key when needed.
> + */
> + DMEMIT("%s %*phN %llu %s %llu", ctx->cipher_string,
> + ctx->key.size, ctx->key.bytes, ctx->iv_offset,
> + ctx->dev->name, ctx->start);
> + num_feature_args += !!ti->num_discard_bios;
> + if (ctx->sector_size != SECTOR_SIZE)
> + num_feature_args += 2;
> + if (num_feature_args != 0) {
> + DMEMIT(" %d", num_feature_args);
> + if (ti->num_discard_bios)
> + DMEMIT(" allow_discards");
> + if (ctx->sector_size != SECTOR_SIZE) {
> + DMEMIT(" sector_size:%u", ctx->sector_size);
> + DMEMIT(" iv_large_sectors");
> + }
> + }
> + break;
> + }
> +}
> +
> +static int inlinecrypt_prepare_ioctl(struct dm_target *ti,
> + struct block_device **bdev, unsigned int cmd,
> + unsigned long arg, bool *forward)
> +{
> + const struct inlinecrypt_ctx *ctx = ti->private;
> + const struct dm_dev *dev = ctx->dev;
> +
> + *bdev = dev->bdev;
> +
> + /* Only pass ioctls through if the device sizes match exactly. */
> + return ctx->start != 0 || ti->len != bdev_nr_sectors(dev->bdev);
> +}
> +
> +static int inlinecrypt_iterate_devices(struct dm_target *ti,
> + iterate_devices_callout_fn fn,
> + void *data)
> +{
> + const struct inlinecrypt_ctx *ctx = ti->private;
> +
> + return fn(ti, ctx->dev, ctx->start, ti->len, data);
> +}
> +
> +#ifdef CONFIG_BLK_DEV_ZONED
> +static int inlinecrypt_report_zones(struct dm_target *ti,
> + struct dm_report_zones_args *args,
> + unsigned int nr_zones)
> +{
> + const struct inlinecrypt_ctx *ctx = ti->private;
> +
> + return dm_report_zones(ctx->dev->bdev, ctx->start,
> + ctx->start + dm_target_offset(ti, args->next_sector),
> + args, nr_zones);
> +}
> +#else
> +#define inlinecrypt_report_zones NULL
> +#endif
> +
> +static void inlinecrypt_io_hints(struct dm_target *ti,
> + struct queue_limits *limits)
> +{
> + const struct inlinecrypt_ctx *ctx = ti->private;
> + const unsigned int sector_size = ctx->sector_size;
> +
> + limits->logical_block_size =
> + max_t(unsigned int, limits->logical_block_size, sector_size);
> + limits->physical_block_size =
> + max_t(unsigned int, limits->physical_block_size, sector_size);
> + limits->io_min = max_t(unsigned int, limits->io_min, sector_size);
> + limits->dma_alignment = limits->logical_block_size - 1;
> +}
> +
> +static struct target_type inlinecrypt_target = {
> + .name = "inlinecrypt",
> + .version = {1, 0, 0},
> + /*
> + * Do not set DM_TARGET_PASSES_CRYPTO, since dm-inlinecrypt consumes the
> + * crypto capability itself.
> + */
> + .features = DM_TARGET_ZONED_HM,
> + .module = THIS_MODULE,
> + .ctr = inlinecrypt_ctr,
> + .dtr = inlinecrypt_dtr,
> + .map = inlinecrypt_map,
> + .status = inlinecrypt_status,
> + .prepare_ioctl = inlinecrypt_prepare_ioctl,
> + .iterate_devices = inlinecrypt_iterate_devices,
> + .report_zones = inlinecrypt_report_zones,
> + .io_hints = inlinecrypt_io_hints,
> +};
> +
> +module_dm(inlinecrypt);
> +
> +MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
> +MODULE_AUTHOR("Linlin Zhang <linlin.zhang@oss.qualcomm.com>");
> +MODULE_DESCRIPTION(DM_NAME " target for inline encryption");
> +MODULE_LICENSE("GPL");
> --
> 2.34.1
>
* Re: [PATCH v2 2/3] dm-inlinecrypt: add target for inline block device encryption
2026-04-10 13:40 ` [PATCH v2 2/3] dm-inlinecrypt: add target for inline block device encryption Linlin Zhang
2026-04-27 1:19 ` Benjamin Marzinski
@ 2026-04-27 5:23 ` Benjamin Marzinski
1 sibling, 0 replies; 9+ messages in thread
From: Benjamin Marzinski @ 2026-04-27 5:23 UTC (permalink / raw)
To: Linlin Zhang
Cc: linux-block, ebiggers, mpatocka, gmazyland, linux-kernel,
adrianvovk, dm-devel, quic_mdalam, israelr, hch, axboe
On Fri, Apr 10, 2026 at 06:40:30AM -0700, Linlin Zhang wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> Add a new device-mapper target "dm-inlinecrypt" that is similar to
> dm-crypt but uses the blk-crypto API instead of the regular crypto API.
> This allows it to take advantage of inline encryption hardware such as
> that commonly built into UFS host controllers.
>
> The table syntax matches dm-crypt's, but for now only a stripped-down
> set of parameters is supported. For example, for now AES-256-XTS is the
> only supported cipher.
>
> dm-inlinecrypt is based on Android's dm-default-key with the
> controversial passthrough support removed. Note that due to the removal
> of passthrough support, use of dm-inlinecrypt in combination with
> fscrypt causes double encryption of file contents (similar to dm-crypt +
> fscrypt), with the fscrypt layer not being able to use the inline
> encryption hardware. This makes dm-inlinecrypt unusable on systems such
> as Android that use fscrypt and where a more optimized approach is
> needed. It is however suitable as a replacement for dm-crypt.
>
> dm-inlinecrypt supports both keyring keys and hex keys; the former avoids
> exposing the key in the dm-table message. Similar to dm-default-key in
> Android, it will fall back to software block crypto when the inline
> crypto hardware cannot support the expected cipher.
>
> Test:
> dmsetup create inlinecrypt_logon --table "0 `blockdev --getsz $1` \
> inlinecrypt aes-xts-plain64 :64:logon:fde:dminlinecrypt_test_key 0 $1 0"
>
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> Signed-off-by: Linlin Zhang <linlin.zhang@oss.qualcomm.com>
> ---
> drivers/md/Kconfig | 10 +
> drivers/md/Makefile | 1 +
> drivers/md/dm-inlinecrypt.c | 559 ++++++++++++++++++++++++++++++++++++
> 3 files changed, 570 insertions(+)
> create mode 100644 drivers/md/dm-inlinecrypt.c
<snip>
> diff --git a/drivers/md/dm-inlinecrypt.c b/drivers/md/dm-inlinecrypt.c
> new file mode 100644
> index 000000000000..b6e98fdf8af1
> --- /dev/null
> +++ b/drivers/md/dm-inlinecrypt.c
<snip>
> +static int inlinecrypt_map(struct dm_target *ti, struct bio *bio)
> +{
> + const struct inlinecrypt_ctx *ctx = ti->private;
> + sector_t sector_in_target;
> + u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = {};
> +
> + bio_set_dev(bio, ctx->dev->bdev);
> +
> + /*
> + * If the bio is a device-level request which doesn't target a specific
> + * sector, there's nothing more to do.
> + */
> + if (bio_sectors(bio) == 0)
> + return DM_MAPIO_REMAPPED;
> +
> + /*
> + * The bio should never have an encryption context already, since
> + * dm-inlinecrypt doesn't pass through any inline encryption
> + * capabilities to the layer above it.
> + */
> + if (WARN_ON_ONCE(bio_has_crypt_ctx(bio)))
> + return DM_MAPIO_KILL;
> +
> + /* Map the bio's sector to the underlying device. (512-byte sectors) */
> + sector_in_target = dm_target_offset(ti, bio->bi_iter.bi_sector);
> + bio->bi_iter.bi_sector = ctx->start + sector_in_target;
> + /*
> + * If the bio doesn't have any data (e.g. if it's a DISCARD request),
> + * there's nothing more to do.
> + */
> + if (!bio_has_data(bio))
> + return DM_MAPIO_REMAPPED;
> +
> + /* Calculate the DUN and enforce data-unit (crypto sector) alignment. */
> + dun[0] = ctx->iv_offset + sector_in_target; /* 512-byte sectors */
> + if (dun[0] & ((ctx->sector_size >> SECTOR_SHIFT) - 1))
> + return DM_MAPIO_KILL;
> + dun[0] >>= ctx->sector_bits - SECTOR_SHIFT; /* crypto sectors */
> +
> + /*
> + * This check isn't necessary as we should have calculated max_dun
> + * correctly, but be safe.
> + */
> + if (WARN_ON_ONCE(dun[0] > ctx->max_dun))
> + return DM_MAPIO_KILL;
> +
> + bio_crypt_set_ctx(bio, &ctx->key, dun, GFP_NOIO);
> +
> + /*
> + * Since we've added an encryption context to the bio and
> + * blk-crypto-fallback may be needed to process it, it's necessary to
> + * use the fallback-aware bio submission code rather than
> + * unconditionally returning DM_MAPIO_REMAPPED.
> + *
> + * To get the correct accounting for a dm target in the case where
> + * __blk_crypto_submit_bio() doesn't take ownership of the bio (returns
> + * true), call __blk_crypto_submit_bio() directly and return
> + * DM_MAPIO_REMAPPED in that case, rather than relying on
> + * blk_crypto_submit_bio() which calls submit_bio() in that case.
> + */
> + if (__blk_crypto_submit_bio(bio))
This will still double account for fallback writes (which call
submit_bio() on the encrypted bios, and return DM_MAPIO_SUBMITTED here).
-Ben
> + return DM_MAPIO_REMAPPED;
> + return DM_MAPIO_SUBMITTED;
> +}
* Re: [PATCH v2 2/3] dm-inlinecrypt: add target for inline block device encryption
2026-04-27 1:19 ` Benjamin Marzinski
@ 2026-04-27 12:20 ` Linlin Zhang
0 siblings, 0 replies; 9+ messages in thread
From: Linlin Zhang @ 2026-04-27 12:20 UTC (permalink / raw)
To: Benjamin Marzinski
Cc: linux-block, ebiggers, mpatocka, gmazyland, linux-kernel,
adrianvovk, dm-devel, quic_mdalam, israelr, hch, axboe
On 4/27/2026 9:19 AM, Benjamin Marzinski wrote:
> On Fri, Apr 10, 2026 at 06:40:30AM -0700, Linlin Zhang wrote:
>> From: Eric Biggers <ebiggers@google.com>
>>
>> Add a new device-mapper target "dm-inlinecrypt" that is similar to
>> dm-crypt but uses the blk-crypto API instead of the regular crypto API.
>> This allows it to take advantage of inline encryption hardware such as
>> that commonly built into UFS host controllers.
>>
>> The table syntax matches dm-crypt's, but for now only a stripped-down
>> set of parameters is supported. For example, for now AES-256-XTS is the
>> only supported cipher.
>>
>> dm-inlinecrypt is based on Android's dm-default-key with the
>> controversial passthrough support removed. Note that due to the removal
>> of passthrough support, use of dm-inlinecrypt in combination with
>> fscrypt causes double encryption of file contents (similar to dm-crypt +
>> fscrypt), with the fscrypt layer not being able to use the inline
>> encryption hardware. This makes dm-inlinecrypt unusable on systems such
>> as Android that use fscrypt and where a more optimized approach is
>> needed. It is however suitable as a replacement for dm-crypt.
>>
>> dm-inlinecrypt supports both keyring keys and hex keys; the former avoids
>> exposing the key in the dm-table message. Similar to dm-default-key in
>> Android, it will fall back to software block crypto when the inline
>> crypto hardware cannot support the expected cipher.
>>
>> Test:
>> dmsetup create inlinecrypt_logon --table "0 `blockdev --getsz $1` \
>> inlinecrypt aes-xts-plain64 :64:logon:fde:dminlinecrypt_test_key 0 $1 0"
>>
>> Signed-off-by: Eric Biggers <ebiggers@google.com>
>> Signed-off-by: Linlin Zhang <linlin.zhang@oss.qualcomm.com>
>> ---
>> drivers/md/Kconfig | 10 +
>> drivers/md/Makefile | 1 +
>> drivers/md/dm-inlinecrypt.c | 559 ++++++++++++++++++++++++++++++++++++
>> 3 files changed, 570 insertions(+)
>> create mode 100644 drivers/md/dm-inlinecrypt.c
>>
>> diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
>> index c58a9a8ea54e..aa541cc22ecc 100644
>> --- a/drivers/md/Kconfig
>> +++ b/drivers/md/Kconfig
>> @@ -313,6 +313,16 @@ config DM_CRYPT
>>
>> If unsure, say N.
>>
>> +config DM_INLINECRYPT
>> + tristate "Inline encryption target support"
>> + depends on BLK_DEV_DM
>> + depends on BLK_INLINE_ENCRYPTION
>> + help
>> + This device-mapper target is similar to dm-crypt, but it uses the
>> + blk-crypto API instead of the regular crypto API. This allows it to
>> + take advantage of inline encryption hardware such as that commonly
>> + built into UFS host controllers.
>> +
>> config DM_SNAPSHOT
>> tristate "Snapshot target"
>> depends on BLK_DEV_DM
>> diff --git a/drivers/md/Makefile b/drivers/md/Makefile
>> index c338cc6fbe2e..517d1f7d8288 100644
>> --- a/drivers/md/Makefile
>> +++ b/drivers/md/Makefile
>> @@ -55,6 +55,7 @@ obj-$(CONFIG_DM_UNSTRIPED) += dm-unstripe.o
>> obj-$(CONFIG_DM_BUFIO) += dm-bufio.o
>> obj-$(CONFIG_DM_BIO_PRISON) += dm-bio-prison.o
>> obj-$(CONFIG_DM_CRYPT) += dm-crypt.o
>> +obj-$(CONFIG_DM_INLINECRYPT) += dm-inlinecrypt.o
>> obj-$(CONFIG_DM_DELAY) += dm-delay.o
>> obj-$(CONFIG_DM_DUST) += dm-dust.o
>> obj-$(CONFIG_DM_FLAKEY) += dm-flakey.o
>> diff --git a/drivers/md/dm-inlinecrypt.c b/drivers/md/dm-inlinecrypt.c
>> new file mode 100644
>> index 000000000000..b6e98fdf8af1
>> --- /dev/null
>> +++ b/drivers/md/dm-inlinecrypt.c
>> @@ -0,0 +1,559 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * Copyright 2024 Google LLC
>> + */
>> +
>> +#include <linux/blk-crypto.h>
>> +#include <linux/ctype.h>
>> +#include <linux/device-mapper.h>
>> +#include <linux/hex.h>
>> +#include <linux/module.h>
>> +#include <keys/user-type.h>
>> +
>> +#define DM_MSG_PREFIX "inlinecrypt"
>> +
>> +static const struct dm_inlinecrypt_cipher {
>> + const char *name;
>> + enum blk_crypto_mode_num mode_num;
>> +} dm_inlinecrypt_ciphers[] = {
>> + {
>> + .name = "aes-xts-plain64",
>> + .mode_num = BLK_ENCRYPTION_MODE_AES_256_XTS,
>> + },
>> +};
>> +
>> +/**
>> + * struct inlinecrypt_ctx - private data of an inlinecrypt target
>> + * @dev: the underlying device
>> + * @start: starting sector of the range of @dev which this target actually maps.
>> + * For this purpose a "sector" is 512 bytes.
>> + * @cipher_string: the name of the encryption algorithm being used
>> + * @iv_offset: starting offset for IVs. IVs are generated as if the target were
>> + * preceded by @iv_offset 512-byte sectors.
>> + * @sector_size: crypto sector size in bytes (usually 4096)
>> + * @sector_bits: log2(sector_size)
>> + * @key: the encryption key to use
>> + * @max_dun: the maximum DUN that may be used (computed from other params)
>> + */
>> +struct inlinecrypt_ctx {
>> + struct dm_dev *dev;
>> + sector_t start;
>> + const char *cipher_string;
>> + unsigned int key_size;
>> + u64 iv_offset;
>> + unsigned int sector_size;
>> + unsigned int sector_bits;
>> + struct blk_crypto_key key;
>> + u64 max_dun;
>> +};
>> +
>> +static const struct dm_inlinecrypt_cipher *
>> +lookup_cipher(const char *cipher_string)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < ARRAY_SIZE(dm_inlinecrypt_ciphers); i++) {
>> + if (strcmp(cipher_string, dm_inlinecrypt_ciphers[i].name) == 0)
>> + return &dm_inlinecrypt_ciphers[i];
>> + }
>> + return NULL;
>> +}
>> +
>> +static void inlinecrypt_dtr(struct dm_target *ti)
>> +{
>> + struct inlinecrypt_ctx *ctx = ti->private;
>> +
>> + if (ctx->dev) {
>> + if (ctx->key.size)
>> + blk_crypto_evict_key(ctx->dev->bdev, &ctx->key);
>> + dm_put_device(ti, ctx->dev);
>> + }
>> + kfree_sensitive(ctx->cipher_string);
>> + kfree_sensitive(ctx);
>> +}
>> +
>> +static bool contains_whitespace(const char *str)
>> +{
>> + while (*str)
>> + if (isspace(*str++))
>> + return true;
>> + return false;
>> +}
>> +
>> +static int set_key_user(struct key *key, char *bin_key,
>> + const unsigned int bin_key_size)
>> +{
>> + const struct user_key_payload *ukp;
>> +
>> + ukp = user_key_payload_locked(key);
>> + if (!ukp)
>> + return -EKEYREVOKED;
>> +
>> + if (bin_key_size != ukp->datalen)
>> + return -EINVAL;
>> +
>> + memcpy(bin_key, ukp->data, bin_key_size);
>> +
>> + return 0;
>> +}
>> +
>> +static int inlinecrypt_get_keyring_key(const char *key_string, u8 *bin_key,
>> + const unsigned int bin_key_size)
>> +{
>> + char *key_desc;
>> + int ret;
>> + struct key_type *type;
>> + struct key *key;
>
> There's nothing forcing CONFIG_KEYS to be set when CONFIG_DM_INLINECRYPT
> is, and without it, struct key won't be defined and this won't compile.
Thanks for your review!
ACK. I'll add a 'depends on KEYS' to the Kconfig entry for
CONFIG_DM_INLINECRYPT and a corresponding '#ifdef CONFIG_KEYS' guard in
this code.
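For reference, a sketch of what the Kconfig side of that fix could look
like (one possible shape, assuming a hard dependency on the keyrings
subsystem is acceptable):

```
config DM_INLINECRYPT
	tristate "Inline encryption target support"
	depends on BLK_DEV_DM
	depends on BLK_INLINE_ENCRYPTION
	depends on KEYS
```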
>
>> + int (*set_key)(struct key *key, char *bin_key,
>> + const unsigned int bin_key_size);
>> +
>> + /*
>> + * Reject key_string with whitespace. dm core currently lacks code for
>> + * proper whitespace escaping in arguments on DM_TABLE_STATUS path.
>> + */
>> + if (contains_whitespace(key_string)) {
>> + DMERR("whitespace chars not allowed in key string");
>> + return -EINVAL;
>> + }
>> +
>> + /* look for next ':' separating key_type from key_description */
>> + key_desc = strchr(key_string, ':');
>> + if (!key_desc || key_desc == key_string || !strlen(key_desc + 1))
>> + return -EINVAL;
>> +
>> + if (!strncmp(key_string, "logon:", key_desc - key_string + 1)) {
>> + type = &key_type_logon;
>> + set_key = set_key_user;
>> + } else {
>> + return -EINVAL;
>> + }
>> +
>> + key = request_key(type, key_desc + 1, NULL);
>> + if (IS_ERR(key))
>> + return PTR_ERR(key);
>> +
>> + down_read(&key->sem);
>> +
>> + ret = set_key(key, (char *)bin_key, bin_key_size);
>> + if (ret < 0) {
>
> This does the same commands regardless of whether it takes this branch.
ACK. I'll remove the redundant 'if' block here.
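To illustrate the resulting shape (a userspace sketch, not the actual
patch: the lock and the key reference are modeled as plain counters, and
set_key_ret stands in for set_key()'s return value), cleanup then runs
exactly once on a single exit path, whether set_key() succeeded or failed:

```c
static int sem_held;		/* models key->sem read-lock depth */
static int key_refs = 1;	/* models the reference from request_key() */

static void down_read_stub(void) { sem_held++; }
static void up_read_stub(void)   { sem_held--; }
static void key_put_stub(void)   { key_refs--; }

/* With the redundant branch removed, up_read()/key_put() are no longer
 * duplicated: the same cleanup runs for both success and failure. */
static int keyring_key_tail(int set_key_ret)
{
	int ret;

	down_read_stub();
	ret = set_key_ret;	/* stands in for set_key(key, ...) */
	up_read_stub();
	key_put_stub();
	return ret;
}
```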
>
>> + up_read(&key->sem);
>> + key_put(key);
>> + return ret;
>> + }
>> +
>> + up_read(&key->sem);
>> + key_put(key);
>> +
>> + return ret;
>> +}
>> +
>> +static int inlinecrypt_get_key(const char *key_string,
>> + u8 key[BLK_CRYPTO_MAX_ANY_KEY_SIZE],
>> + const unsigned int key_size)
>> +{
>> + int ret = 0;
>> +
>> + /* ':' means the key is in kernel keyring, short-circuit normal key processing */
>> + if (key_string[0] == ':') {
>> + if (key_size > BLK_CRYPTO_MAX_ANY_KEY_SIZE) {
>> + DMERR("Invalid keysize");
>> + return -EINVAL;
>> + }
>> + /* key string should be :<logon|user>:<key_desc> */
>> + ret = inlinecrypt_get_keyring_key(key_string + 1, key, key_size);
>> + goto out;
>> + }
>> +
>> + if (key_size > 2 * BLK_CRYPTO_MAX_ANY_KEY_SIZE
>
> get_key_size() returns the size of the binary key in this case, so
> shouldn't this check for "key_size > BLK_CRYPTO_MAX_ANY_KEY_SIZE", and
> it seems like the check if the key_size is odd would make more sense in
> get_key_size().
Thanks for your insight!
ACK. I'll move the odd-key_size check into get_key_size() and keep only
the "key_size > BLK_CRYPTO_MAX_ANY_KEY_SIZE" check here for this case.
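As a userspace sketch of the hex-key half of that rework (illustrative
names, not the patch itself): the binary key size is half the hex string
length, and an odd-length string is rejected in the parser rather than in
inlinecrypt_get_key():

```c
#include <string.h>

/* Returns the binary key size for a hex key string, or -1 (standing in
 * for -EINVAL) if the string is empty or has an odd number of digits. */
static int hex_key_size(const char *key_string)
{
	size_t len = strlen(key_string);

	if (len == 0 || len % 2)
		return -1;
	return (int)(len / 2);
}
```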
>
>> + || key_size % 2
>> + || !key_size) {
>> + DMERR("Invalid keysize");
>> + return -EINVAL;
>> + }
>> + if (hex2bin(key, key_string, key_size) != 0)
>> + ret = -EINVAL;
>> +
>> +out:
>> + return ret;
>> +}
>> +
>> +static int get_key_size(char **key_string)
>> +{
>> + char *colon, dummy;
>> + int ret;
>> +
>> + if (*key_string[0] != ':')
>> + return strlen(*key_string) >> 1;
>> +
>> + /* look for next ':' in key string */
>> + colon = strpbrk(*key_string + 1, ":");
>> + if (!colon)
>> + return -EINVAL;
>> +
>> + if (sscanf(*key_string + 1, "%u%c", &ret, &dummy) != 2 || dummy != ':')
>> + return -EINVAL;
>> +
>> + /* remaining key string should be :<logon|user>:<key_desc> */
>> + *key_string = colon;
>> +
>> + return ret;
>> +}
>> +
>> +static int inlinecrypt_ctr_optional(struct dm_target *ti,
>> + unsigned int argc, char **argv)
>> +{
>> + struct inlinecrypt_ctx *ctx = ti->private;
>> + struct dm_arg_set as;
>> + static const struct dm_arg _args[] = {
>> + {0, 3, "Invalid number of feature args"},
>> + };
>> + unsigned int opt_params;
>> + const char *opt_string;
>> + bool iv_large_sectors = false;
>> + char dummy;
>> + int err;
>> +
>> + as.argc = argc;
>> + as.argv = argv;
>> +
>> + err = dm_read_arg_group(_args, &as, &opt_params, &ti->error);
>> + if (err)
>> + return err;
>> +
>> + while (opt_params--) {
>> + opt_string = dm_shift_arg(&as);
>> + if (!opt_string) {
>> + ti->error = "Not enough feature arguments";
>> + return -EINVAL;
>> + }
>> + if (!strcmp(opt_string, "allow_discards")) {
>> + ti->num_discard_bios = 1;
>> + } else if (sscanf(opt_string, "sector_size:%u%c",
>> + &ctx->sector_size, &dummy) == 1) {
>> + if (ctx->sector_size < SECTOR_SIZE ||
>> + ctx->sector_size > 4096 ||
>> + !is_power_of_2(ctx->sector_size)) {
>> + ti->error = "Invalid sector_size";
>> + return -EINVAL;
>> + }
>> + } else if (!strcmp(opt_string, "iv_large_sectors")) {
>> + iv_large_sectors = true;
>> + } else {
>> + ti->error = "Invalid feature arguments";
>> + return -EINVAL;
>> + }
>> + }
>> +
>> + /* dm-inlinecrypt doesn't implement iv_large_sectors=false. */
>> + if (ctx->sector_size != SECTOR_SIZE && !iv_large_sectors) {
>> + ti->error = "iv_large_sectors must be specified";
>
> Since setting sector_size forces setting iv_large_sectors, does it
> really need to be a separate parameter? Can't it just be implied by
> setting a non-512 sector_size. Is this here to futureproof the table
> line?
Thanks for your question!
Although a non-512 sector_size effectively requires large-sector IV semantics
for dm-inlinecrypt, iv_large_sectors is intentionally kept as an explicit table
parameter.
iv_large_sectors affects IV generation and the on-disk encryption format.
Inferring it from sector_size would silently change encryption semantics and
could break compatibility with existing data. Requiring it to be explicitly
specified forces userspace to consciously opt into the correct IV behavior,
rather than having it implied or overridden internally.
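The effect on the on-disk format can be seen in a userspace sketch of the
DUN computation from inlinecrypt_map() (illustrative only): with
iv_large_sectors semantics the IV counts crypto sectors, so the same
512-byte sector gets a different DUN depending on sector_size:

```c
#include <stdint.h>

#define SECTOR_SHIFT 9	/* 512-byte sectors, as in the kernel */

/* Mirrors the DUN math in inlinecrypt_map(): iv_offset and
 * sector_in_target are in 512-byte sectors; sector_bits is
 * log2(crypto sector size in bytes). */
static uint64_t dun_for(uint64_t iv_offset, uint64_t sector_in_target,
			unsigned int sector_bits)
{
	uint64_t dun = iv_offset + sector_in_target;

	return dun >> (sector_bits - SECTOR_SHIFT);
}
```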
>
>> + return -EINVAL;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +/*
>> + * Construct an inlinecrypt mapping:
>> + * <cipher> [<key>|:<key_size>:<logon>:<key_description>] <iv_offset> <dev_path> <start>
>> + *
>> + * This syntax matches dm-crypt's, but the set of supported functionality has
>> + * been stripped down.
>> + */
>> +static int inlinecrypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
>> +{
>> + struct inlinecrypt_ctx *ctx;
>> + const struct dm_inlinecrypt_cipher *cipher;
>> + u8 raw_key[BLK_CRYPTO_MAX_ANY_KEY_SIZE];
>> + unsigned int dun_bytes;
>> + unsigned long long tmpll;
>> + char dummy;
>> + int err;
>> +
>> + if (argc < 5) {
>> + ti->error = "Not enough arguments";
>> + return -EINVAL;
>> + }
>> +
>> + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
>> + if (!ctx) {
>> + ti->error = "Out of memory";
>> + return -ENOMEM;
>> + }
>> + ti->private = ctx;
>> +
>> + /* <cipher> */
>> + ctx->cipher_string = kstrdup(argv[0], GFP_KERNEL);
>> + if (!ctx->cipher_string) {
>> + ti->error = "Out of memory";
>> + err = -ENOMEM;
>> + goto bad;
>> + }
>> + cipher = lookup_cipher(ctx->cipher_string);
>> + if (!cipher) {
>> + ti->error = "Unsupported cipher";
>> + err = -EINVAL;
>> + goto bad;
>> + }
>> +
>> + /* <key> */
>> + ctx->key_size = get_key_size(&argv[1]);
>> + if (ctx->key_size < 0) {
>> + ti->error = "Cannot parse key size";
>> + return -EINVAL;
>> + }
>> + err = inlinecrypt_get_key(argv[1], raw_key, ctx->key_size);
>> + if (err) {
>> + ti->error = "Malformed key string";
>> + goto bad;
>> + }
>> +
>> + /* <iv_offset> */
>> + if (sscanf(argv[2], "%llu%c", &ctx->iv_offset, &dummy) != 1) {
>> + ti->error = "Invalid iv_offset sector";
>> + err = -EINVAL;
>> + goto bad;
>> + }
>> +
>> + /* <dev_path> */
>> + err = dm_get_device(ti, argv[3], dm_table_get_mode(ti->table),
>> + &ctx->dev);
>> + if (err) {
>> + ti->error = "Device lookup failed";
>> + goto bad;
>> + }
>> +
>> + /* <start> */
>> + if (sscanf(argv[4], "%llu%c", &tmpll, &dummy) != 1 ||
>> + tmpll != (sector_t)tmpll) {
>> + ti->error = "Invalid start sector";
>> + err = -EINVAL;
>> + goto bad;
>> + }
>> + ctx->start = tmpll;
>> +
>> + /* optional arguments */
>> + ctx->sector_size = SECTOR_SIZE;
>> + if (argc > 5) {
>> + err = inlinecrypt_ctr_optional(ti, argc - 5, &argv[5]);
>> + if (err)
>> + goto bad;
>> + }
>> + ctx->sector_bits = ilog2(ctx->sector_size);
>> + if (ti->len & ((ctx->sector_size >> SECTOR_SHIFT) - 1)) {
>> + ti->error = "Device size is not a multiple of sector_size";
>> + err = -EINVAL;
>> + goto bad;
>> + }
>> +
>> + ctx->max_dun = (ctx->iv_offset + ti->len - 1) >>
>> + (ctx->sector_bits - SECTOR_SHIFT);
>> + dun_bytes = DIV_ROUND_UP(fls64(ctx->max_dun), 8);
>> +
>> + err = blk_crypto_init_key(&ctx->key, raw_key, ctx->key_size,
>> + BLK_CRYPTO_KEY_TYPE_RAW,
>> + cipher->mode_num, dun_bytes,
>> + ctx->sector_size);
>> + if (err) {
>> + ti->error = "Error initializing blk-crypto key";
>> + goto bad;
>> + }
>> +
>> + err = blk_crypto_start_using_key(ctx->dev->bdev, &ctx->key);
>> + if (err) {
>> + ti->error = "Error starting to use blk-crypto";
>> + goto bad;
>> + }
>> +
>> + ti->num_flush_bios = 1;
>> +
>> + err = 0;
>> + goto out;
>> +
>> +bad:
>> + inlinecrypt_dtr(ti);
>> +out:
>> + memzero_explicit(raw_key, sizeof(raw_key));
>> + return err;
>> +}
>> +
>> +static int inlinecrypt_map(struct dm_target *ti, struct bio *bio)
>> +{
>> + const struct inlinecrypt_ctx *ctx = ti->private;
>> + sector_t sector_in_target;
>> + u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE] = {};
>> +
>> + bio_set_dev(bio, ctx->dev->bdev);
>> +
>> + /*
>> + * If the bio is a device-level request which doesn't target a specific
>> + * sector, there's nothing more to do.
>> + */
>> + if (bio_sectors(bio) == 0)
>> + return DM_MAPIO_REMAPPED;
>> +
>> + /*
>> + * The bio should never have an encryption context already, since
>> + * dm-inlinecrypt doesn't pass through any inline encryption
>> + * capabilities to the layer above it.
>> + */
>> + if (WARN_ON_ONCE(bio_has_crypt_ctx(bio)))
>> + return DM_MAPIO_KILL;
>> +
>> + /* Map the bio's sector to the underlying device. (512-byte sectors) */
>> + sector_in_target = dm_target_offset(ti, bio->bi_iter.bi_sector);
>> + bio->bi_iter.bi_sector = ctx->start + sector_in_target;
>> + /*
>> + * If the bio doesn't have any data (e.g. if it's a DISCARD request),
>> + * there's nothing more to do.
>> + */
>> + if (!bio_has_data(bio))
>> + return DM_MAPIO_REMAPPED;
>> +
>> + /* Calculate the DUN and enforce data-unit (crypto sector) alignment. */
>> + dun[0] = ctx->iv_offset + sector_in_target; /* 512-byte sectors */
>> + if (dun[0] & ((ctx->sector_size >> SECTOR_SHIFT) - 1))
>> + return DM_MAPIO_KILL;
>
> If ctx->iv_offset is not a multiple of ctx->sector_size, this will
> always fail. ctx->iv_offset should probably get validated in
> inlinecrypt_ctr()
ACK. Yes, this assumes iv_offset is aligned to the crypto sector size when
large crypto sectors are used. That is a requirement of dm-inlinecrypt's
semantics, and an explicit check in inlinecrypt_ctr() would make this fail
earlier and more clearly.
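A userspace sketch of that constructor-time check (illustrative, mirroring
the alignment test in the map path):

```c
#include <stdint.h>
#include <stdbool.h>

#define SECTOR_SHIFT 9

/* iv_offset is in 512-byte sectors; sector_size is the crypto sector
 * size in bytes. With large crypto sectors, an unaligned iv_offset
 * would make every data bio fail the alignment test in the map path,
 * so it is better rejected at construction time. */
static bool iv_offset_aligned(uint64_t iv_offset, unsigned int sector_size)
{
	return (iv_offset & ((uint64_t)(sector_size >> SECTOR_SHIFT) - 1)) == 0;
}
```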
>
> -Ben
>
>> + dun[0] >>= ctx->sector_bits - SECTOR_SHIFT; /* crypto sectors */
>> +
>> + /*
>> + * This check isn't necessary as we should have calculated max_dun
>> + * correctly, but be safe.
>> + */
>> + if (WARN_ON_ONCE(dun[0] > ctx->max_dun))
>> + return DM_MAPIO_KILL;
>> +
>> + bio_crypt_set_ctx(bio, &ctx->key, dun, GFP_NOIO);
>> +
>> + /*
>> + * Since we've added an encryption context to the bio and
>> + * blk-crypto-fallback may be needed to process it, it's necessary to
>> + * use the fallback-aware bio submission code rather than
>> + * unconditionally returning DM_MAPIO_REMAPPED.
>> + *
>> + * To get the correct accounting for a dm target in the case where
>> + * __blk_crypto_submit_bio() doesn't take ownership of the bio (returns
>> + * true), call __blk_crypto_submit_bio() directly and return
>> + * DM_MAPIO_REMAPPED in that case, rather than relying on
>> + * blk_crypto_submit_bio() which calls submit_bio() in that case.
>> + */
>> + if (__blk_crypto_submit_bio(bio))
>> + return DM_MAPIO_REMAPPED;
>> + return DM_MAPIO_SUBMITTED;
>> +}
>> +
>> +static void inlinecrypt_status(struct dm_target *ti, status_type_t type,
>> + unsigned int status_flags, char *result,
>> + unsigned int maxlen)
>> +{
>> + const struct inlinecrypt_ctx *ctx = ti->private;
>> + unsigned int sz = 0;
>> + int num_feature_args = 0;
>> +
>> + switch (type) {
>> + case STATUSTYPE_INFO:
>> + case STATUSTYPE_IMA:
>> + result[0] = '\0';
>> + break;
>> +
>> + case STATUSTYPE_TABLE:
>> + /*
>> + * Warning: like dm-crypt, dm-inlinecrypt includes the key in
>> + * the returned table. Userspace is responsible for redacting
>> + * the key when needed.
>> + */
>> + DMEMIT("%s %*phN %llu %s %llu", ctx->cipher_string,
>> + ctx->key.size, ctx->key.bytes, ctx->iv_offset,
>> + ctx->dev->name, ctx->start);
>> + num_feature_args += !!ti->num_discard_bios;
>> + if (ctx->sector_size != SECTOR_SIZE)
>> + num_feature_args += 2;
>> + if (num_feature_args != 0) {
>> + DMEMIT(" %d", num_feature_args);
>> + if (ti->num_discard_bios)
>> + DMEMIT(" allow_discards");
>> + if (ctx->sector_size != SECTOR_SIZE) {
>> + DMEMIT(" sector_size:%u", ctx->sector_size);
>> + DMEMIT(" iv_large_sectors");
>> + }
>> + }
>> + break;
>> + }
>> +}
>> +
>> +static int inlinecrypt_prepare_ioctl(struct dm_target *ti,
>> + struct block_device **bdev, unsigned int cmd,
>> + unsigned long arg, bool *forward)
>> +{
>> + const struct inlinecrypt_ctx *ctx = ti->private;
>> + const struct dm_dev *dev = ctx->dev;
>> +
>> + *bdev = dev->bdev;
>> +
>> + /* Only pass ioctls through if the device sizes match exactly. */
>> + return ctx->start != 0 || ti->len != bdev_nr_sectors(dev->bdev);
>> +}
>> +
>> +static int inlinecrypt_iterate_devices(struct dm_target *ti,
>> + iterate_devices_callout_fn fn,
>> + void *data)
>> +{
>> + const struct inlinecrypt_ctx *ctx = ti->private;
>> +
>> + return fn(ti, ctx->dev, ctx->start, ti->len, data);
>> +}
>> +
>> +#ifdef CONFIG_BLK_DEV_ZONED
>> +static int inlinecrypt_report_zones(struct dm_target *ti,
>> + struct dm_report_zones_args *args,
>> + unsigned int nr_zones)
>> +{
>> + const struct inlinecrypt_ctx *ctx = ti->private;
>> +
>> + return dm_report_zones(ctx->dev->bdev, ctx->start,
>> + ctx->start + dm_target_offset(ti, args->next_sector),
>> + args, nr_zones);
>> +}
>> +#else
>> +#define inlinecrypt_report_zones NULL
>> +#endif
>> +
>> +static void inlinecrypt_io_hints(struct dm_target *ti,
>> + struct queue_limits *limits)
>> +{
>> + const struct inlinecrypt_ctx *ctx = ti->private;
>> + const unsigned int sector_size = ctx->sector_size;
>> +
>> + limits->logical_block_size =
>> + max_t(unsigned int, limits->logical_block_size, sector_size);
>> + limits->physical_block_size =
>> + max_t(unsigned int, limits->physical_block_size, sector_size);
>> + limits->io_min = max_t(unsigned int, limits->io_min, sector_size);
>> + limits->dma_alignment = limits->logical_block_size - 1;
>> +}
>> +
>> +static struct target_type inlinecrypt_target = {
>> + .name = "inlinecrypt",
>> + .version = {1, 0, 0},
>> + /*
>> + * Do not set DM_TARGET_PASSES_CRYPTO, since dm-inlinecrypt consumes the
>> + * crypto capability itself.
>> + */
>> + .features = DM_TARGET_ZONED_HM,
>> + .module = THIS_MODULE,
>> + .ctr = inlinecrypt_ctr,
>> + .dtr = inlinecrypt_dtr,
>> + .map = inlinecrypt_map,
>> + .status = inlinecrypt_status,
>> + .prepare_ioctl = inlinecrypt_prepare_ioctl,
>> + .iterate_devices = inlinecrypt_iterate_devices,
>> + .report_zones = inlinecrypt_report_zones,
>> + .io_hints = inlinecrypt_io_hints,
>> +};
>> +
>> +module_dm(inlinecrypt);
>> +
>> +MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
>> +MODULE_AUTHOR("Linlin Zhang <linlin.zhang@oss.qualcomm.com>");
>> +MODULE_DESCRIPTION(DM_NAME " target for inline encryption");
>> +MODULE_LICENSE("GPL");
>> --
>> 2.34.1
>>
>
Thread overview: 9+ messages
2026-04-10 13:40 [PATCH v2 0/3] dm-inlinecrypt: add target for inline block device encryption Linlin Zhang
2026-04-10 13:40 ` [PATCH v2 1/3] block: export blk-crypto symbols required by dm-inlinecrypt Linlin Zhang
2026-04-10 13:40 ` [PATCH v2 2/3] dm-inlinecrypt: add target for inline block device encryption Linlin Zhang
2026-04-27 1:19 ` Benjamin Marzinski
2026-04-27 12:20 ` Linlin Zhang
2026-04-27 5:23 ` Benjamin Marzinski
2026-04-10 13:40 ` [PATCH v2 3/3] dm: add documentation for dm-inlinecrypt target Linlin Zhang
2026-04-10 17:07 ` Milan Broz
2026-04-24 13:53 ` Linlin Zhang