linux-block.vger.kernel.org archive mirror
* [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys
@ 2024-12-13  4:19 Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 01/15] ufs: qcom: fix crypto key eviction Eric Biggers
                   ` (19 more replies)
  0 siblings, 20 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson

This patchset is based on next-20241212 and is also available in git via:

    git fetch https://git.kernel.org/pub/scm/fs/fscrypt/linux.git wrapped-keys-v10

This patchset adds support for hardware-wrapped inline encryption keys, a
security feature supported by some SoCs.  It adds the block and fscrypt
framework for the feature as well as support for it with UFS on Qualcomm SoCs.

This feature is described in full detail in the included Documentation changes.
But to summarize, hardware-wrapped keys are inline encryption keys that are
wrapped (encrypted) by a key internal to the hardware so that they can only be
unwrapped (decrypted) by the hardware.  Initially keys are wrapped with a
permanent hardware key, but during actual use they are re-wrapped with a
per-boot ephemeral key for improved security.  The hardware supports importing
keys as well as generating keys itself.
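
To make the intended flow concrete, here is a rough userspace sketch of the
key lifecycle.  It assumes the ioctl interface added later in this series
(patch 11); since that patch isn't quoted here, treat the ioctl names, struct
layouts, and size-update behavior below as illustrative rather than
authoritative:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/blk-crypto.h>   /* UAPI header added by patch 11 */

    /*
     * Illustrative only: have the hardware generate a long-term wrapped key,
     * then convert it into the ephemerally-wrapped form that is actually used
     * for inline encryption during this boot.
     */
    static int sketch_generate_and_prepare(int blkdev_fd,
                                           uint8_t *lt_key, uint64_t lt_key_bufsize,
                                           uint8_t *eph_key, uint64_t eph_key_bufsize)
    {
            struct blk_crypto_generate_key_arg gen = {
                    .lt_key_ptr = (uintptr_t)lt_key,
                    .lt_key_size = lt_key_bufsize,
            };
            struct blk_crypto_prepare_key_arg prep;

            /* Long-term wrapped key: generated by and bound to the SoC. */
            if (ioctl(blkdev_fd, BLKCRYPTOGENERATEKEY, &gen) != 0)
                    return -1;

            /* Ephemerally-wrapped key: only usable until the next boot. */
            prep = (struct blk_crypto_prepare_key_arg){
                    .lt_key_ptr = (uintptr_t)lt_key,
                    .lt_key_size = gen.lt_key_size, /* filled in by the kernel */
                    .eph_key_ptr = (uintptr_t)eph_key,
                    .eph_key_size = eph_key_bufsize,
            };
            return ioctl(blkdev_fd, BLKCRYPTOPREPAREKEY, &prep);
    }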

This differs from the existing support for hardware-wrapped keys in the kernel
crypto API (also called "hardware-bound keys" in some places) in the same way
that the crypto API differs from blk-crypto: the crypto API is for general
crypto operations, whereas blk-crypto is for inline storage encryption.

This feature has already been used downstream in Android for several years
(https://source.android.com/docs/security/features/encryption/hw-wrapped-keys).
On other platforms, userspace support will be provided via fscryptctl and
tests via xfstests (I have some old patches for this that need to be updated).

Maintainers, please consider merging the following preparatory patches for 6.14:

  - UFS / SCSI tree: patches 1-4
  - MMC tree: patches 5-7
  - Qualcomm / MSM tree: patch 8

Changed in v10:
  - Fixed bugs in qcom_scm_derive_sw_secret() and cqhci_crypto_init().
  - Added "ufs: qcom: fix crypto key eviction" and
    "mmc: sdhci-msm: fix crypto key eviction".
  - Split removing ufs_hba_variant_ops::program_key into its own patch.
  - Minor cleanups.
  - Added Tested-by.

Changed in v9 (relative to v7 patchset from Bartosz Golaszewski):
  - ufs-qcom and sdhci-msm now just initialize the blk_crypto_profile
    themselves, as ufs-exynos already does.  This avoids having to add all
    the host-specific hooks for wrapped key support to the MMC and UFS core
    drivers.
  - When passing the blk_crypto_key further down the stack, it now replaces
    parameters like the algorithm ID, to avoid creating two sources of truth.
  - The module parameter qcom_ice.use_wrapped_keys should work correctly now.
  - The fscrypt support no longer uses a policy flag to indicate when a file is
    protected by a HW-wrapped key, since it was already implied by the file's
    key identifier being that of a HW-wrapped key.  Originally there was an
    issue where raw and HW-wrapped keys could share key identifiers, but I had
    fixed that earlier by introducing a new HKDF context byte.
  - The term "standard keys" is no longer used; "raw keys" is now used
    consistently instead.  People tend to find "raw keys" more intuitive,
    and HW-wrapped keys could in principle be standardized too.
  - I've reordered the patchset to place preparatory patches that don't depend
    on the actual HW-wrapped key support first.

For older changelogs, see
https://lore.kernel.org/r/20241202-wrapped-keys-v7-0-67c3ca3f3282@linaro.org and
https://lore.kernel.org/r/20231104211259.17448-1-ebiggers@kernel.org

Eric Biggers (13):
  ufs: qcom: fix crypto key eviction
  ufs: crypto: add ufs_hba_from_crypto_profile()
  ufs: qcom: convert to use UFSHCD_QUIRK_CUSTOM_CRYPTO_PROFILE
  ufs: crypto: remove ufs_hba_variant_ops::program_key
  mmc: sdhci-msm: fix crypto key eviction
  mmc: crypto: add mmc_from_crypto_profile()
  mmc: sdhci-msm: convert to use custom crypto profile
  soc: qcom: ice: make qcom_ice_program_key() take struct blk_crypto_key
  blk-crypto: add basic hardware-wrapped key support
  blk-crypto: show supported key types in sysfs
  blk-crypto: add ioctls to create and prepare hardware-wrapped keys
  fscrypt: add support for hardware-wrapped keys
  ufs: qcom: add support for wrapped keys

Gaurav Kashyap (2):
  firmware: qcom: scm: add calls for wrapped key support
  soc: qcom: ice: add HWKM support to the ICE driver

 Documentation/ABI/stable/sysfs-block          |  18 +
 Documentation/block/inline-encryption.rst     | 251 +++++++++++-
 Documentation/filesystems/fscrypt.rst         | 201 +++++++--
 .../userspace-api/ioctl/ioctl-number.rst      |   2 +
 block/blk-crypto-fallback.c                   |   7 +-
 block/blk-crypto-internal.h                   |  10 +
 block/blk-crypto-profile.c                    | 103 +++++
 block/blk-crypto-sysfs.c                      |  35 ++
 block/blk-crypto.c                            | 196 ++++++++-
 block/ioctl.c                                 |   5 +
 drivers/firmware/qcom/qcom_scm.c              | 214 ++++++++++
 drivers/firmware/qcom/qcom_scm.h              |   4 +
 drivers/md/dm-table.c                         |   1 +
 drivers/mmc/host/cqhci-crypto.c               |  46 +--
 drivers/mmc/host/cqhci.h                      |   8 +-
 drivers/mmc/host/sdhci-msm.c                  | 101 +++--
 drivers/soc/qcom/ice.c                        | 383 +++++++++++++++++-
 drivers/ufs/core/ufshcd-crypto.c              |  33 +-
 drivers/ufs/host/ufs-exynos.c                 |   3 +-
 drivers/ufs/host/ufs-qcom.c                   | 136 +++++--
 fs/crypto/fscrypt_private.h                   |  75 +++-
 fs/crypto/hkdf.c                              |   4 +-
 fs/crypto/inline_crypt.c                      |  42 +-
 fs/crypto/keyring.c                           | 157 +++++--
 fs/crypto/keysetup.c                          |  63 ++-
 fs/crypto/keysetup_v1.c                       |   4 +-
 include/linux/blk-crypto-profile.h            |  73 ++++
 include/linux/blk-crypto.h                    |  73 +++-
 include/linux/firmware/qcom/qcom_scm.h        |   8 +
 include/linux/mmc/host.h                      |   8 +
 include/soc/qcom/ice.h                        |  34 +-
 include/uapi/linux/blk-crypto.h               |  44 ++
 include/uapi/linux/fs.h                       |   6 +-
 include/uapi/linux/fscrypt.h                  |   7 +-
 include/ufs/ufshcd.h                          |  11 +-
 35 files changed, 2092 insertions(+), 274 deletions(-)
 create mode 100644 include/uapi/linux/blk-crypto.h


base-commit: 3e42dc9229c5950e84b1ed705f94ed75ed208228
-- 
2.47.1



* [PATCH v10 01/15] ufs: qcom: fix crypto key eviction
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 02/15] ufs: crypto: add ufs_hba_from_crypto_profile() Eric Biggers
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, stable, Abel Vesa

From: Eric Biggers <ebiggers@google.com>

Commit 56541c7c4468 ("scsi: ufs: ufs-qcom: Switch to the new ICE API")
introduced an incorrect check of the algorithm ID into the key eviction
path, and thus qcom_ice_evict_key() is no longer ever called.  Fix it.

Fixes: 56541c7c4468 ("scsi: ufs: ufs-qcom: Switch to the new ICE API")
Cc: stable@vger.kernel.org
Cc: Abel Vesa <abel.vesa@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/ufs/host/ufs-qcom.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index 68040b2ab5f8..e33ae71245dd 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -153,27 +153,25 @@ static int ufs_qcom_ice_program_key(struct ufs_hba *hba,
 				    const union ufs_crypto_cfg_entry *cfg,
 				    int slot)
 {
 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
 	union ufs_crypto_cap_entry cap;
-	bool config_enable =
-		cfg->config_enable & UFS_CRYPTO_CONFIGURATION_ENABLE;
+
+	if (!(cfg->config_enable & UFS_CRYPTO_CONFIGURATION_ENABLE))
+		return qcom_ice_evict_key(host->ice, slot);
 
 	/* Only AES-256-XTS has been tested so far. */
 	cap = hba->crypto_cap_array[cfg->crypto_cap_idx];
 	if (cap.algorithm_id != UFS_CRYPTO_ALG_AES_XTS ||
 	    cap.key_size != UFS_CRYPTO_KEY_SIZE_256)
 		return -EOPNOTSUPP;
 
-	if (config_enable)
-		return qcom_ice_program_key(host->ice,
-					    QCOM_ICE_CRYPTO_ALG_AES_XTS,
-					    QCOM_ICE_CRYPTO_KEY_SIZE_256,
-					    cfg->crypto_key,
-					    cfg->data_unit_size, slot);
-	else
-		return qcom_ice_evict_key(host->ice, slot);
+	return qcom_ice_program_key(host->ice,
+				    QCOM_ICE_CRYPTO_ALG_AES_XTS,
+				    QCOM_ICE_CRYPTO_KEY_SIZE_256,
+				    cfg->crypto_key,
+				    cfg->data_unit_size, slot);
 }
 
 #else
 
 #define ufs_qcom_ice_program_key NULL
-- 
2.47.1



* [PATCH v10 02/15] ufs: crypto: add ufs_hba_from_crypto_profile()
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 01/15] ufs: qcom: fix crypto key eviction Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 03/15] ufs: qcom: convert to use UFSHCD_QUIRK_CUSTOM_CRYPTO_PROFILE Eric Biggers
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, Bartosz Golaszewski

From: Eric Biggers <ebiggers@google.com>

Add a helper function that encapsulates a container_of expression.  For
now there are two users but soon there will be more.

Tested-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> # sm8650
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/ufs/core/ufshcd-crypto.c | 6 ++----
 include/ufs/ufshcd.h             | 8 ++++++++
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/ufs/core/ufshcd-crypto.c b/drivers/ufs/core/ufshcd-crypto.c
index a714dad82cd1..0cb425ef618e 100644
--- a/drivers/ufs/core/ufshcd-crypto.c
+++ b/drivers/ufs/core/ufshcd-crypto.c
@@ -50,12 +50,11 @@ static int ufshcd_program_key(struct ufs_hba *hba,
 
 static int ufshcd_crypto_keyslot_program(struct blk_crypto_profile *profile,
 					 const struct blk_crypto_key *key,
 					 unsigned int slot)
 {
-	struct ufs_hba *hba =
-		container_of(profile, struct ufs_hba, crypto_profile);
+	struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
 	const union ufs_crypto_cap_entry *ccap_array = hba->crypto_cap_array;
 	const struct ufs_crypto_alg_entry *alg =
 			&ufs_crypto_algs[key->crypto_cfg.crypto_mode];
 	u8 data_unit_mask = key->crypto_cfg.data_unit_size / 512;
 	int i;
@@ -97,12 +96,11 @@ static int ufshcd_crypto_keyslot_program(struct blk_crypto_profile *profile,
 
 static int ufshcd_crypto_keyslot_evict(struct blk_crypto_profile *profile,
 				       const struct blk_crypto_key *key,
 				       unsigned int slot)
 {
-	struct ufs_hba *hba =
-		container_of(profile, struct ufs_hba, crypto_profile);
+	struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
 	/*
 	 * Clear the crypto cfg on the device. Clearing CFGE
 	 * might not be sufficient, so just clear the entire cfg.
 	 */
 	union ufs_crypto_cfg_entry cfg = {};
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index 4ff117fd27cd..55b81996b6e1 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -1211,10 +1211,18 @@ static inline size_t ufshcd_sg_entry_size(const struct ufs_hba *hba)
 
 #define ufshcd_set_sg_entry_size(hba, sg_entry_size)                   \
 	({ (void)(hba); BUILD_BUG_ON(sg_entry_size != sizeof(struct ufshcd_sg_entry)); })
 #endif
 
+#ifdef CONFIG_SCSI_UFS_CRYPTO
+static inline struct ufs_hba *
+ufs_hba_from_crypto_profile(struct blk_crypto_profile *profile)
+{
+	return container_of(profile, struct ufs_hba, crypto_profile);
+}
+#endif
+
 static inline size_t ufshcd_get_ucd_size(const struct ufs_hba *hba)
 {
 	return sizeof(struct utp_transfer_cmd_desc) + SG_ALL * ufshcd_sg_entry_size(hba);
 }
 
-- 
2.47.1



* [PATCH v10 03/15] ufs: qcom: convert to use UFSHCD_QUIRK_CUSTOM_CRYPTO_PROFILE
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 01/15] ufs: qcom: fix crypto key eviction Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 02/15] ufs: crypto: add ufs_hba_from_crypto_profile() Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 04/15] ufs: crypto: remove ufs_hba_variant_ops::program_key Eric Biggers
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, Bartosz Golaszewski

From: Eric Biggers <ebiggers@google.com>

By default the UFS core is responsible for initializing the
blk_crypto_profile, but Qualcomm platforms have their own way of
programming and evicting crypto keys.  So currently
ufs_hba_variant_ops::program_key is used to redirect control flow from
ufshcd_program_key().  This has worked until now, but it's a bit of a
hack, given that the key (and algorithm ID etc.) ends up being converted
from blk_crypto_key => ufs_crypto_cfg_entry => SCM call parameters,
where the intermediate ufs_crypto_cfg_entry step is unnecessary.  Taking
a similar approach with the upcoming wrapped key support, the
implementation of which is similarly platform-specific, would require
adding four new methods to ufs_hba_variant_ops, changing program_key to
take the struct blk_crypto_key, and adding a new UFSHCD_CAP_* flag to
indicate support for wrapped keys.

This patch takes a different approach.  It changes ufs-qcom to use the
existing UFSHCD_QUIRK_CUSTOM_CRYPTO_PROFILE which was recently added for
ufs-exynos.  This allows it to override the full blk_crypto_profile,
eliminating the need for the existing ufs_hba_variant_ops::program_key
and the hooks that would have been needed for wrapped key support.  It
does require a bit of duplicated code to read the crypto capability
registers, but it's worth it for the simpler design: ufs-qcom and
ufs-exynos now use the same method to customize the crypto profile, and
adding wrapped key support becomes much easier.

Tested-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> # sm8650
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/ufs/host/ufs-qcom.c | 91 +++++++++++++++++++++++++++++--------
 1 file changed, 72 insertions(+), 19 deletions(-)

diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index e33ae71245dd..de37d5933ca9 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -110,15 +110,22 @@ static inline void ufs_qcom_ice_enable(struct ufs_qcom_host *host)
 {
 	if (host->hba->caps & UFSHCD_CAP_CRYPTO)
 		qcom_ice_enable(host->ice);
 }
 
+static const struct blk_crypto_ll_ops ufs_qcom_crypto_ops; /* forward decl */
+
 static int ufs_qcom_ice_init(struct ufs_qcom_host *host)
 {
 	struct ufs_hba *hba = host->hba;
+	struct blk_crypto_profile *profile = &hba->crypto_profile;
 	struct device *dev = hba->dev;
 	struct qcom_ice *ice;
+	union ufs_crypto_capabilities caps;
+	union ufs_crypto_cap_entry cap;
+	int err;
+	int i;
 
 	ice = of_qcom_ice_get(dev);
 	if (ice == ERR_PTR(-EOPNOTSUPP)) {
 		dev_warn(dev, "Disabling inline encryption support\n");
 		ice = NULL;
@@ -126,12 +133,42 @@ static int ufs_qcom_ice_init(struct ufs_qcom_host *host)
 
 	if (IS_ERR_OR_NULL(ice))
 		return PTR_ERR_OR_ZERO(ice);
 
 	host->ice = ice;
-	hba->caps |= UFSHCD_CAP_CRYPTO;
 
+	/* Initialize the blk_crypto_profile */
+
+	caps.reg_val = cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
+
+	/* The number of keyslots supported is (CFGC+1) */
+	err = devm_blk_crypto_profile_init(dev, profile, caps.config_count + 1);
+	if (err)
+		return err;
+
+	profile->ll_ops = ufs_qcom_crypto_ops;
+	profile->max_dun_bytes_supported = 8;
+	profile->dev = dev;
+
+	/*
+	 * Currently this driver only supports AES-256-XTS.  All known versions
+	 * of ICE support it, but to be safe make sure it is really declared in
+	 * the crypto capability registers.  The crypto capability registers
+	 * also give the supported data unit size(s).
+	 */
+	for (i = 0; i < caps.num_crypto_cap; i++) {
+		cap.reg_val = cpu_to_le32(ufshcd_readl(hba,
+						       REG_UFS_CRYPTOCAP +
+						       i * sizeof(__le32)));
+		if (cap.algorithm_id == UFS_CRYPTO_ALG_AES_XTS &&
+		    cap.key_size == UFS_CRYPTO_KEY_SIZE_256)
+			profile->modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] |=
+				cap.sdus_mask * 512;
+	}
+
+	hba->caps |= UFSHCD_CAP_CRYPTO;
+	hba->quirks |= UFSHCD_QUIRK_CUSTOM_CRYPTO_PROFILE;
 	return 0;
 }
 
 static inline int ufs_qcom_ice_resume(struct ufs_qcom_host *host)
 {
@@ -147,36 +184,53 @@ static inline int ufs_qcom_ice_suspend(struct ufs_qcom_host *host)
 		return qcom_ice_suspend(host->ice);
 
 	return 0;
 }
 
-static int ufs_qcom_ice_program_key(struct ufs_hba *hba,
-				    const union ufs_crypto_cfg_entry *cfg,
-				    int slot)
+static int ufs_qcom_ice_keyslot_program(struct blk_crypto_profile *profile,
+					const struct blk_crypto_key *key,
+					unsigned int slot)
 {
+	struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
-	union ufs_crypto_cap_entry cap;
-
-	if (!(cfg->config_enable & UFS_CRYPTO_CONFIGURATION_ENABLE))
-		return qcom_ice_evict_key(host->ice, slot);
+	int err;
 
 	/* Only AES-256-XTS has been tested so far. */
-	cap = hba->crypto_cap_array[cfg->crypto_cap_idx];
-	if (cap.algorithm_id != UFS_CRYPTO_ALG_AES_XTS ||
-	    cap.key_size != UFS_CRYPTO_KEY_SIZE_256)
+	if (key->crypto_cfg.crypto_mode != BLK_ENCRYPTION_MODE_AES_256_XTS)
 		return -EOPNOTSUPP;
 
-	return qcom_ice_program_key(host->ice,
-				    QCOM_ICE_CRYPTO_ALG_AES_XTS,
-				    QCOM_ICE_CRYPTO_KEY_SIZE_256,
-				    cfg->crypto_key,
-				    cfg->data_unit_size, slot);
+	ufshcd_hold(hba);
+	err = qcom_ice_program_key(host->ice,
+				   QCOM_ICE_CRYPTO_ALG_AES_XTS,
+				   QCOM_ICE_CRYPTO_KEY_SIZE_256,
+				   key->raw,
+				   key->crypto_cfg.data_unit_size / 512,
+				   slot);
+	ufshcd_release(hba);
+	return err;
 }
 
-#else
+static int ufs_qcom_ice_keyslot_evict(struct blk_crypto_profile *profile,
+				      const struct blk_crypto_key *key,
+				      unsigned int slot)
+{
+	struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
+	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+	int err;
+
+	ufshcd_hold(hba);
+	err = qcom_ice_evict_key(host->ice, slot);
+	ufshcd_release(hba);
+	return err;
+}
 
-#define ufs_qcom_ice_program_key NULL
+static const struct blk_crypto_ll_ops ufs_qcom_crypto_ops = {
+	.keyslot_program	= ufs_qcom_ice_keyslot_program,
+	.keyslot_evict		= ufs_qcom_ice_keyslot_evict,
+};
+
+#else
 
 static inline void ufs_qcom_ice_enable(struct ufs_qcom_host *host)
 {
 }
 
@@ -1820,11 +1874,10 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
 	.suspend		= ufs_qcom_suspend,
 	.resume			= ufs_qcom_resume,
 	.dbg_register_dump	= ufs_qcom_dump_dbg_regs,
 	.device_reset		= ufs_qcom_device_reset,
 	.config_scaling_param = ufs_qcom_config_scaling_param,
-	.program_key		= ufs_qcom_ice_program_key,
 	.reinit_notify		= ufs_qcom_reinit_notify,
 	.mcq_config_resource	= ufs_qcom_mcq_config_resource,
 	.get_hba_mac		= ufs_qcom_get_hba_mac,
 	.op_runtime_config	= ufs_qcom_op_runtime_config,
 	.get_outstanding_cqs	= ufs_qcom_get_outstanding_cqs,
-- 
2.47.1



* [PATCH v10 04/15] ufs: crypto: remove ufs_hba_variant_ops::program_key
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (2 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 03/15] ufs: qcom: convert to use UFSHCD_QUIRK_CUSTOM_CRYPTO_PROFILE Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 05/15] mmc: sdhci-msm: fix crypto key eviction Eric Biggers
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson

From: Eric Biggers <ebiggers@google.com>

There are no longer any implementations of
ufs_hba_variant_ops::program_key, so remove it.

As a result, ufshcd_program_key() can no longer return an error, so also
clean it up to return void.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/ufs/core/ufshcd-crypto.c | 20 ++++++--------------
 include/ufs/ufshcd.h             |  3 ---
 2 files changed, 6 insertions(+), 17 deletions(-)

diff --git a/drivers/ufs/core/ufshcd-crypto.c b/drivers/ufs/core/ufshcd-crypto.c
index 0cb425ef618e..694ff7578fc1 100644
--- a/drivers/ufs/core/ufshcd-crypto.c
+++ b/drivers/ufs/core/ufshcd-crypto.c
@@ -15,24 +15,18 @@ static const struct ufs_crypto_alg_entry {
 		.ufs_alg = UFS_CRYPTO_ALG_AES_XTS,
 		.ufs_key_size = UFS_CRYPTO_KEY_SIZE_256,
 	},
 };
 
-static int ufshcd_program_key(struct ufs_hba *hba,
-			      const union ufs_crypto_cfg_entry *cfg, int slot)
+static void ufshcd_program_key(struct ufs_hba *hba,
+			       const union ufs_crypto_cfg_entry *cfg, int slot)
 {
 	int i;
 	u32 slot_offset = hba->crypto_cfg_register + slot * sizeof(*cfg);
-	int err = 0;
 
 	ufshcd_hold(hba);
 
-	if (hba->vops && hba->vops->program_key) {
-		err = hba->vops->program_key(hba, cfg, slot);
-		goto out;
-	}
-
 	/* Ensure that CFGE is cleared before programming the key */
 	ufshcd_writel(hba, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
 	for (i = 0; i < 16; i++) {
 		ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[i]),
 			      slot_offset + i * sizeof(cfg->reg_val[0]));
@@ -41,13 +35,11 @@ static int ufshcd_program_key(struct ufs_hba *hba,
 	ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[17]),
 		      slot_offset + 17 * sizeof(cfg->reg_val[0]));
 	/* Dword 16 must be written last */
 	ufshcd_writel(hba, le32_to_cpu(cfg->reg_val[16]),
 		      slot_offset + 16 * sizeof(cfg->reg_val[0]));
-out:
 	ufshcd_release(hba);
-	return err;
 }
 
 static int ufshcd_crypto_keyslot_program(struct blk_crypto_profile *profile,
 					 const struct blk_crypto_key *key,
 					 unsigned int slot)
@@ -58,11 +50,10 @@ static int ufshcd_crypto_keyslot_program(struct blk_crypto_profile *profile,
 			&ufs_crypto_algs[key->crypto_cfg.crypto_mode];
 	u8 data_unit_mask = key->crypto_cfg.data_unit_size / 512;
 	int i;
 	int cap_idx = -1;
 	union ufs_crypto_cfg_entry cfg = {};
-	int err;
 
 	BUILD_BUG_ON(UFS_CRYPTO_KEY_SIZE_INVALID != 0);
 	for (i = 0; i < hba->crypto_capabilities.num_crypto_cap; i++) {
 		if (ccap_array[i].algorithm_id == alg->ufs_alg &&
 		    ccap_array[i].key_size == alg->ufs_key_size &&
@@ -86,14 +77,14 @@ static int ufshcd_crypto_keyslot_program(struct blk_crypto_profile *profile,
 		       key->raw + key->size/2, key->size/2);
 	} else {
 		memcpy(cfg.crypto_key, key->raw, key->size);
 	}
 
-	err = ufshcd_program_key(hba, &cfg, slot);
+	ufshcd_program_key(hba, &cfg, slot);
 
 	memzero_explicit(&cfg, sizeof(cfg));
-	return err;
+	return 0;
 }
 
 static int ufshcd_crypto_keyslot_evict(struct blk_crypto_profile *profile,
 				       const struct blk_crypto_key *key,
 				       unsigned int slot)
@@ -103,11 +94,12 @@ static int ufshcd_crypto_keyslot_evict(struct blk_crypto_profile *profile,
 	 * Clear the crypto cfg on the device. Clearing CFGE
 	 * might not be sufficient, so just clear the entire cfg.
 	 */
 	union ufs_crypto_cfg_entry cfg = {};
 
-	return ufshcd_program_key(hba, &cfg, slot);
+	ufshcd_program_key(hba, &cfg, slot);
+	return 0;
 }
 
 /*
  * Reprogram the keyslots if needed, and return true if CRYPTO_GENERAL_ENABLE
  * should be used in the host controller initialization sequence.
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index 55b81996b6e1..2606561053f4 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -324,11 +324,10 @@ struct ufs_pwr_mode_info {
  * @resume: called during host controller PM callback
  * @dbg_register_dump: used to dump controller debug information
  * @phy_initialization: used to initialize phys
  * @device_reset: called to issue a reset pulse on the UFS device
  * @config_scaling_param: called to configure clock scaling parameters
- * @program_key: program or evict an inline encryption key
  * @fill_crypto_prdt: initialize crypto-related fields in the PRDT
  * @event_notify: called to notify important events
  * @reinit_notify: called to notify reinit of UFSHCD during max gear switch
  * @mcq_config_resource: called to configure MCQ platform resources
  * @get_hba_mac: reports maximum number of outstanding commands supported by
@@ -372,12 +371,10 @@ struct ufs_hba_variant_ops {
 	int	(*phy_initialization)(struct ufs_hba *);
 	int	(*device_reset)(struct ufs_hba *hba);
 	void	(*config_scaling_param)(struct ufs_hba *hba,
 				struct devfreq_dev_profile *profile,
 				struct devfreq_simple_ondemand_data *data);
-	int	(*program_key)(struct ufs_hba *hba,
-			       const union ufs_crypto_cfg_entry *cfg, int slot);
 	int	(*fill_crypto_prdt)(struct ufs_hba *hba,
 				    const struct bio_crypt_ctx *crypt_ctx,
 				    void *prdt, unsigned int num_segments);
 	void	(*event_notify)(struct ufs_hba *hba,
 				enum ufs_event_type evt, void *data);
-- 
2.47.1



* [PATCH v10 05/15] mmc: sdhci-msm: fix crypto key eviction
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (3 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 04/15] ufs: crypto: remove ufs_hba_variant_ops::program_key Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-19 13:48   ` Ulf Hansson
  2024-12-13  4:19 ` [PATCH v10 06/15] mmc: crypto: add mmc_from_crypto_profile() Eric Biggers
                   ` (14 subsequent siblings)
  19 siblings, 1 reply; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, stable, Abel Vesa

From: Eric Biggers <ebiggers@google.com>

Commit c7eed31e235c ("mmc: sdhci-msm: Switch to the new ICE API")
introduced an incorrect check of the algorithm ID into the key eviction
path, and thus qcom_ice_evict_key() is no longer ever called.  Fix it.

Fixes: c7eed31e235c ("mmc: sdhci-msm: Switch to the new ICE API")
Cc: stable@vger.kernel.org
Cc: Abel Vesa <abel.vesa@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/mmc/host/sdhci-msm.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index e00208535bd1..319f0ebbe652 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -1865,24 +1865,24 @@ static int sdhci_msm_program_key(struct cqhci_host *cq_host,
 	struct sdhci_host *host = mmc_priv(cq_host->mmc);
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
 	union cqhci_crypto_cap_entry cap;
 
+	if (!(cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE))
+		return qcom_ice_evict_key(msm_host->ice, slot);
+
 	/* Only AES-256-XTS has been tested so far. */
 	cap = cq_host->crypto_cap_array[cfg->crypto_cap_idx];
 	if (cap.algorithm_id != CQHCI_CRYPTO_ALG_AES_XTS ||
 		cap.key_size != CQHCI_CRYPTO_KEY_SIZE_256)
 		return -EINVAL;
 
-	if (cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE)
-		return qcom_ice_program_key(msm_host->ice,
-					    QCOM_ICE_CRYPTO_ALG_AES_XTS,
-					    QCOM_ICE_CRYPTO_KEY_SIZE_256,
-					    cfg->crypto_key,
-					    cfg->data_unit_size, slot);
-	else
-		return qcom_ice_evict_key(msm_host->ice, slot);
+	return qcom_ice_program_key(msm_host->ice,
+				    QCOM_ICE_CRYPTO_ALG_AES_XTS,
+				    QCOM_ICE_CRYPTO_KEY_SIZE_256,
+				    cfg->crypto_key,
+				    cfg->data_unit_size, slot);
 }
 
 #else /* CONFIG_MMC_CRYPTO */
 
 static inline int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
-- 
2.47.1



* [PATCH v10 06/15] mmc: crypto: add mmc_from_crypto_profile()
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (4 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 05/15] mmc: sdhci-msm: fix crypto key eviction Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-19 13:48   ` Ulf Hansson
  2024-12-13  4:19 ` [PATCH v10 07/15] mmc: sdhci-msm: convert to use custom crypto profile Eric Biggers
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson

From: Eric Biggers <ebiggers@google.com>

Add a helper function that encapsulates a container_of expression.  For
now there is just one user but soon there will be more.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/mmc/host/cqhci-crypto.c | 5 +----
 include/linux/mmc/host.h        | 8 ++++++++
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/mmc/host/cqhci-crypto.c b/drivers/mmc/host/cqhci-crypto.c
index d5f4b6972f63..2951911d3f78 100644
--- a/drivers/mmc/host/cqhci-crypto.c
+++ b/drivers/mmc/host/cqhci-crypto.c
@@ -23,14 +23,11 @@ static const struct cqhci_crypto_alg_entry {
 };
 
 static inline struct cqhci_host *
 cqhci_host_from_crypto_profile(struct blk_crypto_profile *profile)
 {
-	struct mmc_host *mmc =
-		container_of(profile, struct mmc_host, crypto_profile);
-
-	return mmc->cqe_private;
+	return mmc_from_crypto_profile(profile)->cqe_private;
 }
 
 static int cqhci_crypto_program_key(struct cqhci_host *cq_host,
 				    const union cqhci_crypto_cfg_entry *cfg,
 				    int slot)
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index f166d6611ddb..68f09a955a90 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -588,10 +588,18 @@ static inline void *mmc_priv(struct mmc_host *host)
 static inline struct mmc_host *mmc_from_priv(void *priv)
 {
 	return container_of(priv, struct mmc_host, private);
 }
 
+#ifdef CONFIG_MMC_CRYPTO
+static inline struct mmc_host *
+mmc_from_crypto_profile(struct blk_crypto_profile *profile)
+{
+	return container_of(profile, struct mmc_host, crypto_profile);
+}
+#endif
+
 #define mmc_host_is_spi(host)	((host)->caps & MMC_CAP_SPI)
 
 #define mmc_dev(x)	((x)->parent)
 #define mmc_classdev(x)	(&(x)->class_dev)
 #define mmc_hostname(x)	(dev_name(&(x)->class_dev))
-- 
2.47.1



* [PATCH v10 07/15] mmc: sdhci-msm: convert to use custom crypto profile
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (5 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 06/15] mmc: crypto: add mmc_from_crypto_profile() Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-19 13:48   ` Ulf Hansson
  2024-12-13  4:19 ` [PATCH v10 08/15] firmware: qcom: scm: add calls for wrapped key support Eric Biggers
                   ` (12 subsequent siblings)
  19 siblings, 1 reply; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson

From: Eric Biggers <ebiggers@google.com>

As is being done in ufs-qcom, make the sdhci-msm driver override the
full crypto profile rather than "just" key programming and eviction.
This makes it much more straightforward to add support for
hardware-wrapped inline encryption keys.  It also makes it easy to pass
the original blk_crypto_key down to qcom_ice_program_key() once it is
updated to require the key in that form.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/mmc/host/cqhci-crypto.c | 33 ++++++------
 drivers/mmc/host/cqhci.h        |  8 ++-
 drivers/mmc/host/sdhci-msm.c    | 94 ++++++++++++++++++++++++++-------
 3 files changed, 94 insertions(+), 41 deletions(-)

diff --git a/drivers/mmc/host/cqhci-crypto.c b/drivers/mmc/host/cqhci-crypto.c
index 2951911d3f78..cb8044093402 100644
--- a/drivers/mmc/host/cqhci-crypto.c
+++ b/drivers/mmc/host/cqhci-crypto.c
@@ -26,20 +26,17 @@ static inline struct cqhci_host *
 cqhci_host_from_crypto_profile(struct blk_crypto_profile *profile)
 {
 	return mmc_from_crypto_profile(profile)->cqe_private;
 }
 
-static int cqhci_crypto_program_key(struct cqhci_host *cq_host,
-				    const union cqhci_crypto_cfg_entry *cfg,
-				    int slot)
+static void cqhci_crypto_program_key(struct cqhci_host *cq_host,
+				     const union cqhci_crypto_cfg_entry *cfg,
+				     int slot)
 {
 	u32 slot_offset = cq_host->crypto_cfg_register + slot * sizeof(*cfg);
 	int i;
 
-	if (cq_host->ops->program_key)
-		return cq_host->ops->program_key(cq_host, cfg, slot);
-
 	/* Clear CFGE */
 	cqhci_writel(cq_host, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
 
 	/* Write the key */
 	for (i = 0; i < 16; i++) {
@@ -50,11 +47,10 @@ static int cqhci_crypto_program_key(struct cqhci_host *cq_host,
 	cqhci_writel(cq_host, le32_to_cpu(cfg->reg_val[17]),
 		     slot_offset + 17 * sizeof(cfg->reg_val[0]));
 	/* Write dword 16, which includes the new value of CFGE */
 	cqhci_writel(cq_host, le32_to_cpu(cfg->reg_val[16]),
 		     slot_offset + 16 * sizeof(cfg->reg_val[0]));
-	return 0;
 }
 
 static int cqhci_crypto_keyslot_program(struct blk_crypto_profile *profile,
 					const struct blk_crypto_key *key,
 					unsigned int slot)
@@ -67,11 +63,10 @@ static int cqhci_crypto_keyslot_program(struct blk_crypto_profile *profile,
 			&cqhci_crypto_algs[key->crypto_cfg.crypto_mode];
 	u8 data_unit_mask = key->crypto_cfg.data_unit_size / 512;
 	int i;
 	int cap_idx = -1;
 	union cqhci_crypto_cfg_entry cfg = {};
-	int err;
 
 	BUILD_BUG_ON(CQHCI_CRYPTO_KEY_SIZE_INVALID != 0);
 	for (i = 0; i < cq_host->crypto_capabilities.num_crypto_cap; i++) {
 		if (ccap_array[i].algorithm_id == alg->alg &&
 		    ccap_array[i].key_size == alg->key_size &&
@@ -94,25 +89,26 @@ static int cqhci_crypto_keyslot_program(struct blk_crypto_profile *profile,
 		       key->raw + key->size/2, key->size/2);
 	} else {
 		memcpy(cfg.crypto_key, key->raw, key->size);
 	}
 
-	err = cqhci_crypto_program_key(cq_host, &cfg, slot);
+	cqhci_crypto_program_key(cq_host, &cfg, slot);
 
 	memzero_explicit(&cfg, sizeof(cfg));
-	return err;
+	return 0;
 }
 
 static int cqhci_crypto_clear_keyslot(struct cqhci_host *cq_host, int slot)
 {
 	/*
 	 * Clear the crypto cfg on the device. Clearing CFGE
 	 * might not be sufficient, so just clear the entire cfg.
 	 */
 	union cqhci_crypto_cfg_entry cfg = {};
 
-	return cqhci_crypto_program_key(cq_host, &cfg, slot);
+	cqhci_crypto_program_key(cq_host, &cfg, slot);
+	return 0;
 }
 
 static int cqhci_crypto_keyslot_evict(struct blk_crypto_profile *profile,
 				      const struct blk_crypto_key *key,
 				      unsigned int slot)
@@ -165,20 +161,22 @@ cqhci_find_blk_crypto_mode(union cqhci_crypto_cap_entry cap)
 int cqhci_crypto_init(struct cqhci_host *cq_host)
 {
 	struct mmc_host *mmc = cq_host->mmc;
 	struct device *dev = mmc_dev(mmc);
 	struct blk_crypto_profile *profile = &mmc->crypto_profile;
-	unsigned int num_keyslots;
 	unsigned int cap_idx;
 	enum blk_crypto_mode_num blk_mode_num;
 	unsigned int slot;
 	int err = 0;
 
 	if (!(mmc->caps2 & MMC_CAP2_CRYPTO) ||
 	    !(cqhci_readl(cq_host, CQHCI_CAP) & CQHCI_CAP_CS))
 		goto out;
 
+	if (cq_host->ops->uses_custom_crypto_profile)
+		goto profile_initialized;
+
 	cq_host->crypto_capabilities.reg_val =
 			cpu_to_le32(cqhci_readl(cq_host, CQHCI_CCAP));
 
 	cq_host->crypto_cfg_register =
 		(u32)cq_host->crypto_capabilities.config_array_ptr * 0x100;
@@ -193,13 +191,12 @@ int cqhci_crypto_init(struct cqhci_host *cq_host)
 
 	/*
 	 * CCAP.CFGC is off by one, so the actual number of crypto
 	 * configurations (a.k.a. keyslots) is CCAP.CFGC + 1.
 	 */
-	num_keyslots = cq_host->crypto_capabilities.config_count + 1;
-
-	err = devm_blk_crypto_profile_init(dev, profile, num_keyslots);
+	err = devm_blk_crypto_profile_init(
+		dev, profile, cq_host->crypto_capabilities.config_count + 1);
 	if (err)
 		goto out;
 
 	profile->ll_ops = cqhci_crypto_ops;
 	profile->dev = dev;
@@ -223,13 +220,15 @@ int cqhci_crypto_init(struct cqhci_host *cq_host)
 			continue;
 		profile->modes_supported[blk_mode_num] |=
 			cq_host->crypto_cap_array[cap_idx].sdus_mask * 512;
 	}
 
+profile_initialized:
+
 	/* Clear all the keyslots so that we start in a known state. */
-	for (slot = 0; slot < num_keyslots; slot++)
-		cqhci_crypto_clear_keyslot(cq_host, slot);
+	for (slot = 0; slot < profile->num_slots; slot++)
+		profile->ll_ops.keyslot_evict(profile, NULL, slot);
 
 	/* CQHCI crypto requires the use of 128-bit task descriptors. */
 	cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
 
 	return 0;
diff --git a/drivers/mmc/host/cqhci.h b/drivers/mmc/host/cqhci.h
index fab9d74445ba..ce189a1866b9 100644
--- a/drivers/mmc/host/cqhci.h
+++ b/drivers/mmc/host/cqhci.h
@@ -287,17 +287,15 @@ struct cqhci_host_ops {
 	void (*disable)(struct mmc_host *mmc, bool recovery);
 	void (*update_dcmd_desc)(struct mmc_host *mmc, struct mmc_request *mrq,
 				 u64 *data);
 	void (*pre_enable)(struct mmc_host *mmc);
 	void (*post_disable)(struct mmc_host *mmc);
-#ifdef CONFIG_MMC_CRYPTO
-	int (*program_key)(struct cqhci_host *cq_host,
-			   const union cqhci_crypto_cfg_entry *cfg, int slot);
-#endif
 	void (*set_tran_desc)(struct cqhci_host *cq_host, u8 **desc,
 			      dma_addr_t addr, int len, bool end, bool dma64);
-
+#ifdef CONFIG_MMC_CRYPTO
+	bool uses_custom_crypto_profile;
+#endif
 };
 
 static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)
 {
 	if (unlikely(host->ops->write_l))
diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index 319f0ebbe652..4610f067faca 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -1805,16 +1805,23 @@ static void sdhci_msm_set_clock(struct sdhci_host *host, unsigned int clock)
  *                                                                           *
 \*****************************************************************************/
 
 #ifdef CONFIG_MMC_CRYPTO
 
+static const struct blk_crypto_ll_ops sdhci_msm_crypto_ops; /* forward decl */
+
 static int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
 			      struct cqhci_host *cq_host)
 {
 	struct mmc_host *mmc = msm_host->mmc;
+	struct blk_crypto_profile *profile = &mmc->crypto_profile;
 	struct device *dev = mmc_dev(mmc);
 	struct qcom_ice *ice;
+	union cqhci_crypto_capabilities caps;
+	union cqhci_crypto_cap_entry cap;
+	int err;
+	int i;
 
 	if (!(cqhci_readl(cq_host, CQHCI_CAP) & CQHCI_CAP_CS))
 		return 0;
 
 	ice = of_qcom_ice_get(dev);
@@ -1825,12 +1832,41 @@ static int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
 
 	if (IS_ERR_OR_NULL(ice))
 		return PTR_ERR_OR_ZERO(ice);
 
 	msm_host->ice = ice;
-	mmc->caps2 |= MMC_CAP2_CRYPTO;
 
+	/* Initialize the blk_crypto_profile */
+
+	caps.reg_val = cpu_to_le32(cqhci_readl(cq_host, CQHCI_CCAP));
+
+	/* The number of keyslots supported is (CFGC+1) */
+	err = devm_blk_crypto_profile_init(dev, profile, caps.config_count + 1);
+	if (err)
+		return err;
+
+	profile->ll_ops = sdhci_msm_crypto_ops;
+	profile->max_dun_bytes_supported = 4;
+	profile->dev = dev;
+
+	/*
+	 * Currently this driver only supports AES-256-XTS.  All known versions
+	 * of ICE support it, but to be safe make sure it is really declared in
+	 * the crypto capability registers.  The crypto capability registers
+	 * also give the supported data unit size(s).
+	 */
+	for (i = 0; i < caps.num_crypto_cap; i++) {
+		cap.reg_val = cpu_to_le32(cqhci_readl(cq_host,
+						      CQHCI_CRYPTOCAP +
+						      i * sizeof(__le32)));
+		if (cap.algorithm_id == CQHCI_CRYPTO_ALG_AES_XTS &&
+		    cap.key_size == CQHCI_CRYPTO_KEY_SIZE_256)
+			profile->modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] |=
+				cap.sdus_mask * 512;
+	}
+
+	mmc->caps2 |= MMC_CAP2_CRYPTO;
 	return 0;
 }
 
 static void sdhci_msm_ice_enable(struct sdhci_msm_host *msm_host)
 {
@@ -1852,39 +1888,59 @@ static __maybe_unused int sdhci_msm_ice_suspend(struct sdhci_msm_host *msm_host)
 		return qcom_ice_suspend(msm_host->ice);
 
 	return 0;
 }
 
-/*
- * Program a key into a QC ICE keyslot, or evict a keyslot.  QC ICE requires
- * vendor-specific SCM calls for this; it doesn't support the standard way.
- */
-static int sdhci_msm_program_key(struct cqhci_host *cq_host,
-				 const union cqhci_crypto_cfg_entry *cfg,
-				 int slot)
+static inline struct sdhci_msm_host *
+sdhci_msm_host_from_crypto_profile(struct blk_crypto_profile *profile)
 {
-	struct sdhci_host *host = mmc_priv(cq_host->mmc);
+	struct mmc_host *mmc = mmc_from_crypto_profile(profile);
+	struct sdhci_host *host = mmc_priv(mmc);
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
-	union cqhci_crypto_cap_entry cap;
 
-	if (!(cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE))
-		return qcom_ice_evict_key(msm_host->ice, slot);
+	return msm_host;
+}
+
+/*
+ * Program a key into a QC ICE keyslot.  QC ICE requires a QC-specific SCM call
+ * for this; it doesn't support the standard way.
+ */
+static int sdhci_msm_ice_keyslot_program(struct blk_crypto_profile *profile,
+					 const struct blk_crypto_key *key,
+					 unsigned int slot)
+{
+	struct sdhci_msm_host *msm_host =
+		sdhci_msm_host_from_crypto_profile(profile);
 
 	/* Only AES-256-XTS has been tested so far. */
-	cap = cq_host->crypto_cap_array[cfg->crypto_cap_idx];
-	if (cap.algorithm_id != CQHCI_CRYPTO_ALG_AES_XTS ||
-		cap.key_size != CQHCI_CRYPTO_KEY_SIZE_256)
-		return -EINVAL;
+	if (key->crypto_cfg.crypto_mode != BLK_ENCRYPTION_MODE_AES_256_XTS)
+		return -EOPNOTSUPP;
 
 	return qcom_ice_program_key(msm_host->ice,
 				    QCOM_ICE_CRYPTO_ALG_AES_XTS,
 				    QCOM_ICE_CRYPTO_KEY_SIZE_256,
-				    cfg->crypto_key,
-				    cfg->data_unit_size, slot);
+				    key->raw,
+				    key->crypto_cfg.data_unit_size / 512,
+				    slot);
 }
 
+static int sdhci_msm_ice_keyslot_evict(struct blk_crypto_profile *profile,
+				       const struct blk_crypto_key *key,
+				       unsigned int slot)
+{
+	struct sdhci_msm_host *msm_host =
+		sdhci_msm_host_from_crypto_profile(profile);
+
+	return qcom_ice_evict_key(msm_host->ice, slot);
+}
+
+static const struct blk_crypto_ll_ops sdhci_msm_crypto_ops = {
+	.keyslot_program	= sdhci_msm_ice_keyslot_program,
+	.keyslot_evict		= sdhci_msm_ice_keyslot_evict,
+};
+
 #else /* CONFIG_MMC_CRYPTO */
 
 static inline int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
 				     struct cqhci_host *cq_host)
 {
@@ -1986,11 +2042,11 @@ static void sdhci_msm_set_timeout(struct sdhci_host *host, struct mmc_command *c
 
 static const struct cqhci_host_ops sdhci_msm_cqhci_ops = {
 	.enable		= sdhci_msm_cqe_enable,
 	.disable	= sdhci_msm_cqe_disable,
 #ifdef CONFIG_MMC_CRYPTO
-	.program_key	= sdhci_msm_program_key,
+	.uses_custom_crypto_profile = true,
 #endif
 };
 
 static int sdhci_msm_cqe_add_host(struct sdhci_host *host,
 				struct platform_device *pdev)
-- 
2.47.1



* [PATCH v10 08/15] firmware: qcom: scm: add calls for wrapped key support
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (6 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 07/15] mmc: sdhci-msm: convert to use custom crypto profile Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 09/15] soc: qcom: ice: make qcom_ice_program_key() take struct blk_crypto_key Eric Biggers
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, Bartosz Golaszewski

From: Gaurav Kashyap <quic_gaurkash@quicinc.com>

Add helper functions for the SCM calls required to support
hardware-wrapped inline storage encryption keys.  These SCM calls manage
wrapped keys via Qualcomm's Hardware Key Manager (HWKM), which can only
be accessed from TrustZone.

QCOM_SCM_ES_GENERATE_ICE_KEY and QCOM_SCM_ES_IMPORT_ICE_KEY create a new
long-term wrapped key, with the former making the hardware generate the
key and the latter importing a raw key.  QCOM_SCM_ES_PREPARE_ICE_KEY
converts the key to ephemerally-wrapped form so that it can be used for
inline storage encryption.  These are planned to be wired up to new
ioctls via the blk-crypto framework; see the proposed documentation for
the hardware-wrapped keys feature for more information.

Similarly there's also QCOM_SCM_ES_DERIVE_SW_SECRET which derives a
"software secret" from an ephemerally-wrapped key and will be wired up
to the corresponding operation in the blk_crypto_profile.

These will all be used by the ICE driver in drivers/soc/qcom/ice.c.
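
As a rough illustration (not part of this patch), a driver could chain these
helpers as sketched below.  The function name here is hypothetical; the real
hooks are added to the ICE driver later in this series:

    #include <linux/firmware/qcom/qcom_scm.h>

    /*
     * Hypothetical example: generate a long-term wrapped key in hardware,
     * then re-wrap it with the per-boot ephemeral key so it can be used for
     * inline encryption during this boot.
     */
    static int example_generate_and_prepare_key(u8 *lt_key, size_t lt_key_size,
                                                u8 *eph_key, size_t eph_key_size)
    {
            int err;

            err = qcom_scm_generate_ice_key(lt_key, lt_key_size);
            if (err)
                    return err;

            return qcom_scm_prepare_ice_key(lt_key, lt_key_size,
                                            eph_key, eph_key_size);
    }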

Signed-off-by: Gaurav Kashyap <quic_gaurkash@quicinc.com>
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
[EB: merged related patches, fixed error handling, fixed naming, fixed
     docs for size parameters, fixed qcom_scm_has_wrapped_key_support(),
     improved comments, improved commit message.]
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/firmware/qcom/qcom_scm.c       | 214 +++++++++++++++++++++++++
 drivers/firmware/qcom/qcom_scm.h       |   4 +
 include/linux/firmware/qcom/qcom_scm.h |   8 +
 3 files changed, 226 insertions(+)

diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
index 72bf87ddcd96..36f3ddcb9020 100644
--- a/drivers/firmware/qcom/qcom_scm.c
+++ b/drivers/firmware/qcom/qcom_scm.c
@@ -1277,10 +1277,224 @@ int qcom_scm_ice_set_key(u32 index, const u8 *key, u32 key_size,
 
 	return ret;
 }
 EXPORT_SYMBOL_GPL(qcom_scm_ice_set_key);
 
+bool qcom_scm_has_wrapped_key_support(void)
+{
+	return __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_ES,
+					    QCOM_SCM_ES_DERIVE_SW_SECRET) &&
+	       __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_ES,
+					    QCOM_SCM_ES_GENERATE_ICE_KEY) &&
+	       __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_ES,
+					    QCOM_SCM_ES_PREPARE_ICE_KEY) &&
+	       __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_ES,
+					    QCOM_SCM_ES_IMPORT_ICE_KEY);
+}
+EXPORT_SYMBOL_GPL(qcom_scm_has_wrapped_key_support);
+
+/**
+ * qcom_scm_derive_sw_secret() - Derive software secret from wrapped key
+ * @eph_key: an ephemerally-wrapped key
+ * @eph_key_size: size of @eph_key in bytes
+ * @sw_secret: output buffer for the software secret
+ * @sw_secret_size: size of the software secret to derive in bytes
+ *
+ * Derive a software secret from an ephemerally-wrapped key for software crypto
+ * operations.  This is done by calling into the secure execution environment,
+ * which then calls into the hardware to unwrap and derive the secret.
+ *
+ * For more information on sw_secret, see the "Hardware-wrapped keys" section of
+ * Documentation/block/inline-encryption.rst.
+ *
+ * Return: 0 on success; -errno on failure.
+ */
+int qcom_scm_derive_sw_secret(const u8 *eph_key, size_t eph_key_size,
+			      u8 *sw_secret, size_t sw_secret_size)
+{
+	struct qcom_scm_desc desc = {
+		.svc = QCOM_SCM_SVC_ES,
+		.cmd = QCOM_SCM_ES_DERIVE_SW_SECRET,
+		.arginfo = QCOM_SCM_ARGS(4, QCOM_SCM_RW, QCOM_SCM_VAL,
+					 QCOM_SCM_RW, QCOM_SCM_VAL),
+		.owner = ARM_SMCCC_OWNER_SIP,
+	};
+	int ret;
+
+	void *eph_key_buf __free(qcom_tzmem) = qcom_tzmem_alloc(__scm->mempool,
+								eph_key_size,
+								GFP_KERNEL);
+	if (!eph_key_buf)
+		return -ENOMEM;
+
+	void *sw_secret_buf __free(qcom_tzmem) = qcom_tzmem_alloc(__scm->mempool,
+								  sw_secret_size,
+								  GFP_KERNEL);
+	if (!sw_secret_buf)
+		return -ENOMEM;
+
+	memcpy(eph_key_buf, eph_key, eph_key_size);
+	desc.args[0] = qcom_tzmem_to_phys(eph_key_buf);
+	desc.args[1] = eph_key_size;
+	desc.args[2] = qcom_tzmem_to_phys(sw_secret_buf);
+	desc.args[3] = sw_secret_size;
+
+	ret = qcom_scm_call(__scm->dev, &desc, NULL);
+	if (!ret)
+		memcpy(sw_secret, sw_secret_buf, sw_secret_size);
+
+	memzero_explicit(eph_key_buf, eph_key_size);
+	memzero_explicit(sw_secret_buf, sw_secret_size);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(qcom_scm_derive_sw_secret);
+
+/**
+ * qcom_scm_generate_ice_key() - Generate a wrapped key for storage encryption
+ * @lt_key: output buffer for the long-term wrapped key
+ * @lt_key_size: size of @lt_key in bytes.  Must be the exact wrapped key size
+ *		 used by the SoC.
+ *
+ * Generate a key using the built-in HW module in the SoC.  The resulting key is
+ * returned wrapped with the platform-specific Key Encryption Key.
+ *
+ * Return: 0 on success; -errno on failure.
+ */
+int qcom_scm_generate_ice_key(u8 *lt_key, size_t lt_key_size)
+{
+	struct qcom_scm_desc desc = {
+		.svc = QCOM_SCM_SVC_ES,
+		.cmd =  QCOM_SCM_ES_GENERATE_ICE_KEY,
+		.arginfo = QCOM_SCM_ARGS(2, QCOM_SCM_RW, QCOM_SCM_VAL),
+		.owner = ARM_SMCCC_OWNER_SIP,
+	};
+	int ret;
+
+	void *lt_key_buf __free(qcom_tzmem) = qcom_tzmem_alloc(__scm->mempool,
+							       lt_key_size,
+							       GFP_KERNEL);
+	if (!lt_key_buf)
+		return -ENOMEM;
+
+	desc.args[0] = qcom_tzmem_to_phys(lt_key_buf);
+	desc.args[1] = lt_key_size;
+
+	ret = qcom_scm_call(__scm->dev, &desc, NULL);
+	if (!ret)
+		memcpy(lt_key, lt_key_buf, lt_key_size);
+
+	memzero_explicit(lt_key_buf, lt_key_size);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(qcom_scm_generate_ice_key);
+
+/**
+ * qcom_scm_prepare_ice_key() - Re-wrap a key with the per-boot ephemeral key
+ * @lt_key: a long-term wrapped key
+ * @lt_key_size: size of @lt_key in bytes
+ * @eph_key: output buffer for the ephemerally-wrapped key
+ * @eph_key_size: size of @eph_key in bytes.  Must be the exact wrapped key size
+ *		  used by the SoC.
+ *
+ * Given a long-term wrapped key, re-wrap it with the per-boot ephemeral key for
+ * added protection.  The resulting key will only be valid for the current boot.
+ *
+ * Return: 0 on success; -errno on failure.
+ */
+int qcom_scm_prepare_ice_key(const u8 *lt_key, size_t lt_key_size,
+			     u8 *eph_key, size_t eph_key_size)
+{
+	struct qcom_scm_desc desc = {
+		.svc = QCOM_SCM_SVC_ES,
+		.cmd =  QCOM_SCM_ES_PREPARE_ICE_KEY,
+		.arginfo = QCOM_SCM_ARGS(4, QCOM_SCM_RO, QCOM_SCM_VAL,
+					 QCOM_SCM_RW, QCOM_SCM_VAL),
+		.owner = ARM_SMCCC_OWNER_SIP,
+	};
+	int ret;
+
+	void *lt_key_buf __free(qcom_tzmem) = qcom_tzmem_alloc(__scm->mempool,
+							       lt_key_size,
+							       GFP_KERNEL);
+	if (!lt_key_buf)
+		return -ENOMEM;
+
+	void *eph_key_buf __free(qcom_tzmem) = qcom_tzmem_alloc(__scm->mempool,
+								eph_key_size,
+								GFP_KERNEL);
+	if (!eph_key_buf)
+		return -ENOMEM;
+
+	memcpy(lt_key_buf, lt_key, lt_key_size);
+	desc.args[0] = qcom_tzmem_to_phys(lt_key_buf);
+	desc.args[1] = lt_key_size;
+	desc.args[2] = qcom_tzmem_to_phys(eph_key_buf);
+	desc.args[3] = eph_key_size;
+
+	ret = qcom_scm_call(__scm->dev, &desc, NULL);
+	if (!ret)
+		memcpy(eph_key, eph_key_buf, eph_key_size);
+
+	memzero_explicit(lt_key_buf, lt_key_size);
+	memzero_explicit(eph_key_buf, eph_key_size);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(qcom_scm_prepare_ice_key);
+
+/**
+ * qcom_scm_import_ice_key() - Import key for storage encryption
+ * @raw_key: the raw key to import
+ * @raw_key_size: size of @raw_key in bytes
+ * @lt_key: output buffer for the long-term wrapped key
+ * @lt_key_size: size of @lt_key in bytes.  Must be the exact wrapped key size
+ *		 used by the SoC.
+ *
+ * Import a raw key and return a long-term wrapped key.  Uses the SoC's HWKM to
+ * wrap the raw key using the platform-specific Key Encryption Key.
+ *
+ * Return: 0 on success; -errno on failure.
+ */
+int qcom_scm_import_ice_key(const u8 *raw_key, size_t raw_key_size,
+			    u8 *lt_key, size_t lt_key_size)
+{
+	struct qcom_scm_desc desc = {
+		.svc = QCOM_SCM_SVC_ES,
+		.cmd =  QCOM_SCM_ES_IMPORT_ICE_KEY,
+		.arginfo = QCOM_SCM_ARGS(4, QCOM_SCM_RO, QCOM_SCM_VAL,
+					 QCOM_SCM_RW, QCOM_SCM_VAL),
+		.owner = ARM_SMCCC_OWNER_SIP,
+	};
+	int ret;
+
+	void *raw_key_buf __free(qcom_tzmem) = qcom_tzmem_alloc(__scm->mempool,
+								raw_key_size,
+								GFP_KERNEL);
+	if (!raw_key_buf)
+		return -ENOMEM;
+
+	void *lt_key_buf __free(qcom_tzmem) = qcom_tzmem_alloc(__scm->mempool,
+							       lt_key_size,
+							       GFP_KERNEL);
+	if (!lt_key_buf)
+		return -ENOMEM;
+
+	memcpy(raw_key_buf, raw_key, raw_key_size);
+	desc.args[0] = qcom_tzmem_to_phys(raw_key_buf);
+	desc.args[1] = raw_key_size;
+	desc.args[2] = qcom_tzmem_to_phys(lt_key_buf);
+	desc.args[3] = lt_key_size;
+
+	ret = qcom_scm_call(__scm->dev, &desc, NULL);
+	if (!ret)
+		memcpy(lt_key, lt_key_buf, lt_key_size);
+
+	memzero_explicit(raw_key_buf, raw_key_size);
+	memzero_explicit(lt_key_buf, lt_key_size);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(qcom_scm_import_ice_key);
+
 /**
  * qcom_scm_hdcp_available() - Check if secure environment supports HDCP.
  *
  * Return true if HDCP is supported, false if not.
  */
diff --git a/drivers/firmware/qcom/qcom_scm.h b/drivers/firmware/qcom/qcom_scm.h
index e36b2f67607f..097369d38b84 100644
--- a/drivers/firmware/qcom/qcom_scm.h
+++ b/drivers/firmware/qcom/qcom_scm.h
@@ -126,10 +126,14 @@ struct qcom_tzmem_pool *qcom_scm_get_tzmem_pool(void);
 #define QCOM_SCM_OCMEM_UNLOCK_CMD	0x02
 
 #define QCOM_SCM_SVC_ES			0x10	/* Enterprise Security */
 #define QCOM_SCM_ES_INVALIDATE_ICE_KEY	0x03
 #define QCOM_SCM_ES_CONFIG_SET_ICE_KEY	0x04
+#define QCOM_SCM_ES_DERIVE_SW_SECRET	0x07
+#define QCOM_SCM_ES_GENERATE_ICE_KEY	0x08
+#define QCOM_SCM_ES_PREPARE_ICE_KEY	0x09
+#define QCOM_SCM_ES_IMPORT_ICE_KEY	0x0a
 
 #define QCOM_SCM_SVC_HDCP		0x11
 #define QCOM_SCM_HDCP_INVOKE		0x01
 
 #define QCOM_SCM_SVC_LMH			0x13
diff --git a/include/linux/firmware/qcom/qcom_scm.h b/include/linux/firmware/qcom/qcom_scm.h
index 4621aec0328c..983e1591bbba 100644
--- a/include/linux/firmware/qcom/qcom_scm.h
+++ b/include/linux/firmware/qcom/qcom_scm.h
@@ -103,10 +103,18 @@ int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, u32 offset, u32 size);
 
 bool qcom_scm_ice_available(void);
 int qcom_scm_ice_invalidate_key(u32 index);
 int qcom_scm_ice_set_key(u32 index, const u8 *key, u32 key_size,
 			 enum qcom_scm_ice_cipher cipher, u32 data_unit_size);
+bool qcom_scm_has_wrapped_key_support(void);
+int qcom_scm_derive_sw_secret(const u8 *eph_key, size_t eph_key_size,
+			      u8 *sw_secret, size_t sw_secret_size);
+int qcom_scm_generate_ice_key(u8 *lt_key, size_t lt_key_size);
+int qcom_scm_prepare_ice_key(const u8 *lt_key, size_t lt_key_size,
+			     u8 *eph_key, size_t eph_key_size);
+int qcom_scm_import_ice_key(const u8 *raw_key, size_t raw_key_size,
+			    u8 *lt_key, size_t lt_key_size);
 
 bool qcom_scm_hdcp_available(void);
 int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp);
 
 int qcom_scm_iommu_set_pt_format(u32 sec_id, u32 ctx_num, u32 pt_fmt);
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread
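
The four SCM helpers added above (generate, import, prepare, derive) form a small lifecycle: a
long-term wrapped key is created once (generated on-chip or imported from a raw key), rewrapped
with the per-boot ephemeral key whenever it is unlocked, and the ephemeral form is then used both
for programming keyslots and for deriving the software secret.  A minimal sketch of that sequence
follows; the buffer sizes are illustrative placeholders only, since the real long-term and
ephemeral wrapped-key sizes are SoC-specific.

    #include <linux/firmware/qcom/qcom_scm.h>

    static int example_wrapped_key_flow(u8 *sw_secret, size_t sw_secret_size)
    {
        /* Placeholder sizes; the SoC defines the real wrapped-key sizes. */
        u8 lt_key[128], eph_key[128];
        int err;

        /* Step 1 (once per key): create the long-term wrapped key ... */
        err = qcom_scm_generate_ice_key(lt_key, sizeof(lt_key));
        /* ... or wrap an existing raw key with qcom_scm_import_ice_key(). */
        if (err)
            return err;

        /* Step 2 (each unlock): rewrap with the per-boot ephemeral key. */
        err = qcom_scm_prepare_ice_key(lt_key, sizeof(lt_key),
                                       eph_key, sizeof(eph_key));
        if (err)
            return err;

        /* Step 3: derive the software secret from the ephemeral form. */
        return qcom_scm_derive_sw_secret(eph_key, sizeof(eph_key),
                                         sw_secret, sw_secret_size);
    }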

* [PATCH v10 09/15] soc: qcom: ice: make qcom_ice_program_key() take struct blk_crypto_key
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (7 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 08/15] firmware: qcom: scm: add calls for wrapped key support Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 10/15] blk-crypto: add basic hardware-wrapped key support Eric Biggers
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, Bartosz Golaszewski

From: Eric Biggers <ebiggers@google.com>

qcom_ice_program_key() currently accepts the key as an array of bytes,
algorithm ID, key size enum, and data unit size.  However both callers
have a struct blk_crypto_key which contains all that information.  Thus
they both have similar code that converts the blk_crypto_key into the
form that qcom_ice_program_key() wants.  Once wrapped key support is
added, the key type would need to be added to the arguments too.

Therefore, this patch changes qcom_ice_program_key() to take in all this
information as a struct blk_crypto_key directly.  The calling code is
updated accordingly.  This ends up being much simpler, and it means the
key type is passed down automatically once wrapped key support is added.

Based on a patch by Gaurav Kashyap <quic_gaurkash@quicinc.com> that
replaced the byte array argument only.  This patch makes the
blk_crypto_key replace other arguments like the algorithm ID too,
ensuring that there remains only one source of truth.

Tested-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> # sm8650
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
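For reference, the shape of the conversion at each call site (as the hunks below show) is roughly:

    /* Before: the caller decomposed the blk_crypto_key itself. */
    err = qcom_ice_program_key(ice,
                               QCOM_ICE_CRYPTO_ALG_AES_XTS,
                               QCOM_ICE_CRYPTO_KEY_SIZE_256,
                               key->raw,
                               key->crypto_cfg.data_unit_size / 512,
                               slot);

    /* After: the blk_crypto_key is the single source of truth. */
    err = qcom_ice_program_key(ice, slot, key);
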
 drivers/mmc/host/sdhci-msm.c | 11 +----------
 drivers/soc/qcom/ice.c       | 23 ++++++++++++-----------
 drivers/ufs/host/ufs-qcom.c  | 11 +----------
 include/soc/qcom/ice.h       | 22 +++-------------------
 4 files changed, 17 insertions(+), 50 deletions(-)

diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index 4610f067faca..90d2071b4f10 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -1910,20 +1910,11 @@ static int sdhci_msm_ice_keyslot_program(struct blk_crypto_profile *profile,
 					 unsigned int slot)
 {
 	struct sdhci_msm_host *msm_host =
 		sdhci_msm_host_from_crypto_profile(profile);
 
-	/* Only AES-256-XTS has been tested so far. */
-	if (key->crypto_cfg.crypto_mode != BLK_ENCRYPTION_MODE_AES_256_XTS)
-		return -EOPNOTSUPP;
-
-	return qcom_ice_program_key(msm_host->ice,
-				    QCOM_ICE_CRYPTO_ALG_AES_XTS,
-				    QCOM_ICE_CRYPTO_KEY_SIZE_256,
-				    key->raw,
-				    key->crypto_cfg.data_unit_size / 512,
-				    slot);
+	return qcom_ice_program_key(msm_host->ice, slot, key);
 }
 
 static int sdhci_msm_ice_keyslot_evict(struct blk_crypto_profile *profile,
 				       const struct blk_crypto_key *key,
 				       unsigned int slot)
diff --git a/drivers/soc/qcom/ice.c b/drivers/soc/qcom/ice.c
index 393d2d1d275f..04d5884574c5 100644
--- a/drivers/soc/qcom/ice.c
+++ b/drivers/soc/qcom/ice.c
@@ -159,41 +159,42 @@ int qcom_ice_suspend(struct qcom_ice *ice)
 
 	return 0;
 }
 EXPORT_SYMBOL_GPL(qcom_ice_suspend);
 
-int qcom_ice_program_key(struct qcom_ice *ice,
-			 u8 algorithm_id, u8 key_size,
-			 const u8 crypto_key[], u8 data_unit_size,
-			 int slot)
+int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot,
+			 const struct blk_crypto_key *blk_key)
 {
 	struct device *dev = ice->dev;
 	union {
 		u8 bytes[AES_256_XTS_KEY_SIZE];
 		u32 words[AES_256_XTS_KEY_SIZE / sizeof(u32)];
 	} key;
 	int i;
 	int err;
 
 	/* Only AES-256-XTS has been tested so far. */
-	if (algorithm_id != QCOM_ICE_CRYPTO_ALG_AES_XTS ||
-	    key_size != QCOM_ICE_CRYPTO_KEY_SIZE_256) {
-		dev_err_ratelimited(dev,
-				    "Unhandled crypto capability; algorithm_id=%d, key_size=%d\n",
-				    algorithm_id, key_size);
+	if (blk_key->crypto_cfg.crypto_mode !=
+	    BLK_ENCRYPTION_MODE_AES_256_XTS) {
+		dev_err_ratelimited(dev, "Unsupported crypto mode: %d\n",
+				    blk_key->crypto_cfg.crypto_mode);
 		return -EINVAL;
 	}
 
-	memcpy(key.bytes, crypto_key, AES_256_XTS_KEY_SIZE);
+	if (blk_key->size != AES_256_XTS_KEY_SIZE) {
+		dev_err_ratelimited(dev, "Incorrect key size\n");
+		return -EINVAL;
+	}
+	memcpy(key.bytes, blk_key->raw, AES_256_XTS_KEY_SIZE);
 
 	/* The SCM call requires that the key words are encoded in big endian */
 	for (i = 0; i < ARRAY_SIZE(key.words); i++)
 		__cpu_to_be32s(&key.words[i]);
 
 	err = qcom_scm_ice_set_key(slot, key.bytes, AES_256_XTS_KEY_SIZE,
 				   QCOM_SCM_ICE_CIPHER_AES_256_XTS,
-				   data_unit_size);
+				   blk_key->crypto_cfg.data_unit_size / 512);
 
 	memzero_explicit(&key, sizeof(key));
 
 	return err;
 }
diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index de37d5933ca9..40cc9438c208 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -192,21 +192,12 @@ static int ufs_qcom_ice_keyslot_program(struct blk_crypto_profile *profile,
 {
 	struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
 	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
 	int err;
 
-	/* Only AES-256-XTS has been tested so far. */
-	if (key->crypto_cfg.crypto_mode != BLK_ENCRYPTION_MODE_AES_256_XTS)
-		return -EOPNOTSUPP;
-
 	ufshcd_hold(hba);
-	err = qcom_ice_program_key(host->ice,
-				   QCOM_ICE_CRYPTO_ALG_AES_XTS,
-				   QCOM_ICE_CRYPTO_KEY_SIZE_256,
-				   key->raw,
-				   key->crypto_cfg.data_unit_size / 512,
-				   slot);
+	err = qcom_ice_program_key(host->ice, slot, key);
 	ufshcd_release(hba);
 	return err;
 }
 
 static int ufs_qcom_ice_keyslot_evict(struct blk_crypto_profile *profile,
diff --git a/include/soc/qcom/ice.h b/include/soc/qcom/ice.h
index 5870a94599a2..4cecc7f088b4 100644
--- a/include/soc/qcom/ice.h
+++ b/include/soc/qcom/ice.h
@@ -4,34 +4,18 @@
  */
 
 #ifndef __QCOM_ICE_H__
 #define __QCOM_ICE_H__
 
+#include <linux/blk-crypto.h>
 #include <linux/types.h>
 
 struct qcom_ice;
 
-enum qcom_ice_crypto_key_size {
-	QCOM_ICE_CRYPTO_KEY_SIZE_INVALID	= 0x0,
-	QCOM_ICE_CRYPTO_KEY_SIZE_128		= 0x1,
-	QCOM_ICE_CRYPTO_KEY_SIZE_192		= 0x2,
-	QCOM_ICE_CRYPTO_KEY_SIZE_256		= 0x3,
-	QCOM_ICE_CRYPTO_KEY_SIZE_512		= 0x4,
-};
-
-enum qcom_ice_crypto_alg {
-	QCOM_ICE_CRYPTO_ALG_AES_XTS		= 0x0,
-	QCOM_ICE_CRYPTO_ALG_BITLOCKER_AES_CBC	= 0x1,
-	QCOM_ICE_CRYPTO_ALG_AES_ECB		= 0x2,
-	QCOM_ICE_CRYPTO_ALG_ESSIV_AES_CBC	= 0x3,
-};
-
 int qcom_ice_enable(struct qcom_ice *ice);
 int qcom_ice_resume(struct qcom_ice *ice);
 int qcom_ice_suspend(struct qcom_ice *ice);
-int qcom_ice_program_key(struct qcom_ice *ice,
-			 u8 algorithm_id, u8 key_size,
-			 const u8 crypto_key[], u8 data_unit_size,
-			 int slot);
+int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot,
+			 const struct blk_crypto_key *blk_key);
 int qcom_ice_evict_key(struct qcom_ice *ice, int slot);
 struct qcom_ice *of_qcom_ice_get(struct device *dev);
 #endif /* __QCOM_ICE_H__ */
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v10 10/15] blk-crypto: add basic hardware-wrapped key support
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (8 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 09/15] soc: qcom: ice: make qcom_ice_program_key() take struct blk_crypto_key Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 11/15] blk-crypto: show supported key types in sysfs Eric Biggers
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, Bartosz Golaszewski

From: Eric Biggers <ebiggers@google.com>

To prevent keys from being compromised if an attacker acquires read
access to kernel memory, some inline encryption hardware can accept keys
which are wrapped by a per-boot hardware-internal key.  This avoids
needing to keep the raw keys in kernel memory, without limiting the
number of keys that can be used.  Such hardware also supports deriving a
"software secret" for cryptographic tasks that can't be handled by
inline encryption; this is needed for fscrypt to work properly.

To support this hardware, allow struct blk_crypto_key to represent a
hardware-wrapped key as an alternative to a raw key, and make drivers
set flags in struct blk_crypto_profile to indicate which types of keys
they support.  Also add the ->derive_sw_secret() low-level operation,
which drivers supporting wrapped keys must implement.

For more information, see the detailed documentation which this patch
adds to Documentation/block/inline-encryption.rst.

Tested-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> # sm8650
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
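As a rough sketch of the driver-facing side (not taken from any real driver;
my_hw_derive_sw_secret() is a hypothetical stand-in for the SoC-specific firmware call), a driver
that supports wrapped keys advertises the key type and wires up the new hook like this:

    #include <linux/blk-crypto-profile.h>

    static int my_derive_sw_secret(struct blk_crypto_profile *profile,
                                   const u8 *eph_key, size_t eph_key_size,
                                   u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
    {
        /* Hand the ephemerally-wrapped key to the hardware/firmware. */
        return my_hw_derive_sw_secret(eph_key, eph_key_size, sw_secret);
    }

    static void my_init_crypto_profile(struct blk_crypto_profile *profile)
    {
        profile->ll_ops.derive_sw_secret = my_derive_sw_secret;
        /* Accept both raw and hardware-wrapped keys. */
        profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW |
                                       BLK_CRYPTO_KEY_TYPE_HW_WRAPPED;
    }
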
 Documentation/block/inline-encryption.rst | 219 +++++++++++++++++++++-
 block/blk-crypto-fallback.c               |   7 +-
 block/blk-crypto-internal.h               |   1 +
 block/blk-crypto-profile.c                |  46 +++++
 block/blk-crypto.c                        |  53 ++++--
 drivers/md/dm-table.c                     |   1 +
 drivers/mmc/host/cqhci-crypto.c           |   8 +-
 drivers/mmc/host/sdhci-msm.c              |   1 +
 drivers/soc/qcom/ice.c                    |   2 +-
 drivers/ufs/core/ufshcd-crypto.c          |   7 +-
 drivers/ufs/host/ufs-exynos.c             |   3 +-
 drivers/ufs/host/ufs-qcom.c               |   1 +
 fs/crypto/inline_crypt.c                  |   4 +-
 include/linux/blk-crypto-profile.h        |  20 ++
 include/linux/blk-crypto.h                |  72 ++++++-
 15 files changed, 411 insertions(+), 34 deletions(-)

diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst
index 90b733422ed4..f03bd5b090d8 100644
--- a/Documentation/block/inline-encryption.rst
+++ b/Documentation/block/inline-encryption.rst
@@ -75,14 +75,14 @@ Constraints and notes
 
 Basic design
 ============
 
 We introduce ``struct blk_crypto_key`` to represent an inline encryption key and
-how it will be used.  This includes the actual bytes of the key; the size of the
-key; the algorithm and data unit size the key will be used with; and the number
-of bytes needed to represent the maximum data unit number the key will be used
-with.
+how it will be used.  This includes the type of the key (raw or
+hardware-wrapped); the actual bytes of the key; the size of the key; the
+algorithm and data unit size the key will be used with; and the number of bytes
+needed to represent the maximum data unit number the key will be used with.
 
 We introduce ``struct bio_crypt_ctx`` to represent an encryption context.  It
 contains a data unit number and a pointer to a blk_crypto_key.  We add pointers
 to a bio_crypt_ctx to ``struct bio`` and ``struct request``; this allows users
 of the block layer (e.g. filesystems) to provide an encryption context when
@@ -299,5 +299,216 @@ and disallow the combination for now. Whenever a device supports integrity, the
 kernel will pretend that the device does not support hardware inline encryption
 (by setting the blk_crypto_profile in the request_queue of the device to NULL).
 When the crypto API fallback is enabled, this means that all bios with an
 encryption context will use the fallback, and IO will complete as usual.  When
 the fallback is disabled, a bio with an encryption context will be failed.
+
+.. _hardware_wrapped_keys:
+
+Hardware-wrapped keys
+=====================
+
+Motivation and threat model
+---------------------------
+
+Linux storage encryption (dm-crypt, fscrypt, eCryptfs, etc.) traditionally
+relies on the raw encryption key(s) being present in kernel memory so that the
+encryption can be performed.  This traditionally isn't seen as a problem because
+the key(s) won't be present during an offline attack, which is the main type of
+attack that storage encryption is intended to protect from.
+
+However, there is an increasing desire to also protect users' data from other
+types of attacks (to the extent possible), including:
+
+- Cold boot attacks, where an attacker with physical access to a system suddenly
+  powers it off, then immediately dumps the system memory to extract recently
+  in-use encryption keys, then uses these keys to decrypt user data on-disk.
+
+- Online attacks where the attacker is able to read kernel memory without fully
+  compromising the system, followed by an offline attack where any extracted
+  keys can be used to decrypt user data on-disk.  An example of such an online
+  attack would be if the attacker is able to run some code on the system that
+  exploits a Meltdown-like vulnerability but is unable to escalate privileges.
+
+- Online attacks where the attacker fully compromises the system, but their data
+  exfiltration is significantly time-limited and/or bandwidth-limited, so in
+  order to completely exfiltrate the data they need to extract the encryption
+  keys to use in a later offline attack.
+
+Hardware-wrapped keys are a feature of inline encryption hardware that is
+designed to protect users' data from the above attacks (to the extent possible),
+without introducing limitations such as a maximum number of keys.
+
+Note that it is impossible to **fully** protect users' data from these attacks.
+Even in the attacks where the attacker "just" gets read access to kernel memory,
+they can still extract any user data that is present in memory, including
+plaintext pagecache pages of encrypted files.  The focus here is just on
+protecting the encryption keys, as those instantly give access to **all** user
+data in any following offline attack, rather than just some of it (where which
+data is included in that "some" might not be controlled by the attacker).
+
+Solution overview
+-----------------
+
+Inline encryption hardware typically has "keyslots" into which software can
+program keys for the hardware to use; the contents of keyslots typically can't
+be read back by software.  As such, the above security goals could be achieved
+if the kernel simply erased its copy of the key(s) after programming them into
+keyslot(s) and thereafter only referred to them via keyslot number.
+
+However, that naive approach runs into a couple problems:
+
+- It limits the number of unlocked keys to the number of keyslots, which
+  typically is a small number.  In cases where there is only one encryption key
+  system-wide (e.g., a full-disk encryption key), that can be tolerable.
+  However, in general there can be many logged-in users with many different
+  keys, and/or many running applications with application-specific encrypted
+  storage areas.  This is especially true if file-based encryption (e.g.
+  fscrypt) is being used.
+
+- Inline crypto engines typically lose the contents of their keyslots if the
+  storage controller (usually UFS or eMMC) is reset.  Resetting the storage
+  controller is a standard error recovery procedure that is executed if certain
+  types of storage errors occur, and such errors can occur at any time.
+  Therefore, when inline crypto is being used, the operating system must always
+  be ready to reprogram the keyslots without user intervention.
+
+Thus, it is important for the kernel to still have a way to "remind" the
+hardware about a key, without actually having the raw key itself.
+
+Somewhat less importantly, it is also desirable that the raw keys are never
+visible to software at all, even while being initially unlocked.  This would
+ensure that a read-only compromise of system memory will never allow a key to be
+extracted to be used off-system, even if it occurs when a key is being unlocked.
+
+To solve all these problems, some vendors of inline encryption hardware have
+made their hardware support *hardware-wrapped keys*.  Hardware-wrapped keys
+are encrypted keys that can only be unwrapped (decrypted) and used by hardware
+-- either by the inline encryption hardware itself, or by a dedicated hardware
+block that can directly provision keys to the inline encryption hardware.
+
+(We refer to them as "hardware-wrapped keys" rather than simply "wrapped keys"
+to add some clarity in cases where there could be other types of wrapped keys,
+such as in file-based encryption.  Key wrapping is a commonly used technique.)
+
+The key which wraps (encrypts) hardware-wrapped keys is a hardware-internal key
+that is never exposed to software; it is either a persistent key (a "long-term
+wrapping key") or a per-boot key (an "ephemeral wrapping key").  The long-term
+wrapped form of the key is what is initially unlocked, but it is erased from
+memory as soon as it is converted into an ephemerally-wrapped key.  In-use
+hardware-wrapped keys are always ephemerally-wrapped, not long-term wrapped.
+
+As inline encryption hardware can only be used to encrypt/decrypt data on-disk,
+the hardware also includes a level of indirection; it doesn't use the unwrapped
+key directly for inline encryption, but rather derives both an inline encryption
+key and a "software secret" from it.  Software can use the "software secret" for
+tasks that can't use the inline encryption hardware, such as filenames
+encryption.  The software secret is not protected from memory compromise.
+
+Key hierarchy
+-------------
+
+Here is the key hierarchy for a hardware-wrapped key::
+
+                       Hardware-wrapped key
+                                |
+                                |
+                          <Hardware KDF>
+                                |
+                  -----------------------------
+                  |                           |
+        Inline encryption key           Software secret
+
+The components are:
+
+- *Hardware-wrapped key*: a key for the hardware's KDF (Key Derivation
+  Function), in ephemerally-wrapped form.  The key wrapping algorithm is a
+  hardware implementation detail that doesn't impact kernel operation, but a
+  strong authenticated encryption algorithm such as AES-256-GCM is recommended.
+
+- *Hardware KDF*: a KDF (Key Derivation Function) which the hardware uses to
+  derive subkeys after unwrapping the wrapped key.  The hardware's choice of KDF
+  doesn't impact kernel operation, but it does need to be known for testing
+  purposes, and it's also assumed to have at least a 256-bit security strength.
+  All known hardware uses the SP800-108 KDF in Counter Mode with AES-256-CMAC,
+  with a particular choice of labels and contexts; new hardware should use this
+  already-vetted KDF.
+
+- *Inline encryption key*: a derived key which the hardware directly provisions
+  to a keyslot of the inline encryption hardware, without exposing it to
+  software.  In all known hardware, this will always be an AES-256-XTS key.
+  However, in principle other encryption algorithms could be supported too.
+  Hardware must derive distinct subkeys for each supported encryption algorithm.
+
+- *Software secret*: a derived key which the hardware returns to software so
+  that software can use it for cryptographic tasks that can't use inline
+  encryption.  This value is cryptographically isolated from the inline
+  encryption key, i.e. knowing one doesn't reveal the other.  (The KDF ensures
+  this.)  Currently, the software secret is always 32 bytes and thus is suitable
+  for cryptographic applications that require up to a 256-bit security strength.
+  Some use cases (e.g. full-disk encryption) won't require the software secret.
+
+Example: in the case of fscrypt, the fscrypt master key (the key that protects a
+particular set of encrypted directories) is made hardware-wrapped.  The inline
+encryption key is used as the file contents encryption key, while the software
+secret (rather than the master key directly) is used to key fscrypt's KDF
+(HKDF-SHA512) to derive other subkeys such as filenames encryption keys.
+
+Note that currently this design assumes a single inline encryption key per
+hardware-wrapped key, without any further key derivation.  Thus, in the case of
+fscrypt, currently hardware-wrapped keys are only compatible with the "inline
+encryption optimized" settings, which use one file contents encryption key per
+encryption policy rather than one per file.  This design could be extended to
+make the hardware derive per-file keys using per-file nonces passed down the
+storage stack, and in fact some hardware already supports this; future work is
+planned to remove this limitation by adding the corresponding kernel support.
+
+Kernel support
+--------------
+
+The inline encryption support of the kernel's block layer ("blk-crypto") has
+been extended to support hardware-wrapped keys as an alternative to raw keys,
+when hardware support is available.  This works in the following way:
+
+- A ``key_types_supported`` field is added to the crypto capabilities in
+  ``struct blk_crypto_profile``.  This allows device drivers to declare that
+  they support raw keys, hardware-wrapped keys, or both.
+
+- ``struct blk_crypto_key`` can now contain a hardware-wrapped key as an
+  alternative to a raw key; a ``key_type`` field is added to
+  ``struct blk_crypto_config`` to distinguish between the different key types.
+  This allows users of blk-crypto to en/decrypt data using a hardware-wrapped
+  key in a way very similar to using a raw key.
+
+- A new method ``blk_crypto_ll_ops::derive_sw_secret`` is added.  Device drivers
+  that support hardware-wrapped keys must implement this method.  Users of
+  blk-crypto can call ``blk_crypto_derive_sw_secret()`` to access this method.
+
+- The programming and eviction of hardware-wrapped keys happens via
+  ``blk_crypto_ll_ops::keyslot_program`` and
+  ``blk_crypto_ll_ops::keyslot_evict``, just like it does for raw keys.  If a
+  driver supports hardware-wrapped keys, then it must handle hardware-wrapped
+  keys being passed to these methods.
+
+blk-crypto-fallback doesn't support hardware-wrapped keys.  Therefore,
+hardware-wrapped keys can only be used with actual inline encryption hardware.
+
+Testability
+-----------
+
+Both the hardware KDF and the inline encryption itself are well-defined
+algorithms that don't depend on any secrets other than the unwrapped key.
+Therefore, if the unwrapped key is known to software, these algorithms can be
+reproduced in software in order to verify the ciphertext that is written to disk
+by the inline encryption hardware.
+
+However, the unwrapped key will only be known to software for testing if the
+"import" functionality is used.  Proper testing is not possible in the
+"generate" case where the hardware generates the key itself.  The correct
+operation of the "generate" mode thus relies on the security and correctness of
+the hardware RNG and its use to generate the key, as well as the testing of the
+"import" mode as that should cover all parts other than the key generation.
+
+For an example of a test that verifies the ciphertext written to disk in the
+"import" mode, see the fscrypt hardware-wrapped key tests in xfstests, or
+`Android's vts_kernel_encryption_test
+<https://android.googlesource.com/platform/test/vts-testcase/kernel/+/refs/heads/main/encryption/>`_.
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 29a205482617..f154be0b575a 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -85,11 +85,11 @@ static struct bio_set crypto_bio_split;
 
 /*
  * This is the key we set when evicting a keyslot. This *should* be the all 0's
  * key, but AES-XTS rejects that key, so we use some random bytes instead.
  */
-static u8 blank_key[BLK_CRYPTO_MAX_KEY_SIZE];
+static u8 blank_key[BLK_CRYPTO_MAX_RAW_KEY_SIZE];
 
 static void blk_crypto_fallback_evict_keyslot(unsigned int slot)
 {
 	struct blk_crypto_fallback_keyslot *slotp = &blk_crypto_keyslots[slot];
 	enum blk_crypto_mode_num crypto_mode = slotp->crypto_mode;
@@ -117,11 +117,11 @@ blk_crypto_fallback_keyslot_program(struct blk_crypto_profile *profile,
 	if (crypto_mode != slotp->crypto_mode &&
 	    slotp->crypto_mode != BLK_ENCRYPTION_MODE_INVALID)
 		blk_crypto_fallback_evict_keyslot(slot);
 
 	slotp->crypto_mode = crypto_mode;
-	err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key->raw,
+	err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key->bytes,
 				     key->size);
 	if (err) {
 		blk_crypto_fallback_evict_keyslot(slot);
 		return err;
 	}
@@ -537,11 +537,11 @@ static int blk_crypto_fallback_init(void)
 	int err;
 
 	if (blk_crypto_fallback_inited)
 		return 0;
 
-	get_random_bytes(blank_key, BLK_CRYPTO_MAX_KEY_SIZE);
+	get_random_bytes(blank_key, sizeof(blank_key));
 
 	err = bioset_init(&crypto_bio_split, 64, 0, 0);
 	if (err)
 		goto out;
 
@@ -559,10 +559,11 @@ static int blk_crypto_fallback_init(void)
 		goto fail_free_profile;
 	err = -ENOMEM;
 
 	blk_crypto_fallback_profile->ll_ops = blk_crypto_fallback_ll_ops;
 	blk_crypto_fallback_profile->max_dun_bytes_supported = BLK_CRYPTO_MAX_IV_SIZE;
+	blk_crypto_fallback_profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW;
 
 	/* All blk-crypto modes have a crypto API fallback. */
 	for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++)
 		blk_crypto_fallback_profile->modes_supported[i] = 0xFFFFFFFF;
 	blk_crypto_fallback_profile->modes_supported[BLK_ENCRYPTION_MODE_INVALID] = 0;
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index 93a141979694..1893df9a8f06 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -12,10 +12,11 @@
 /* Represents a crypto mode supported by blk-crypto  */
 struct blk_crypto_mode {
 	const char *name; /* name of this mode, shown in sysfs */
 	const char *cipher_str; /* crypto API name (for fallback case) */
 	unsigned int keysize; /* key size in bytes */
+	unsigned int security_strength; /* security strength in bytes */
 	unsigned int ivsize; /* iv size in bytes */
 };
 
 extern const struct blk_crypto_mode blk_crypto_modes[];
 
diff --git a/block/blk-crypto-profile.c b/block/blk-crypto-profile.c
index 7fabc883e39f..1b92276ed2fc 100644
--- a/block/blk-crypto-profile.c
+++ b/block/blk-crypto-profile.c
@@ -350,10 +350,12 @@ bool __blk_crypto_cfg_supported(struct blk_crypto_profile *profile,
 		return false;
 	if (!(profile->modes_supported[cfg->crypto_mode] & cfg->data_unit_size))
 		return false;
 	if (profile->max_dun_bytes_supported < cfg->dun_bytes)
 		return false;
+	if (!(profile->key_types_supported & cfg->key_type))
+		return false;
 	return true;
 }
 
 /*
  * This is an internal function that evicts a key from an inline encryption
@@ -460,10 +462,48 @@ bool blk_crypto_register(struct blk_crypto_profile *profile,
 	q->crypto_profile = profile;
 	return true;
 }
 EXPORT_SYMBOL_GPL(blk_crypto_register);
 
+/**
+ * blk_crypto_derive_sw_secret() - Derive software secret from wrapped key
+ * @bdev: a block device that supports hardware-wrapped keys
+ * @eph_key: the hardware-wrapped key in ephemerally-wrapped form
+ * @eph_key_size: size of @eph_key in bytes
+ * @sw_secret: (output) the software secret
+ *
+ * Given a hardware-wrapped key in ephemerally-wrapped form (the same form that
+ * it is used for I/O), ask the hardware to derive the secret which software can
+ * use for cryptographic tasks other than inline encryption.  This secret is
+ * guaranteed to be cryptographically isolated from the inline encryption key,
+ * i.e. derived with a different KDF context.
+ *
+ * Return: 0 on success, -EOPNOTSUPP if the block device doesn't support
+ *	   hardware-wrapped keys, -EBADMSG if the key isn't a valid
+ *	   hardware-wrapped key, or another -errno code.
+ */
+int blk_crypto_derive_sw_secret(struct block_device *bdev,
+				const u8 *eph_key, size_t eph_key_size,
+				u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
+{
+	struct blk_crypto_profile *profile =
+		bdev_get_queue(bdev)->crypto_profile;
+	int err;
+
+	if (!profile)
+		return -EOPNOTSUPP;
+	if (!(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_HW_WRAPPED))
+		return -EOPNOTSUPP;
+	if (!profile->ll_ops.derive_sw_secret)
+		return -EOPNOTSUPP;
+	blk_crypto_hw_enter(profile);
+	err = profile->ll_ops.derive_sw_secret(profile, eph_key, eph_key_size,
+					       sw_secret);
+	blk_crypto_hw_exit(profile);
+	return err;
+}
+
 /**
  * blk_crypto_intersect_capabilities() - restrict supported crypto capabilities
  *					 by child device
  * @parent: the crypto profile for the parent device
  * @child: the crypto profile for the child device, or NULL
@@ -483,14 +523,16 @@ void blk_crypto_intersect_capabilities(struct blk_crypto_profile *parent,
 		parent->max_dun_bytes_supported =
 			min(parent->max_dun_bytes_supported,
 			    child->max_dun_bytes_supported);
 		for (i = 0; i < ARRAY_SIZE(child->modes_supported); i++)
 			parent->modes_supported[i] &= child->modes_supported[i];
+		parent->key_types_supported &= child->key_types_supported;
 	} else {
 		parent->max_dun_bytes_supported = 0;
 		memset(parent->modes_supported, 0,
 		       sizeof(parent->modes_supported));
+		parent->key_types_supported = 0;
 	}
 }
 EXPORT_SYMBOL_GPL(blk_crypto_intersect_capabilities);
 
 /**
@@ -519,10 +561,13 @@ bool blk_crypto_has_capabilities(const struct blk_crypto_profile *target,
 
 	if (reference->max_dun_bytes_supported >
 	    target->max_dun_bytes_supported)
 		return false;
 
+	if (reference->key_types_supported & ~target->key_types_supported)
+		return false;
+
 	return true;
 }
 EXPORT_SYMBOL_GPL(blk_crypto_has_capabilities);
 
 /**
@@ -553,7 +598,8 @@ void blk_crypto_update_capabilities(struct blk_crypto_profile *dst,
 {
 	memcpy(dst->modes_supported, src->modes_supported,
 	       sizeof(dst->modes_supported));
 
 	dst->max_dun_bytes_supported = src->max_dun_bytes_supported;
+	dst->key_types_supported = src->key_types_supported;
 }
 EXPORT_SYMBOL_GPL(blk_crypto_update_capabilities);
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 4d760b092deb..b55b3d8bffa0 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -21,28 +21,32 @@
 const struct blk_crypto_mode blk_crypto_modes[] = {
 	[BLK_ENCRYPTION_MODE_AES_256_XTS] = {
 		.name = "AES-256-XTS",
 		.cipher_str = "xts(aes)",
 		.keysize = 64,
+		.security_strength = 32,
 		.ivsize = 16,
 	},
 	[BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV] = {
 		.name = "AES-128-CBC-ESSIV",
 		.cipher_str = "essiv(cbc(aes),sha256)",
 		.keysize = 16,
+		.security_strength = 16,
 		.ivsize = 16,
 	},
 	[BLK_ENCRYPTION_MODE_ADIANTUM] = {
 		.name = "Adiantum",
 		.cipher_str = "adiantum(xchacha12,aes)",
 		.keysize = 32,
+		.security_strength = 32,
 		.ivsize = 32,
 	},
 	[BLK_ENCRYPTION_MODE_SM4_XTS] = {
 		.name = "SM4-XTS",
 		.cipher_str = "xts(sm4)",
 		.keysize = 32,
+		.security_strength = 16,
 		.ivsize = 16,
 	},
 };
 
 /*
@@ -74,13 +78,19 @@ static int __init bio_crypt_ctx_init(void)
 		goto out_no_mem;
 
 	/* This is assumed in various places. */
 	BUILD_BUG_ON(BLK_ENCRYPTION_MODE_INVALID != 0);
 
-	/* Sanity check that no algorithm exceeds the defined limits. */
+	/*
+	 * Validate the crypto mode properties.  This ideally would be done with
+	 * static assertions, but boot-time checks are the next best thing.
+	 */
 	for (i = 0; i < BLK_ENCRYPTION_MODE_MAX; i++) {
-		BUG_ON(blk_crypto_modes[i].keysize > BLK_CRYPTO_MAX_KEY_SIZE);
+		BUG_ON(blk_crypto_modes[i].keysize >
+		       BLK_CRYPTO_MAX_RAW_KEY_SIZE);
+		BUG_ON(blk_crypto_modes[i].security_strength >
+		       blk_crypto_modes[i].keysize);
 		BUG_ON(blk_crypto_modes[i].ivsize > BLK_CRYPTO_MAX_IV_SIZE);
 	}
 
 	return 0;
 out_no_mem:
@@ -313,21 +323,24 @@ int __blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
 }
 
 /**
  * blk_crypto_init_key() - Prepare a key for use with blk-crypto
  * @blk_key: Pointer to the blk_crypto_key to initialize.
- * @raw_key: Pointer to the raw key. Must be the correct length for the chosen
- *	     @crypto_mode; see blk_crypto_modes[].
+ * @key_bytes: the bytes of the key
+ * @key_size: size of the key in bytes
+ * @key_type: type of the key -- either raw or hardware-wrapped
  * @crypto_mode: identifier for the encryption algorithm to use
  * @dun_bytes: number of bytes that will be used to specify the DUN when this
  *	       key is used
  * @data_unit_size: the data unit size to use for en/decryption
  *
  * Return: 0 on success, -errno on failure.  The caller is responsible for
- *	   zeroizing both blk_key and raw_key when done with them.
+ *	   zeroizing both blk_key and key_bytes when done with them.
  */
-int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
+int blk_crypto_init_key(struct blk_crypto_key *blk_key,
+			const u8 *key_bytes, size_t key_size,
+			enum blk_crypto_key_type key_type,
 			enum blk_crypto_mode_num crypto_mode,
 			unsigned int dun_bytes,
 			unsigned int data_unit_size)
 {
 	const struct blk_crypto_mode *mode;
@@ -336,25 +349,37 @@ int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
 
 	if (crypto_mode >= ARRAY_SIZE(blk_crypto_modes))
 		return -EINVAL;
 
 	mode = &blk_crypto_modes[crypto_mode];
-	if (mode->keysize == 0)
+	switch (key_type) {
+	case BLK_CRYPTO_KEY_TYPE_RAW:
+		if (key_size != mode->keysize)
+			return -EINVAL;
+		break;
+	case BLK_CRYPTO_KEY_TYPE_HW_WRAPPED:
+		if (key_size < mode->security_strength ||
+		    key_size > BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE)
+			return -EINVAL;
+		break;
+	default:
 		return -EINVAL;
+	}
 
 	if (dun_bytes == 0 || dun_bytes > mode->ivsize)
 		return -EINVAL;
 
 	if (!is_power_of_2(data_unit_size))
 		return -EINVAL;
 
 	blk_key->crypto_cfg.crypto_mode = crypto_mode;
 	blk_key->crypto_cfg.dun_bytes = dun_bytes;
 	blk_key->crypto_cfg.data_unit_size = data_unit_size;
+	blk_key->crypto_cfg.key_type = key_type;
 	blk_key->data_unit_size_bits = ilog2(data_unit_size);
-	blk_key->size = mode->keysize;
-	memcpy(blk_key->raw, raw_key, mode->keysize);
+	blk_key->size = key_size;
+	memcpy(blk_key->bytes, key_bytes, key_size);
 
 	return 0;
 }
 
 bool blk_crypto_config_supported_natively(struct block_device *bdev,
@@ -370,12 +395,14 @@ bool blk_crypto_config_supported_natively(struct block_device *bdev,
  * blk-crypto-fallback is enabled and supports the cfg).
  */
 bool blk_crypto_config_supported(struct block_device *bdev,
 				 const struct blk_crypto_config *cfg)
 {
-	return IS_ENABLED(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) ||
-	       blk_crypto_config_supported_natively(bdev, cfg);
+	if (IS_ENABLED(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK) &&
+	    cfg->key_type == BLK_CRYPTO_KEY_TYPE_RAW)
+		return true;
+	return blk_crypto_config_supported_natively(bdev, cfg);
 }
 
 /**
  * blk_crypto_start_using_key() - Start using a blk_crypto_key on a device
  * @bdev: block device to operate on
@@ -394,10 +421,14 @@ bool blk_crypto_config_supported(struct block_device *bdev,
 int blk_crypto_start_using_key(struct block_device *bdev,
 			       const struct blk_crypto_key *key)
 {
 	if (blk_crypto_config_supported_natively(bdev, &key->crypto_cfg))
 		return 0;
+	if (key->crypto_cfg.key_type != BLK_CRYPTO_KEY_TYPE_RAW) {
+		pr_warn_once("tried to use wrapped key, but hardware doesn't support it\n");
+		return -EOPNOTSUPP;
+	}
 	return blk_crypto_fallback_start_using_mode(key->crypto_cfg.crypto_mode);
 }
 
 /**
  * blk_crypto_evict_key() - Evict a blk_crypto_key from a block_device
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index bd8b796ae683..3e2ab66a46ae 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1248,10 +1248,11 @@ static int dm_table_construct_crypto_profile(struct dm_table *t)
 	blk_crypto_profile_init(profile, 0);
 	profile->ll_ops.keyslot_evict = dm_keyslot_evict;
 	profile->max_dun_bytes_supported = UINT_MAX;
 	memset(profile->modes_supported, 0xFF,
 	       sizeof(profile->modes_supported));
+	profile->key_types_supported = ~0;
 
 	for (i = 0; i < t->num_targets; i++) {
 		struct dm_target *ti = dm_table_get_target(t, i);
 
 		if (!dm_target_passes_crypto(ti->type)) {
diff --git a/drivers/mmc/host/cqhci-crypto.c b/drivers/mmc/host/cqhci-crypto.c
index cb8044093402..5a467098a0d6 100644
--- a/drivers/mmc/host/cqhci-crypto.c
+++ b/drivers/mmc/host/cqhci-crypto.c
@@ -82,15 +82,15 @@ static int cqhci_crypto_keyslot_program(struct blk_crypto_profile *profile,
 	cfg.crypto_cap_idx = cap_idx;
 	cfg.config_enable = CQHCI_CRYPTO_CONFIGURATION_ENABLE;
 
 	if (ccap_array[cap_idx].algorithm_id == CQHCI_CRYPTO_ALG_AES_XTS) {
 		/* In XTS mode, the blk_crypto_key's size is already doubled */
-		memcpy(cfg.crypto_key, key->raw, key->size/2);
+		memcpy(cfg.crypto_key, key->bytes, key->size/2);
 		memcpy(cfg.crypto_key + CQHCI_CRYPTO_KEY_MAX_SIZE/2,
-		       key->raw + key->size/2, key->size/2);
+		       key->bytes + key->size/2, key->size/2);
 	} else {
-		memcpy(cfg.crypto_key, key->raw, key->size);
+		memcpy(cfg.crypto_key, key->bytes, key->size);
 	}
 
 	cqhci_crypto_program_key(cq_host, &cfg, slot);
 
 	memzero_explicit(&cfg, sizeof(cfg));
@@ -202,10 +202,12 @@ int cqhci_crypto_init(struct cqhci_host *cq_host)
 	profile->dev = dev;
 
 	/* Unfortunately, CQHCI crypto only supports 32 DUN bits. */
 	profile->max_dun_bytes_supported = 4;
 
+	profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW;
+
 	/*
 	 * Cache all the crypto capabilities and advertise the supported crypto
 	 * modes and data unit sizes to the block layer.
 	 */
 	for (cap_idx = 0; cap_idx < cq_host->crypto_capabilities.num_crypto_cap;
diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index 90d2071b4f10..439e5907f940 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -1844,10 +1844,11 @@ static int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
 	if (err)
 		return err;
 
 	profile->ll_ops = sdhci_msm_crypto_ops;
 	profile->max_dun_bytes_supported = 4;
+	profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW;
 	profile->dev = dev;
 
 	/*
 	 * Currently this driver only supports AES-256-XTS.  All known versions
 	 * of ICE support it, but to be safe make sure it is really declared in
diff --git a/drivers/soc/qcom/ice.c b/drivers/soc/qcom/ice.c
index 04d5884574c5..78780fd508f0 100644
--- a/drivers/soc/qcom/ice.c
+++ b/drivers/soc/qcom/ice.c
@@ -182,11 +182,11 @@ int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot,
 
 	if (blk_key->size != AES_256_XTS_KEY_SIZE) {
 		dev_err_ratelimited(dev, "Incorrect key size\n");
 		return -EINVAL;
 	}
-	memcpy(key.bytes, blk_key->raw, AES_256_XTS_KEY_SIZE);
+	memcpy(key.bytes, blk_key->bytes, AES_256_XTS_KEY_SIZE);
 
 	/* The SCM call requires that the key words are encoded in big endian */
 	for (i = 0; i < ARRAY_SIZE(key.words); i++)
 		__cpu_to_be32s(&key.words[i]);
 
diff --git a/drivers/ufs/core/ufshcd-crypto.c b/drivers/ufs/core/ufshcd-crypto.c
index 694ff7578fc1..9e63a9d3cb7e 100644
--- a/drivers/ufs/core/ufshcd-crypto.c
+++ b/drivers/ufs/core/ufshcd-crypto.c
@@ -70,15 +70,15 @@ static int ufshcd_crypto_keyslot_program(struct blk_crypto_profile *profile,
 	cfg.crypto_cap_idx = cap_idx;
 	cfg.config_enable = UFS_CRYPTO_CONFIGURATION_ENABLE;
 
 	if (ccap_array[cap_idx].algorithm_id == UFS_CRYPTO_ALG_AES_XTS) {
 		/* In XTS mode, the blk_crypto_key's size is already doubled */
-		memcpy(cfg.crypto_key, key->raw, key->size/2);
+		memcpy(cfg.crypto_key, key->bytes, key->size/2);
 		memcpy(cfg.crypto_key + UFS_CRYPTO_KEY_MAX_SIZE/2,
-		       key->raw + key->size/2, key->size/2);
+		       key->bytes + key->size/2, key->size/2);
 	} else {
-		memcpy(cfg.crypto_key, key->raw, key->size);
+		memcpy(cfg.crypto_key, key->bytes, key->size);
 	}
 
 	ufshcd_program_key(hba, &cfg, slot);
 
 	memzero_explicit(&cfg, sizeof(cfg));
@@ -183,10 +183,11 @@ int ufshcd_hba_init_crypto_capabilities(struct ufs_hba *hba)
 		goto out;
 
 	hba->crypto_profile.ll_ops = ufshcd_crypto_ops;
 	/* UFS only supports 8 bytes for any DUN */
 	hba->crypto_profile.max_dun_bytes_supported = 8;
+	hba->crypto_profile.key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW;
 	hba->crypto_profile.dev = hba->dev;
 
 	/*
 	 * Cache all the UFS crypto capabilities and advertise the supported
 	 * crypto modes and data unit sizes to the block layer.
diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
index 13dd5dfc03eb..6a415d9bdc85 100644
--- a/drivers/ufs/host/ufs-exynos.c
+++ b/drivers/ufs/host/ufs-exynos.c
@@ -1318,10 +1318,11 @@ static void exynos_ufs_fmp_init(struct ufs_hba *hba, struct exynos_ufs *ufs)
 		dev_err(hba->dev, "Failed to initialize crypto profile: %d\n",
 			err);
 		return;
 	}
 	profile->max_dun_bytes_supported = AES_BLOCK_SIZE;
+	profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW;
 	profile->dev = hba->dev;
 	profile->modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] =
 		DATA_UNIT_SIZE;
 
 	/* Advertise crypto support to ufshcd-core. */
@@ -1364,11 +1365,11 @@ static inline __be64 fmp_key_word(const u8 *key, int j)
 static int exynos_ufs_fmp_fill_prdt(struct ufs_hba *hba,
 				    const struct bio_crypt_ctx *crypt_ctx,
 				    void *prdt, unsigned int num_segments)
 {
 	struct fmp_sg_entry *fmp_prdt = prdt;
-	const u8 *enckey = crypt_ctx->bc_key->raw;
+	const u8 *enckey = crypt_ctx->bc_key->bytes;
 	const u8 *twkey = enckey + AES_KEYSIZE_256;
 	u64 dun_lo = crypt_ctx->bc_dun[0];
 	u64 dun_hi = crypt_ctx->bc_dun[1];
 	unsigned int i;
 
diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index 40cc9438c208..4adf017b523d 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -145,10 +145,11 @@ static int ufs_qcom_ice_init(struct ufs_qcom_host *host)
 	if (err)
 		return err;
 
 	profile->ll_ops = ufs_qcom_crypto_ops;
 	profile->max_dun_bytes_supported = 8;
+	profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW;
 	profile->dev = dev;
 
 	/*
 	 * Currently this driver only supports AES-256-XTS.  All known versions
 	 * of ICE support it, but to be safe make sure it is really declared in
diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c
index 40de69860dcf..7fa53d30aec3 100644
--- a/fs/crypto/inline_crypt.c
+++ b/fs/crypto/inline_crypt.c
@@ -128,10 +128,11 @@ int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci)
 	 * crypto configuration that the file would use.
 	 */
 	crypto_cfg.crypto_mode = ci->ci_mode->blk_crypto_mode;
 	crypto_cfg.data_unit_size = 1U << ci->ci_data_unit_bits;
 	crypto_cfg.dun_bytes = fscrypt_get_dun_bytes(ci);
+	crypto_cfg.key_type = BLK_CRYPTO_KEY_TYPE_RAW;
 
 	devs = fscrypt_get_devices(sb, &num_devs);
 	if (IS_ERR(devs))
 		return PTR_ERR(devs);
 
@@ -164,11 +165,12 @@ int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
 
 	blk_key = kmalloc(sizeof(*blk_key), GFP_KERNEL);
 	if (!blk_key)
 		return -ENOMEM;
 
-	err = blk_crypto_init_key(blk_key, raw_key, crypto_mode,
+	err = blk_crypto_init_key(blk_key, raw_key, ci->ci_mode->keysize,
+				  BLK_CRYPTO_KEY_TYPE_RAW, crypto_mode,
 				  fscrypt_get_dun_bytes(ci),
 				  1U << ci->ci_data_unit_bits);
 	if (err) {
 		fscrypt_err(inode, "error %d initializing blk-crypto key", err);
 		goto fail;
diff --git a/include/linux/blk-crypto-profile.h b/include/linux/blk-crypto-profile.h
index 90ab33cb5d0e..7764b4f7b45b 100644
--- a/include/linux/blk-crypto-profile.h
+++ b/include/linux/blk-crypto-profile.h
@@ -55,10 +55,24 @@ struct blk_crypto_ll_ops {
 	 * Must return 0 on success, or -errno on failure.
 	 */
 	int (*keyslot_evict)(struct blk_crypto_profile *profile,
 			     const struct blk_crypto_key *key,
 			     unsigned int slot);
+
+	/**
+	 * @derive_sw_secret: Derive the software secret from a hardware-wrapped
+	 *		      key in ephemerally-wrapped form.
+	 *
+	 * This only needs to be implemented if BLK_CRYPTO_KEY_TYPE_HW_WRAPPED
+	 * is supported.
+	 *
+	 * Must return 0 on success, -EBADMSG if the key is invalid, or another
+	 * -errno code on other errors.
+	 */
+	int (*derive_sw_secret)(struct blk_crypto_profile *profile,
+				const u8 *eph_key, size_t eph_key_size,
+				u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]);
 };
 
 /**
  * struct blk_crypto_profile - inline encryption profile for a device
  *
@@ -82,10 +96,16 @@ struct blk_crypto_profile {
 	 * specifying the data unit number (DUN).  Specifically, the range of
 	 * supported DUNs is 0 through (1 << (8 * max_dun_bytes_supported)) - 1.
 	 */
 	unsigned int max_dun_bytes_supported;
 
+	/**
+	 * @key_types_supported: A bitmask of the supported key types:
+	 * BLK_CRYPTO_KEY_TYPE_RAW and/or BLK_CRYPTO_KEY_TYPE_HW_WRAPPED.
+	 */
+	unsigned int key_types_supported;
+
 	/**
 	 * @modes_supported: Array of bitmasks that specifies whether each
 	 * combination of crypto mode and data unit size is supported.
 	 * Specifically, the i'th bit of modes_supported[crypto_mode] is set if
 	 * crypto_mode can be used with a data unit size of (1 << i).  Note that
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
index 5e5822c18ee4..0e63287e2175 100644
--- a/include/linux/blk-crypto.h
+++ b/include/linux/blk-crypto.h
@@ -4,10 +4,11 @@
  */
 
 #ifndef __LINUX_BLK_CRYPTO_H
 #define __LINUX_BLK_CRYPTO_H
 
+#include <linux/minmax.h>
 #include <linux/types.h>
 
 enum blk_crypto_mode_num {
 	BLK_ENCRYPTION_MODE_INVALID,
 	BLK_ENCRYPTION_MODE_AES_256_XTS,
@@ -15,43 +16,94 @@ enum blk_crypto_mode_num {
 	BLK_ENCRYPTION_MODE_ADIANTUM,
 	BLK_ENCRYPTION_MODE_SM4_XTS,
 	BLK_ENCRYPTION_MODE_MAX,
 };
 
-#define BLK_CRYPTO_MAX_KEY_SIZE		64
+/*
+ * Supported types of keys.  Must be bitflags due to their use in
+ * blk_crypto_profile::key_types_supported.
+ */
+enum blk_crypto_key_type {
+	/*
+	 * Raw keys (i.e. "software keys").  These keys are simply kept in raw,
+	 * plaintext form in kernel memory.
+	 */
+	BLK_CRYPTO_KEY_TYPE_RAW = 1 << 0,
+
+	/*
+	 * Hardware-wrapped keys.  These keys are only present in kernel memory
+	 * in ephemerally-wrapped form, and they can only be unwrapped by
+	 * dedicated hardware.  For details, see the "Hardware-wrapped keys"
+	 * section of Documentation/block/inline-encryption.rst.
+	 */
+	BLK_CRYPTO_KEY_TYPE_HW_WRAPPED = 1 << 1,
+};
+
+/*
+ * Currently the maximum raw key size is 64 bytes, as that is the key size of
+ * BLK_ENCRYPTION_MODE_AES_256_XTS which takes the longest key.
+ *
+ * The maximum hardware-wrapped key size depends on the hardware's key wrapping
+ * algorithm, which is a hardware implementation detail, so it isn't precisely
+ * specified.  But currently 128 bytes is plenty in practice.  Implementations
+ * are recommended to wrap a 32-byte key for the hardware KDF with AES-256-GCM,
+ * which should result in a size closer to 64 bytes than 128.
+ *
+ * Both of these values can trivially be increased if ever needed.
+ */
+#define BLK_CRYPTO_MAX_RAW_KEY_SIZE		64
+#define BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE	128
+
+#define BLK_CRYPTO_MAX_ANY_KEY_SIZE \
+	MAX(BLK_CRYPTO_MAX_RAW_KEY_SIZE, BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE)
+
+/*
+ * Size of the "software secret" which can be derived from a hardware-wrapped
+ * key.  This is currently always 32 bytes.  Note, the choice of 32 bytes
+ * assumes that the software secret is only used directly for algorithms that
+ * don't require more than a 256-bit key to get the desired security strength.
+ * If it were to be used e.g. directly as an AES-256-XTS key, then this would
+ * need to be increased (which is possible if hardware supports it, but care
+ * would need to be taken to avoid breaking users who need exactly 32 bytes).
+ */
+#define BLK_CRYPTO_SW_SECRET_SIZE	32
+
 /**
  * struct blk_crypto_config - an inline encryption key's crypto configuration
  * @crypto_mode: encryption algorithm this key is for
  * @data_unit_size: the data unit size for all encryption/decryptions with this
  *	key.  This is the size in bytes of each individual plaintext and
  *	ciphertext.  This is always a power of 2.  It might be e.g. the
  *	filesystem block size or the disk sector size.
  * @dun_bytes: the maximum number of bytes of DUN used when using this key
+ * @key_type: the type of this key -- either raw or hardware-wrapped
  */
 struct blk_crypto_config {
 	enum blk_crypto_mode_num crypto_mode;
 	unsigned int data_unit_size;
 	unsigned int dun_bytes;
+	enum blk_crypto_key_type key_type;
 };
 
 /**
  * struct blk_crypto_key - an inline encryption key
- * @crypto_cfg: the crypto configuration (like crypto_mode, key size) for this
- *		key
+ * @crypto_cfg: the crypto mode, data unit size, key type, and other
+ *		characteristics of this key and how it will be used
  * @data_unit_size_bits: log2 of data_unit_size
- * @size: size of this key in bytes (determined by @crypto_cfg.crypto_mode)
- * @raw: the raw bytes of this key.  Only the first @size bytes are used.
+ * @size: size of this key in bytes.  The size of a raw key is fixed for a given
+ *	  crypto mode, but the size of a hardware-wrapped key can vary.
+ * @bytes: the bytes of this key.  Only the first @size bytes are significant.
  *
  * A blk_crypto_key is immutable once created, and many bios can reference it at
  * the same time.  It must not be freed until all bios using it have completed
  * and it has been evicted from all devices on which it may have been used.
  */
 struct blk_crypto_key {
 	struct blk_crypto_config crypto_cfg;
 	unsigned int data_unit_size_bits;
 	unsigned int size;
-	u8 raw[BLK_CRYPTO_MAX_KEY_SIZE];
+	u8 bytes[BLK_CRYPTO_MAX_ANY_KEY_SIZE];
 };
 
 #define BLK_CRYPTO_MAX_IV_SIZE		32
 #define BLK_CRYPTO_DUN_ARRAY_SIZE	(BLK_CRYPTO_MAX_IV_SIZE / sizeof(u64))
 
@@ -85,11 +137,13 @@ void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key,
 
 bool bio_crypt_dun_is_contiguous(const struct bio_crypt_ctx *bc,
 				 unsigned int bytes,
 				 const u64 next_dun[BLK_CRYPTO_DUN_ARRAY_SIZE]);
 
-int blk_crypto_init_key(struct blk_crypto_key *blk_key, const u8 *raw_key,
+int blk_crypto_init_key(struct blk_crypto_key *blk_key,
+			const u8 *key_bytes, size_t key_size,
+			enum blk_crypto_key_type key_type,
 			enum blk_crypto_mode_num crypto_mode,
 			unsigned int dun_bytes,
 			unsigned int data_unit_size);
 
 int blk_crypto_start_using_key(struct block_device *bdev,
@@ -101,10 +155,14 @@ void blk_crypto_evict_key(struct block_device *bdev,
 bool blk_crypto_config_supported_natively(struct block_device *bdev,
 					  const struct blk_crypto_config *cfg);
 bool blk_crypto_config_supported(struct block_device *bdev,
 				 const struct blk_crypto_config *cfg);
 
+int blk_crypto_derive_sw_secret(struct block_device *bdev,
+				const u8 *eph_key, size_t eph_key_size,
+				u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]);
+
 #else /* CONFIG_BLK_INLINE_ENCRYPTION */
 
 static inline bool bio_has_crypt_ctx(struct bio *bio)
 {
 	return false;
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread
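
On the blk-crypto user side of the interface added above, the flow for an ephemerally-wrapped key
blob is essentially the same as for a raw key, plus one extra call to obtain the software secret.
A minimal sketch, with illustrative DUN and data unit sizes:

    #include <linux/blk-crypto.h>

    static int example_use_wrapped_key(struct block_device *bdev,
                                       const u8 *eph_blob, size_t eph_blob_size,
                                       u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
    {
        struct blk_crypto_key blk_key;
        int err;

        err = blk_crypto_init_key(&blk_key, eph_blob, eph_blob_size,
                                  BLK_CRYPTO_KEY_TYPE_HW_WRAPPED,
                                  BLK_ENCRYPTION_MODE_AES_256_XTS,
                                  8 /* dun_bytes */, 4096 /* data_unit_size */);
        if (err)
            return err;

        err = blk_crypto_start_using_key(bdev, &blk_key);
        if (err)
            return err;

        /* The software secret is derived from the same ephemeral blob. */
        return blk_crypto_derive_sw_secret(bdev, eph_blob, eph_blob_size,
                                           sw_secret);
    }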

* [PATCH v10 11/15] blk-crypto: show supported key types in sysfs
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (9 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 10/15] blk-crypto: add basic hardware-wrapped key support Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 12/15] blk-crypto: add ioctls to create and prepare hardware-wrapped keys Eric Biggers
                   ` (8 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, Bartosz Golaszewski

From: Eric Biggers <ebiggers@google.com>

Add sysfs files that indicate which type(s) of keys are supported by the
inline encryption hardware associated with a particular request queue:

	/sys/block/$disk/queue/crypto/hw_wrapped_keys
	/sys/block/$disk/queue/crypto/raw_keys

Userspace can use the presence or absence of these files to decide what
encryption settings to use.

Don't use a single key_type file, as devices might support both key
types at the same time.

Tested-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> # sm8650
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
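A userspace consumer can simply test for the presence of these files to pick a key type; a minimal
sketch (the disk name is only an example):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* True if /sys/block/<disk>/queue/crypto/<key_type> exists. */
    static bool disk_supports_key_type(const char *disk, const char *key_type)
    {
        char path[256];

        snprintf(path, sizeof(path), "/sys/block/%s/queue/crypto/%s",
                 disk, key_type);
        return access(path, F_OK) == 0;
    }

    /* e.g. disk_supports_key_type("sda", "hw_wrapped_keys") */
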
 Documentation/ABI/stable/sysfs-block | 18 ++++++++++++++
 block/blk-crypto-sysfs.c             | 35 ++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index 0cceb2badc83..75f0997926e9 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -227,10 +227,20 @@ Description:
 		subdirectory contains files which describe the inline encryption
 		capabilities of the device.  For more information about inline
 		encryption, refer to Documentation/block/inline-encryption.rst.
 
 
+What:		/sys/block/<disk>/queue/crypto/hw_wrapped_keys
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RO] The presence of this file indicates that the device
+		supports hardware-wrapped inline encryption keys, i.e. key blobs
+		that can only be unwrapped and used by dedicated hardware.  For
+		more information about hardware-wrapped inline encryption keys,
+		see Documentation/block/inline-encryption.rst.
+
+
 What:		/sys/block/<disk>/queue/crypto/max_dun_bits
 Date:		February 2022
 Contact:	linux-block@vger.kernel.org
 Description:
 		[RO] This file shows the maximum length, in bits, of data unit
@@ -265,10 +275,18 @@ Contact:	linux-block@vger.kernel.org
 Description:
 		[RO] This file shows the number of keyslots the device has for
 		use with inline encryption.
 
 
+What:		/sys/block/<disk>/queue/crypto/raw_keys
+Contact:	linux-block@vger.kernel.org
+Description:
+		[RO] The presence of this file indicates that the device
+		supports raw inline encryption keys, i.e. keys that are managed
+		in raw, plaintext form in software.
+
+
 What:		/sys/block/<disk>/queue/dax
 Date:		June 2016
 Contact:	linux-block@vger.kernel.org
 Description:
 		[RO] This file indicates whether the device supports Direct
diff --git a/block/blk-crypto-sysfs.c b/block/blk-crypto-sysfs.c
index a304434489ba..e832f403f200 100644
--- a/block/blk-crypto-sysfs.c
+++ b/block/blk-crypto-sysfs.c
@@ -29,10 +29,17 @@ static struct blk_crypto_profile *kobj_to_crypto_profile(struct kobject *kobj)
 static struct blk_crypto_attr *attr_to_crypto_attr(struct attribute *attr)
 {
 	return container_of(attr, struct blk_crypto_attr, attr);
 }
 
+static ssize_t hw_wrapped_keys_show(struct blk_crypto_profile *profile,
+				    struct blk_crypto_attr *attr, char *page)
+{
+	/* Always show supported, since the file doesn't exist otherwise. */
+	return sysfs_emit(page, "supported\n");
+}
+
 static ssize_t max_dun_bits_show(struct blk_crypto_profile *profile,
 				 struct blk_crypto_attr *attr, char *page)
 {
 	return sysfs_emit(page, "%u\n", 8 * profile->max_dun_bytes_supported);
 }
@@ -41,24 +48,52 @@ static ssize_t num_keyslots_show(struct blk_crypto_profile *profile,
 				 struct blk_crypto_attr *attr, char *page)
 {
 	return sysfs_emit(page, "%u\n", profile->num_slots);
 }
 
+static ssize_t raw_keys_show(struct blk_crypto_profile *profile,
+			     struct blk_crypto_attr *attr, char *page)
+{
+	/* Always show supported, since the file doesn't exist otherwise. */
+	return sysfs_emit(page, "supported\n");
+}
+
 #define BLK_CRYPTO_RO_ATTR(_name) \
 	static struct blk_crypto_attr _name##_attr = __ATTR_RO(_name)
 
+BLK_CRYPTO_RO_ATTR(hw_wrapped_keys);
 BLK_CRYPTO_RO_ATTR(max_dun_bits);
 BLK_CRYPTO_RO_ATTR(num_keyslots);
+BLK_CRYPTO_RO_ATTR(raw_keys);
+
+static umode_t blk_crypto_is_visible(struct kobject *kobj,
+				     struct attribute *attr, int n)
+{
+	struct blk_crypto_profile *profile = kobj_to_crypto_profile(kobj);
+	struct blk_crypto_attr *a = attr_to_crypto_attr(attr);
+
+	if (a == &hw_wrapped_keys_attr &&
+	    !(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_HW_WRAPPED))
+		return 0;
+	if (a == &raw_keys_attr &&
+	    !(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_RAW))
+		return 0;
+
+	return 0444;
+}
 
 static struct attribute *blk_crypto_attrs[] = {
+	&hw_wrapped_keys_attr.attr,
 	&max_dun_bits_attr.attr,
 	&num_keyslots_attr.attr,
+	&raw_keys_attr.attr,
 	NULL,
 };
 
 static const struct attribute_group blk_crypto_attr_group = {
 	.attrs = blk_crypto_attrs,
+	.is_visible = blk_crypto_is_visible,
 };
 
 /*
  * The encryption mode attributes.  To avoid hard-coding the list of encryption
  * modes, these are initialized at boot time by blk_crypto_sysfs_init().
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v10 12/15] blk-crypto: add ioctls to create and prepare hardware-wrapped keys
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (10 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 11/15] blk-crypto: show supported key types in sysfs Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 13/15] fscrypt: add support for " Eric Biggers
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, Bartosz Golaszewski

From: Eric Biggers <ebiggers@google.com>

Until this point, the kernel can use hardware-wrapped keys to do
encryption if userspace provides one -- specifically a key in
ephemerally-wrapped form.  However, no generic way has been provided for
userspace to get such a key in the first place.

Getting such a key is a two-step process.  First, the key needs to be
imported from a raw key or generated by the hardware, producing a key in
long-term wrapped form.  This happens once in the whole lifetime of the
key.  Second, the long-term wrapped key needs to be converted into
ephemerally-wrapped form.  This happens each time the key is "unlocked".

In Android, these operations are supported in a generic way through
KeyMint, a userspace abstraction layer.  However, that method is
Android-specific and can't be used on other Linux systems, it may rely
on proprietary libraries, and it misleads people into supporting KeyMint
features (such as rollback resistance) that make sense for other KeyMint
keys but not for hardware-wrapped inline encryption keys.

Therefore, this patch provides a generic kernel interface for these
operations by introducing new block device ioctls:

- BLKCRYPTOIMPORTKEY: convert a raw key to long-term wrapped form.

- BLKCRYPTOGENERATEKEY: have the hardware generate a new key, then
  return it in long-term wrapped form.

- BLKCRYPTOPREPAREKEY: convert a key from long-term wrapped form to
  ephemerally-wrapped form.

These ioctls are implemented using new operations in blk_crypto_ll_ops.
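
As a rough userspace sketch (not part of this patch) of the two-step
flow described above: import a raw key once, then prepare the resulting
long-term wrapped blob each time the key is unlocked.  Error handling is
minimal and the buffer sizes are illustrative only; the real maximum
wrapped key blob size is hardware-specific.

/* Hedged sketch of the BLKCRYPTOIMPORTKEY + BLKCRYPTOPREPAREKEY flow. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/blk-crypto.h>

int import_and_prepare(const char *bdev, const uint8_t *raw_key,
		       size_t raw_key_size)
{
	uint8_t lt_key[128], eph_key[128];	/* illustrative sizes */
	struct blk_crypto_import_key_arg imp = {
		.raw_key_ptr = (uintptr_t)raw_key,
		.raw_key_size = raw_key_size,
		.lt_key_ptr = (uintptr_t)lt_key,
		.lt_key_size = sizeof(lt_key),
	};
	struct blk_crypto_prepare_key_arg prep = {
		.lt_key_ptr = (uintptr_t)lt_key,
		.eph_key_ptr = (uintptr_t)eph_key,
		.eph_key_size = sizeof(eph_key),
	};
	int fd = open(bdev, O_RDONLY);
	int ret = -1;

	if (fd < 0)
		return -1;
	/* Step 1: one-time creation of the long-term wrapped key blob. */
	if (ioctl(fd, BLKCRYPTOIMPORTKEY, &imp) != 0)
		goto out;
	/* Step 2: per-unlock conversion to ephemerally-wrapped form. */
	prep.lt_key_size = imp.lt_key_size;
	if (ioctl(fd, BLKCRYPTOPREPAREKEY, &prep) != 0)
		goto out;
	/* eph_key[0..prep.eph_key_size) is what gets handed to blk-crypto. */
	ret = 0;
out:
	close(fd);
	return ret;
}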

Tested-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> # sm8650
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 Documentation/block/inline-encryption.rst     |  32 ++++
 .../userspace-api/ioctl/ioctl-number.rst      |   2 +
 block/blk-crypto-internal.h                   |   9 ++
 block/blk-crypto-profile.c                    |  57 +++++++
 block/blk-crypto.c                            | 143 ++++++++++++++++++
 block/ioctl.c                                 |   5 +
 include/linux/blk-crypto-profile.h            |  53 +++++++
 include/linux/blk-crypto.h                    |   1 +
 include/uapi/linux/blk-crypto.h               |  44 ++++++
 include/uapi/linux/fs.h                       |   6 +-
 10 files changed, 348 insertions(+), 4 deletions(-)
 create mode 100644 include/uapi/linux/blk-crypto.h

diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst
index f03bd5b090d8..004b230c80ad 100644
--- a/Documentation/block/inline-encryption.rst
+++ b/Documentation/block/inline-encryption.rst
@@ -490,10 +490,42 @@ when hardware support is available.  This works in the following way:
   keys being passed to these methods.
 
 blk-crypto-fallback doesn't support hardware-wrapped keys.  Therefore,
 hardware-wrapped keys can only be used with actual inline encryption hardware.
 
+All the above deals with hardware-wrapped keys in ephemerally-wrapped form only.
+To get such keys in the first place, new block device ioctls have been added to
+provide a generic interface to creating and preparing such keys:
+
+- ``BLKCRYPTOIMPORTKEY`` converts a raw key to long-term wrapped form.  It takes
+  in a pointer to a ``struct blk_crypto_import_key_arg``.  The caller must set
+  ``raw_key_ptr`` and ``raw_key_size`` to the pointer and size (in bytes) of the
+  raw key to import.  On success, ``BLKCRYPTOIMPORTKEY`` returns 0 and writes
+  the resulting long-term wrapped key blob to the buffer pointed to by
+  ``lt_key_ptr``, which is of maximum size ``lt_key_size``.  It also updates
+  ``lt_key_size`` to be the actual size of the key.  On failure, it returns -1
+  and sets errno.
+
+- ``BLKCRYPTOGENERATEKEY`` is like ``BLKCRYPTOIMPORTKEY``, but it has the
+  hardware generate the key instead of importing one.  It takes in a pointer to
+  a ``struct blk_crypto_generate_key_arg``.
+
+- ``BLKCRYPTOPREPAREKEY`` converts a key from long-term wrapped form to
+  ephemerally-wrapped form.  It takes in a pointer to a ``struct
+  blk_crypto_prepare_key_arg``.  The caller must set ``lt_key_ptr`` and
+  ``lt_key_size`` to the pointer and size (in bytes) of the long-term wrapped
+  key blob to convert.  On success, ``BLKCRYPTOPREPAREKEY`` returns 0 and writes
+  the resulting ephemerally-wrapped key blob to the buffer pointed to by
+  ``eph_key_ptr``, which is of maximum size ``eph_key_size``.  It also updates
+  ``eph_key_size`` to be the actual size of the key.  On failure, it returns -1
+  and sets errno.
+
+Userspace needs to use either ``BLKCRYPTOIMPORTKEY`` or ``BLKCRYPTOGENERATEKEY``
+once to create a key, and then ``BLKCRYPTOPREPAREKEY`` each time the key is
+unlocked and added to the kernel.  Note that these ioctls have no relevance for
+raw keys; they are only for hardware-wrapped keys.
+
 Testability
 -----------
 
 Both the hardware KDF and the inline encryption itself are well-defined
 algorithms that don't depend on any secrets other than the unwrapped key.
diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst
index 243f1f1b554a..b9d385e3c7bc 100644
--- a/Documentation/userspace-api/ioctl/ioctl-number.rst
+++ b/Documentation/userspace-api/ioctl/ioctl-number.rst
@@ -83,10 +83,12 @@ Code  Seq#    Include File                                           Comments
 0x10  00-0F  drivers/char/s390/vmcp.h
 0x10  10-1F  arch/s390/include/uapi/sclp_ctl.h
 0x10  20-2F  arch/s390/include/uapi/asm/hypfs.h
 0x12  all    linux/fs.h                                              BLK* ioctls
              linux/blkpg.h
+             linux/blkzoned.h
+             linux/blk-crypto.h
 0x15  all    linux/fs.h                                              FS_IOC_* ioctls
 0x1b  all                                                            InfiniBand Subsystem
                                                                      <http://infiniband.sourceforge.net/>
 0x20  all    drivers/cdrom/cm206.h
 0x22  all    scsi/sg.h
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index 1893df9a8f06..ccf6dff6ff6b 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -81,10 +81,13 @@ int __blk_crypto_evict_key(struct blk_crypto_profile *profile,
 			   const struct blk_crypto_key *key);
 
 bool __blk_crypto_cfg_supported(struct blk_crypto_profile *profile,
 				const struct blk_crypto_config *cfg);
 
+int blk_crypto_ioctl(struct block_device *bdev, unsigned int cmd,
+		     void __user *argp);
+
 #else /* CONFIG_BLK_INLINE_ENCRYPTION */
 
 static inline int blk_crypto_sysfs_register(struct gendisk *disk)
 {
 	return 0;
@@ -128,10 +131,16 @@ static inline bool blk_crypto_rq_is_encrypted(struct request *rq)
 static inline bool blk_crypto_rq_has_keyslot(struct request *rq)
 {
 	return false;
 }
 
+static inline int blk_crypto_ioctl(struct block_device *bdev, unsigned int cmd,
+				   void __user *argp)
+{
+	return -ENOTTY;
+}
+
 #endif /* CONFIG_BLK_INLINE_ENCRYPTION */
 
 void __bio_crypt_advance(struct bio *bio, unsigned int bytes);
 static inline void bio_crypt_advance(struct bio *bio, unsigned int bytes)
 {
diff --git a/block/blk-crypto-profile.c b/block/blk-crypto-profile.c
index 1b92276ed2fc..f6419502fcbe 100644
--- a/block/blk-crypto-profile.c
+++ b/block/blk-crypto-profile.c
@@ -500,10 +500,67 @@ int blk_crypto_derive_sw_secret(struct block_device *bdev,
 					       sw_secret);
 	blk_crypto_hw_exit(profile);
 	return err;
 }
 
+int blk_crypto_import_key(struct blk_crypto_profile *profile,
+			  const u8 *raw_key, size_t raw_key_size,
+			  u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+{
+	int ret;
+
+	if (!profile)
+		return -EOPNOTSUPP;
+	if (!(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_HW_WRAPPED))
+		return -EOPNOTSUPP;
+	if (!profile->ll_ops.import_key)
+		return -EOPNOTSUPP;
+	blk_crypto_hw_enter(profile);
+	ret = profile->ll_ops.import_key(profile, raw_key, raw_key_size,
+					 lt_key);
+	blk_crypto_hw_exit(profile);
+	return ret;
+}
+
+int blk_crypto_generate_key(struct blk_crypto_profile *profile,
+			    u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+{
+	int ret;
+
+	if (!profile)
+		return -EOPNOTSUPP;
+	if (!(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_HW_WRAPPED))
+		return -EOPNOTSUPP;
+	if (!profile->ll_ops.generate_key)
+		return -EOPNOTSUPP;
+
+	blk_crypto_hw_enter(profile);
+	ret = profile->ll_ops.generate_key(profile, lt_key);
+	blk_crypto_hw_exit(profile);
+	return ret;
+}
+
+int blk_crypto_prepare_key(struct blk_crypto_profile *profile,
+			   const u8 *lt_key, size_t lt_key_size,
+			   u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+{
+	int ret;
+
+	if (!profile)
+		return -EOPNOTSUPP;
+	if (!(profile->key_types_supported & BLK_CRYPTO_KEY_TYPE_HW_WRAPPED))
+		return -EOPNOTSUPP;
+	if (!profile->ll_ops.prepare_key)
+		return -EOPNOTSUPP;
+
+	blk_crypto_hw_enter(profile);
+	ret = profile->ll_ops.prepare_key(profile, lt_key, lt_key_size,
+					  eph_key);
+	blk_crypto_hw_exit(profile);
+	return ret;
+}
+
 /**
  * blk_crypto_intersect_capabilities() - restrict supported crypto capabilities
  *					 by child device
  * @parent: the crypto profile for the parent device
  * @child: the crypto profile for the child device, or NULL
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index b55b3d8bffa0..2f6e0294eddc 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -465,5 +465,148 @@ void blk_crypto_evict_key(struct block_device *bdev,
 	 */
 	if (err)
 		pr_warn_ratelimited("%pg: error %d evicting key\n", bdev, err);
 }
 EXPORT_SYMBOL_GPL(blk_crypto_evict_key);
+
+static int blk_crypto_ioctl_import_key(struct blk_crypto_profile *profile,
+				       void __user *argp)
+{
+	struct blk_crypto_import_key_arg arg;
+	u8 raw_key[BLK_CRYPTO_MAX_RAW_KEY_SIZE];
+	u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE];
+	int ret;
+
+	if (copy_from_user(&arg, argp, sizeof(arg)))
+		return -EFAULT;
+
+	if (memchr_inv(arg.reserved, 0, sizeof(arg.reserved)))
+		return -EINVAL;
+
+	if (arg.raw_key_size < 16 || arg.raw_key_size > sizeof(raw_key))
+		return -EINVAL;
+
+	if (copy_from_user(raw_key, u64_to_user_ptr(arg.raw_key_ptr),
+			   arg.raw_key_size)) {
+		ret = -EFAULT;
+		goto out;
+	}
+	ret = blk_crypto_import_key(profile, raw_key, arg.raw_key_size, lt_key);
+	if (ret < 0)
+		goto out;
+	if (ret > arg.lt_key_size) {
+		ret = -EOVERFLOW;
+		goto out;
+	}
+	arg.lt_key_size = ret;
+	if (copy_to_user(u64_to_user_ptr(arg.lt_key_ptr), lt_key,
+			 arg.lt_key_size) ||
+	    copy_to_user(argp, &arg, sizeof(arg))) {
+		ret = -EFAULT;
+		goto out;
+	}
+	ret = 0;
+
+out:
+	memzero_explicit(raw_key, sizeof(raw_key));
+	memzero_explicit(lt_key, sizeof(lt_key));
+	return ret;
+}
+
+static int blk_crypto_ioctl_generate_key(struct blk_crypto_profile *profile,
+					 void __user *argp)
+{
+	struct blk_crypto_generate_key_arg arg;
+	u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE];
+	int ret;
+
+	if (copy_from_user(&arg, argp, sizeof(arg)))
+		return -EFAULT;
+
+	if (memchr_inv(arg.reserved, 0, sizeof(arg.reserved)))
+		return -EINVAL;
+
+	ret = blk_crypto_generate_key(profile, lt_key);
+	if (ret < 0)
+		goto out;
+	if (ret > arg.lt_key_size) {
+		ret = -EOVERFLOW;
+		goto out;
+	}
+	arg.lt_key_size = ret;
+	if (copy_to_user(u64_to_user_ptr(arg.lt_key_ptr), lt_key,
+			 arg.lt_key_size) ||
+	    copy_to_user(argp, &arg, sizeof(arg))) {
+		ret = -EFAULT;
+		goto out;
+	}
+	ret = 0;
+
+out:
+	memzero_explicit(lt_key, sizeof(lt_key));
+	return ret;
+}
+
+static int blk_crypto_ioctl_prepare_key(struct blk_crypto_profile *profile,
+					void __user *argp)
+{
+	struct blk_crypto_prepare_key_arg arg;
+	u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE];
+	u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE];
+	int ret;
+
+	if (copy_from_user(&arg, argp, sizeof(arg)))
+		return -EFAULT;
+
+	if (memchr_inv(arg.reserved, 0, sizeof(arg.reserved)))
+		return -EINVAL;
+
+	if (arg.lt_key_size > sizeof(lt_key))
+		return -EINVAL;
+
+	if (copy_from_user(lt_key, u64_to_user_ptr(arg.lt_key_ptr),
+			   arg.lt_key_size)) {
+		ret = -EFAULT;
+		goto out;
+	}
+	ret = blk_crypto_prepare_key(profile, lt_key, arg.lt_key_size, eph_key);
+	if (ret < 0)
+		goto out;
+	if (ret > arg.eph_key_size) {
+		ret = -EOVERFLOW;
+		goto out;
+	}
+	arg.eph_key_size = ret;
+	if (copy_to_user(u64_to_user_ptr(arg.eph_key_ptr), eph_key,
+			 arg.eph_key_size) ||
+	    copy_to_user(argp, &arg, sizeof(arg))) {
+		ret = -EFAULT;
+		goto out;
+	}
+	ret = 0;
+
+out:
+	memzero_explicit(lt_key, sizeof(lt_key));
+	memzero_explicit(eph_key, sizeof(eph_key));
+	return ret;
+}
+
+int blk_crypto_ioctl(struct block_device *bdev, unsigned int cmd,
+		     void __user *argp)
+{
+	struct blk_crypto_profile *profile =
+		bdev_get_queue(bdev)->crypto_profile;
+
+	if (!profile)
+		return -EOPNOTSUPP;
+
+	switch (cmd) {
+	case BLKCRYPTOIMPORTKEY:
+		return blk_crypto_ioctl_import_key(profile, argp);
+	case BLKCRYPTOGENERATEKEY:
+		return blk_crypto_ioctl_generate_key(profile, argp);
+	case BLKCRYPTOPREPAREKEY:
+		return blk_crypto_ioctl_prepare_key(profile, argp);
+	default:
+		return -ENOTTY;
+	}
+}
diff --git a/block/ioctl.c b/block/ioctl.c
index 6554b728bae6..faa40f383e27 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -13,10 +13,11 @@
 #include <linux/uaccess.h>
 #include <linux/pagemap.h>
 #include <linux/io_uring/cmd.h>
 #include <uapi/linux/blkdev.h>
 #include "blk.h"
+#include "blk-crypto-internal.h"
 
 static int blkpg_do_ioctl(struct block_device *bdev,
 			  struct blkpg_partition __user *upart, int op)
 {
 	struct gendisk *disk = bdev->bd_disk;
@@ -618,10 +619,14 @@ static int blkdev_common_ioctl(struct block_device *bdev, blk_mode_t mode,
 				mode | BLK_OPEN_STRICT_SCAN);
 	case BLKTRACESTART:
 	case BLKTRACESTOP:
 	case BLKTRACETEARDOWN:
 		return blk_trace_ioctl(bdev, cmd, argp);
+	case BLKCRYPTOIMPORTKEY:
+	case BLKCRYPTOGENERATEKEY:
+	case BLKCRYPTOPREPAREKEY:
+		return blk_crypto_ioctl(bdev, cmd, argp);
 	case IOC_PR_REGISTER:
 		return blkdev_pr_register(bdev, mode, argp);
 	case IOC_PR_RESERVE:
 		return blkdev_pr_reserve(bdev, mode, argp);
 	case IOC_PR_RELEASE:
diff --git a/include/linux/blk-crypto-profile.h b/include/linux/blk-crypto-profile.h
index 7764b4f7b45b..a719a0aea122 100644
--- a/include/linux/blk-crypto-profile.h
+++ b/include/linux/blk-crypto-profile.h
@@ -69,10 +69,52 @@ struct blk_crypto_ll_ops {
 	 * -errno code on other errors.
 	 */
 	int (*derive_sw_secret)(struct blk_crypto_profile *profile,
 				const u8 *eph_key, size_t eph_key_size,
 				u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]);
+
+	/**
+	 * @import_key: Create a hardware-wrapped key by importing a raw key.
+	 *
+	 * This only needs to be implemented if BLK_CRYPTO_KEY_TYPE_HW_WRAPPED
+	 * is supported.
+	 *
+	 * On success, must write the new key in long-term wrapped form to
+	 * @lt_key and return its size in bytes.  On failure, must return a
+	 * -errno value.
+	 */
+	int (*import_key)(struct blk_crypto_profile *profile,
+			  const u8 *raw_key, size_t raw_key_size,
+			  u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
+
+	/**
+	 * @generate_key: Generate a hardware-wrapped key.
+	 *
+	 * This only needs to be implemented if BLK_CRYPTO_KEY_TYPE_HW_WRAPPED
+	 * is supported.
+	 *
+	 * On success, must write the new key in long-term wrapped form to
+	 * @lt_key and return its size in bytes.  On failure, must return a
+	 * -errno value.
+	 */
+	int (*generate_key)(struct blk_crypto_profile *profile,
+			    u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
+
+	/**
+	 * @prepare_key: Prepare a hardware-wrapped key to be used.
+	 *
+	 * Prepare a hardware-wrapped key to be used by converting it from
+	 * long-term wrapped form to ephemerally-wrapped form.  This only needs
+	 * to be implemented if BLK_CRYPTO_KEY_TYPE_HW_WRAPPED is supported.
+	 *
+	 * On success, must write the key in ephemerally-wrapped form to
+	 * @eph_key and return its size in bytes.  On failure, must return a
+	 * -errno value.
+	 */
+	int (*prepare_key)(struct blk_crypto_profile *profile,
+			   const u8 *lt_key, size_t lt_key_size,
+			   u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
 };
 
 /**
  * struct blk_crypto_profile - inline encryption profile for a device
  *
@@ -161,10 +203,21 @@ unsigned int blk_crypto_keyslot_index(struct blk_crypto_keyslot *slot);
 
 void blk_crypto_reprogram_all_keys(struct blk_crypto_profile *profile);
 
 void blk_crypto_profile_destroy(struct blk_crypto_profile *profile);
 
+int blk_crypto_import_key(struct blk_crypto_profile *profile,
+			  const u8 *raw_key, size_t raw_key_size,
+			  u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
+
+int blk_crypto_generate_key(struct blk_crypto_profile *profile,
+			    u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
+
+int blk_crypto_prepare_key(struct blk_crypto_profile *profile,
+			   const u8 *lt_key, size_t lt_key_size,
+			   u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
+
 void blk_crypto_intersect_capabilities(struct blk_crypto_profile *parent,
 				       const struct blk_crypto_profile *child);
 
 bool blk_crypto_has_capabilities(const struct blk_crypto_profile *target,
 				 const struct blk_crypto_profile *reference);
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
index 0e63287e2175..c1ef8c3cea64 100644
--- a/include/linux/blk-crypto.h
+++ b/include/linux/blk-crypto.h
@@ -6,10 +6,11 @@
 #ifndef __LINUX_BLK_CRYPTO_H
 #define __LINUX_BLK_CRYPTO_H
 
 #include <linux/minmax.h>
 #include <linux/types.h>
+#include <uapi/linux/blk-crypto.h>
 
 enum blk_crypto_mode_num {
 	BLK_ENCRYPTION_MODE_INVALID,
 	BLK_ENCRYPTION_MODE_AES_256_XTS,
 	BLK_ENCRYPTION_MODE_AES_128_CBC_ESSIV,
diff --git a/include/uapi/linux/blk-crypto.h b/include/uapi/linux/blk-crypto.h
new file mode 100644
index 000000000000..97302c6eb6af
--- /dev/null
+++ b/include/uapi/linux/blk-crypto.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_LINUX_BLK_CRYPTO_H
+#define _UAPI_LINUX_BLK_CRYPTO_H
+
+#include <linux/ioctl.h>
+#include <linux/types.h>
+
+struct blk_crypto_import_key_arg {
+	/* Raw key (input) */
+	__u64 raw_key_ptr;
+	__u64 raw_key_size;
+	/* Long-term wrapped key blob (output) */
+	__u64 lt_key_ptr;
+	__u64 lt_key_size;
+	__u64 reserved[4];
+};
+
+struct blk_crypto_generate_key_arg {
+	/* Long-term wrapped key blob (output) */
+	__u64 lt_key_ptr;
+	__u64 lt_key_size;
+	__u64 reserved[4];
+};
+
+struct blk_crypto_prepare_key_arg {
+	/* Long-term wrapped key blob (input) */
+	__u64 lt_key_ptr;
+	__u64 lt_key_size;
+	/* Ephemerally-wrapped key blob (output) */
+	__u64 eph_key_ptr;
+	__u64 eph_key_size;
+	__u64 reserved[4];
+};
+
+/*
+ * These ioctls share the block device ioctl space; see uapi/linux/fs.h.
+ * 140-141 are reserved for future blk-crypto ioctls; any more than that would
+ * require an additional allocation from the block device ioctl space.
+ */
+#define BLKCRYPTOIMPORTKEY _IOWR(0x12, 137, struct blk_crypto_import_key_arg)
+#define BLKCRYPTOGENERATEKEY _IOWR(0x12, 138, struct blk_crypto_generate_key_arg)
+#define BLKCRYPTOPREPAREKEY _IOWR(0x12, 139, struct blk_crypto_prepare_key_arg)
+
+#endif /* _UAPI_LINUX_BLK_CRYPTO_H */
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 9070ef19f0a3..ba5bc5369b3c 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -210,14 +210,12 @@ struct fsxattr {
 #define BLKDISCARDZEROES _IO(0x12,124)
 #define BLKSECDISCARD _IO(0x12,125)
 #define BLKROTATIONAL _IO(0x12,126)
 #define BLKZEROOUT _IO(0x12,127)
 #define BLKGETDISKSEQ _IOR(0x12,128,__u64)
-/*
- * A jump here: 130-136 are reserved for zoned block devices
- * (see uapi/linux/blkzoned.h)
- */
+/* 130-136 are used by zoned block device ioctls (uapi/linux/blkzoned.h) */
+/* 137-141 are used by blk-crypto ioctls (uapi/linux/blk-crypto.h) */
 
 #define BMAP_IOCTL 1		/* obsolete - kept for compatibility */
 #define FIBMAP	   _IO(0x00,1)	/* bmap access */
 #define FIGETBSZ   _IO(0x00,2)	/* get the block size used for bmap */
 #define FIFREEZE	_IOWR('X', 119, int)	/* Freeze */
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v10 13/15] fscrypt: add support for hardware-wrapped keys
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (11 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 12/15] blk-crypto: add ioctls to create and prepare hardware-wrapped keys Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 14/15] soc: qcom: ice: add HWKM support to the ICE driver Eric Biggers
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, Bartosz Golaszewski

From: Eric Biggers <ebiggers@google.com>

Add support for hardware-wrapped keys to fscrypt.  Such keys are
protected from certain attacks, such as cold boot attacks.  For more
information, see the "Hardware-wrapped keys" section of
Documentation/block/inline-encryption.rst.

To support hardware-wrapped keys in fscrypt, we allow the fscrypt master
keys to be hardware-wrapped.  File contents encryption is done by
passing the wrapped key to the inline encryption hardware via
blk-crypto.  Other fscrypt operations such as filenames encryption
continue to be done by the kernel, using the "software secret" which the
hardware derives.  For more information, see the documentation which
this patch adds to Documentation/filesystems/fscrypt.rst.

Note that this feature doesn't require any filesystem-specific changes.
However, it does depend on inline encryption support, and thus currently
it is only applicable to ext4 and f2fs.
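
As a rough illustration (not part of this patch) of how userspace might
hand an ephemerally-wrapped key to fscrypt once this patch is applied:
the struct field and flag names below are the ones introduced here, and
mnt_fd is assumed to be an open fd on the target filesystem.

/* Hedged sketch: add a hardware-wrapped key via FS_IOC_ADD_ENCRYPTION_KEY. */
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

int add_hw_wrapped_key(int mnt_fd, const void *eph_key, size_t eph_key_size)
{
	struct fscrypt_add_key_arg *arg;
	int ret;

	/* The key bytes follow the fixed-size header in the 'raw' field. */
	arg = calloc(1, sizeof(*arg) + eph_key_size);
	if (!arg)
		return -1;
	arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
	arg->raw_size = eph_key_size;
	arg->flags = FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V1;
	memcpy(arg->raw, eph_key, eph_key_size);

	/* On success the kernel fills in arg->key_spec.u.identifier. */
	ret = ioctl(mnt_fd, FS_IOC_ADD_ENCRYPTION_KEY, arg);
	free(arg);
	return ret;
}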

The version of this feature introduced by this patch is mostly
equivalent to the version that has existed downstream in the Android
kernels since 2020.  However, a couple of fixes are included.  First, the
flags field in struct fscrypt_add_key_arg is now placed in the proper
location.  Second, an option is now provided to derive key identifiers
for HW-wrapped keys using a distinct HKDF context byte; this fixes a bug
where a raw key could have the same identifier as a HW-wrapped key.

This patch has been heavily rewritten from the original version by
Gaurav Kashyap <quic_gaurkash@quicinc.com> and
Barani Muthukumaran <bmuthuku@codeaurora.org>.

Tested-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> # sm8650
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 Documentation/filesystems/fscrypt.rst | 201 ++++++++++++++++++++------
 fs/crypto/fscrypt_private.h           |  75 ++++++++--
 fs/crypto/hkdf.c                      |   4 +-
 fs/crypto/inline_crypt.c              |  44 +++++-
 fs/crypto/keyring.c                   | 157 ++++++++++++++------
 fs/crypto/keysetup.c                  |  63 +++++++-
 fs/crypto/keysetup_v1.c               |   4 +-
 include/uapi/linux/fscrypt.h          |   7 +-
 8 files changed, 444 insertions(+), 111 deletions(-)

diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
index 04eaab01314b..eb44830173fc 100644
--- a/Documentation/filesystems/fscrypt.rst
+++ b/Documentation/filesystems/fscrypt.rst
@@ -68,11 +68,11 @@ an authorized user later accessing the filesystem.
 
 Online attacks
 --------------
 
 fscrypt (and storage encryption in general) can only provide limited
-protection, if any at all, against online attacks.  In detail:
+protection against online attacks.  In detail:
 
 Side-channel attacks
 ~~~~~~~~~~~~~~~~~~~~
 
 fscrypt is only resistant to side-channel attacks, such as timing or
@@ -97,20 +97,27 @@ system itself, is *not* protected by the mathematical properties of
 encryption but rather only by the correctness of the kernel.
 Therefore, any encryption-specific access control checks would merely
 be enforced by kernel *code* and therefore would be largely redundant
 with the wide variety of access control mechanisms already available.)
 
-Kernel memory compromise
-~~~~~~~~~~~~~~~~~~~~~~~~
+Read-only kernel memory compromise
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Unless `hardware-wrapped keys`_ are used, an attacker who gains the
+ability to read from arbitrary kernel memory, e.g. by mounting a
+physical attack or by exploiting a kernel security vulnerability, can
+compromise all fscrypt keys that are currently in-use.  This also
+extends to cold boot attacks; if the system is suddenly powered off,
+keys the system was using may remain in memory for a short time.
 
-An attacker who compromises the system enough to read from arbitrary
-memory, e.g. by mounting a physical attack or by exploiting a kernel
-security vulnerability, can compromise all encryption keys that are
-currently in use.
+However, if hardware-wrapped keys are used, then the fscrypt master
+keys and file contents encryption keys (but not other types of fscrypt
+subkeys such as filenames encryption keys) are protected from
+compromises of arbitrary kernel memory.
 
-However, fscrypt allows encryption keys to be removed from the kernel,
-which may protect them from later compromise.
+In addition, fscrypt allows encryption keys to be removed from the
+kernel, which may protect them from later compromise.
 
 In more detail, the FS_IOC_REMOVE_ENCRYPTION_KEY ioctl (or the
 FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS ioctl) can wipe a master
 encryption key from kernel memory.  If it does so, it will also try to
 evict all cached inodes which had been "unlocked" using the key,
@@ -143,10 +150,28 @@ However, these ioctls have some limitations:
 
 - Secret keys might still exist in CPU registers, in crypto
   accelerator hardware (if used by the crypto API to implement any of
   the algorithms), or in other places not explicitly considered here.
 
+Full system compromise
+~~~~~~~~~~~~~~~~~~~~~~
+
+An attacker who gains "root" access and/or the ability to execute
+arbitrary kernel code can freely exfiltrate data that is protected by
+any in-use fscrypt keys.  Thus, usually fscrypt provides no meaningful
+protection in this scenario.  (Data that is protected by a key that is
+absent throughout the entire attack remains protected, modulo the
+limitations of key removal mentioned above in the case where the key
+was removed prior to the attack.)
+
+However, if `hardware-wrapped keys`_ are used, such attackers will be
+unable to exfiltrate the master keys or file contents keys in a form
+that will be usable after the system is powered off.  This may be
+useful if the attacker is significantly time-limited and/or
+bandwidth-limited, so they can only exfiltrate some data and need to
+rely on a later offline attack to exfiltrate the rest of it.
+
 Limitations of v1 policies
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 v1 encryption policies have some weaknesses with respect to online
 attacks:
@@ -169,10 +194,14 @@ this reason among others, it is recommended to use v2 encryption
 policies on all new encrypted directories.
 
 Key hierarchy
 =============
 
+Note: this section assumes the use of raw keys rather than
+hardware-wrapped keys.  The use of hardware-wrapped keys modifies the
+key hierarchy slightly.  For details, see `Hardware-wrapped keys`_.
+
 Master Keys
 -----------
 
 Each encrypted directory tree is protected by a *master key*.  Master
 keys can be up to 64 bytes long, and must be at least as long as the
@@ -834,11 +863,14 @@ a pointer to struct fscrypt_add_key_arg, defined as follows::
 
     struct fscrypt_add_key_arg {
             struct fscrypt_key_specifier key_spec;
             __u32 raw_size;
             __u32 key_id;
-            __u32 __reserved[8];
+    #define FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V0      0x00000001
+    #define FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V1      0x00000002
+            __u32 flags;
+            __u32 __reserved[7];
             __u8 raw[];
     };
 
     #define FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR        1
     #define FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER        2
@@ -853,11 +885,11 @@ a pointer to struct fscrypt_add_key_arg, defined as follows::
             } u;
     };
 
     struct fscrypt_provisioning_key_payload {
             __u32 type;
-            __u32 __reserved;
+            __u32 flags;
             __u8 raw[];
     };
 
 struct fscrypt_add_key_arg must be zeroed, then initialized
 as follows:
@@ -881,28 +913,41 @@ as follows:
 
 - ``raw_size`` must be the size of the ``raw`` key provided, in bytes.
   Alternatively, if ``key_id`` is nonzero, this field must be 0, since
   in that case the size is implied by the specified Linux keyring key.
 
-- ``key_id`` is 0 if the raw key is given directly in the ``raw``
-  field.  Otherwise ``key_id`` is the ID of a Linux keyring key of
-  type "fscrypt-provisioning" whose payload is
-  struct fscrypt_provisioning_key_payload whose ``raw`` field contains
-  the raw key and whose ``type`` field matches ``key_spec.type``.
-  Since ``raw`` is variable-length, the total size of this key's
-  payload must be ``sizeof(struct fscrypt_provisioning_key_payload)``
-  plus the raw key size.  The process must have Search permission on
-  this key.
-
-  Most users should leave this 0 and specify the raw key directly.
-  The support for specifying a Linux keyring key is intended mainly to
+- ``key_id`` is 0 if the key is given directly in the ``raw`` field.
+  Otherwise ``key_id`` is the ID of a Linux keyring key of type
+  "fscrypt-provisioning" whose payload is struct
+  fscrypt_provisioning_key_payload whose ``raw`` field contains the
+  key, whose ``type`` field matches ``key_spec.type``, and whose
+  ``flags`` field matches ``flags``.  Since ``raw`` is
+  variable-length, the total size of this key's payload must be
+  ``sizeof(struct fscrypt_provisioning_key_payload)`` plus the number
+  of key bytes.  The process must have Search permission on this key.
+
+  Most users should leave this 0 and specify the key directly.  The
+  support for specifying a Linux keyring key is intended mainly to
   allow re-adding keys after a filesystem is unmounted and re-mounted,
-  without having to store the raw keys in userspace memory.
+  without having to store the keys in userspace memory.
+
+- ``flags`` contains optional flags from ``<linux/fscrypt.h>``:
+
+  - FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V0: Similar to
+    FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V1 but selects an old on-disk
+    format.  Do not use when encrypting new directories.  This flag
+    can only be used by privileged users.
+
+  - FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V1: This denotes that the key is a
+    hardware-wrapped key.  See `Hardware-wrapped keys`_.  This flag
+    can't be used if FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR is used.
 
 - ``raw`` is a variable-length field which must contain the actual
   key, ``raw_size`` bytes long.  Alternatively, if ``key_id`` is
-  nonzero, then this field is unused.
+  nonzero, then this field is unused.  Note that despite being named
+  ``raw``, if one of the FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_* flags is
+  specified then it will contain a wrapped key, not a raw key.
 
 For v2 policy keys, the kernel keeps track of which user (identified
 by effective user ID) added the key, and only allows the key to be
 removed by that user --- or by "root", if they use
 `FS_IOC_REMOVE_ENCRYPTION_KEY_ALL_USERS`_.
@@ -910,34 +955,39 @@ removed by that user --- or by "root", if they use
 However, if another user has added the key, it may be desirable to
 prevent that other user from unexpectedly removing it.  Therefore,
 FS_IOC_ADD_ENCRYPTION_KEY may also be used to add a v2 policy key
 *again*, even if it's already added by other user(s).  In this case,
 FS_IOC_ADD_ENCRYPTION_KEY will just install a claim to the key for the
-current user, rather than actually add the key again (but the raw key
-must still be provided, as a proof of knowledge).
+current user, rather than actually add the key again (but the key must
+still be provided, as a proof of knowledge).
 
 FS_IOC_ADD_ENCRYPTION_KEY returns 0 if either the key or a claim to
 the key was either added or already exists.
 
 FS_IOC_ADD_ENCRYPTION_KEY can fail with the following errors:
 
-- ``EACCES``: FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR was specified, but the
-  caller does not have the CAP_SYS_ADMIN capability in the initial
-  user namespace; or the raw key was specified by Linux key ID but the
-  process lacks Search permission on the key.
+- ``EACCES``: FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR or
+  FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V0 was specified, but the caller
+  does not have the CAP_SYS_ADMIN capability in the initial user
+  namespace; or the key was specified by Linux key ID but the process
+  lacks Search permission on the key.
+- ``EBADMSG``: One of the FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_* flags was
+  specified, but the key isn't a valid hardware-wrapped key
 - ``EDQUOT``: the key quota for this user would be exceeded by adding
   the key
 - ``EINVAL``: invalid key size or key specifier type, or reserved bits
   were set
-- ``EKEYREJECTED``: the raw key was specified by Linux key ID, but the
-  key has the wrong type
-- ``ENOKEY``: the raw key was specified by Linux key ID, but no key
-  exists with that ID
+- ``EKEYREJECTED``: the key was specified by Linux key ID, but the key
+  has the wrong type
+- ``ENOKEY``: the key was specified by Linux key ID, but no key exists
+  with that ID
 - ``ENOTTY``: this type of filesystem does not implement encryption
 - ``EOPNOTSUPP``: the kernel was not configured with encryption
   support for this filesystem, or the filesystem superblock has not
-  had encryption enabled on it
+  had encryption enabled on it, or one of the
+  FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_* flags was specified but the
+  filesystem and/or the hardware doesn't support hardware-wrapped keys
 
 Legacy method
 ~~~~~~~~~~~~~
 
 For v1 encryption policies, a master encryption key can also be
@@ -996,13 +1046,12 @@ These two ioctls differ only in cases where v2 policy keys are added
 or removed by non-root users.
 
 These ioctls don't work on keys that were added via the legacy
 process-subscribed keyrings mechanism.
 
-Before using these ioctls, read the `Kernel memory compromise`_
-section for a discussion of the security goals and limitations of
-these ioctls.
+Before using these ioctls, read the `Online attacks`_ section for a
+discussion of the security goals and limitations of these ioctls.
 
 FS_IOC_REMOVE_ENCRYPTION_KEY
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The FS_IOC_REMOVE_ENCRYPTION_KEY ioctl removes a claim to a master
@@ -1318,19 +1367,89 @@ encryption when possible; it doesn't force its use.  fscrypt will
 still fall back to using the kernel crypto API on files where the
 inline encryption hardware doesn't have the needed crypto capabilities
 (e.g. support for the needed encryption algorithm and data unit size)
 and where blk-crypto-fallback is unusable.  (For blk-crypto-fallback
 to be usable, it must be enabled in the kernel configuration with
-CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y.)
+CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y, and the file must be
+protected by a raw key rather than a hardware-wrapped key.)
 
 Currently fscrypt always uses the filesystem block size (which is
 usually 4096 bytes) as the data unit size.  Therefore, it can only use
 inline encryption hardware that supports that data unit size.
 
 Inline encryption doesn't affect the ciphertext or other aspects of
 the on-disk format, so users may freely switch back and forth between
-using "inlinecrypt" and not using "inlinecrypt".
+using "inlinecrypt" and not using "inlinecrypt".  An exception is that
+files that are protected by a hardware-wrapped key can only be
+encrypted/decrypted by the inline encryption hardware and therefore
+can only be accessed when the "inlinecrypt" mount option is used.  For
+more information about hardware-wrapped keys, see below.
+
+Hardware-wrapped keys
+---------------------
+
+fscrypt supports using *hardware-wrapped keys* when the inline
+encryption hardware supports it.  Such keys are only present in kernel
+memory in wrapped (encrypted) form; they can only be unwrapped
+(decrypted) by the inline encryption hardware and are temporally bound
+to the current boot.  This prevents the keys from being compromised if
+kernel memory is leaked.  This is done without limiting the number of
+keys that can be used and while still allowing the execution of
+cryptographic tasks that are tied to the same key but can't use inline
+encryption hardware, e.g. filenames encryption.
+
+Note that hardware-wrapped keys aren't specific to fscrypt; they are a
+block layer feature (part of *blk-crypto*).  For more details about
+hardware-wrapped keys, see the block layer documentation at
+:ref:`Documentation/block/inline-encryption.rst
+<hardware_wrapped_keys>`.  Below, we just focus on the details of how
+fscrypt can use hardware-wrapped keys.
+
+fscrypt supports hardware-wrapped keys by allowing the fscrypt master
+keys to be hardware-wrapped keys as an alternative to raw keys.  To
+add a hardware-wrapped key with `FS_IOC_ADD_ENCRYPTION_KEY`_,
+userspace must specify one of the FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_*
+flags in the ``flags`` field of struct fscrypt_add_key_arg and also in
+the ``flags`` field of struct fscrypt_provisioning_key_payload when
+applicable.  The key must be in ephemerally-wrapped form, not
+long-term wrapped form.
+
+Some limitations apply.  First, files protected by a hardware-wrapped
+key are tied to the system's inline encryption hardware.  Therefore
+they can only be accessed when the "inlinecrypt" mount option is used,
+and they can't be included in portable filesystem images.  Second,
+currently the hardware-wrapped key support is only compatible with
+`IV_INO_LBLK_64 policies`_ and `IV_INO_LBLK_32 policies`_, as it
+assumes that there is just one file contents encryption key per
+fscrypt master key rather than one per file.  Future work may address
+this limitation by passing per-file nonces down the storage stack to
+allow the hardware to derive per-file keys.
+
+Implementation-wise, to encrypt/decrypt the contents of files that are
+protected by a hardware-wrapped key, fscrypt uses blk-crypto,
+attaching the hardware-wrapped key to the bio crypt contexts.  As is
+the case with raw keys, the block layer will program the key into a
+keyslot when it isn't already in one.  However, when programming a
+hardware-wrapped key, the hardware doesn't program the given key
+directly into a keyslot but rather unwraps it (using the hardware's
+ephemeral wrapping key) and derives the inline encryption key from it.
+The inline encryption key is the key that actually gets programmed
+into a keyslot, and it is never exposed to software.
+
+However, fscrypt doesn't just do file contents encryption; it also
+uses its master keys to derive filenames encryption keys, key
+identifiers, and sometimes some more obscure types of subkeys such as
+dirhash keys.  So even with file contents encryption out of the
+picture, fscrypt still needs a raw key to work with.  To get such a
+key from a hardware-wrapped key, fscrypt asks the inline encryption
+hardware to derive a cryptographically isolated "software secret" from
+the hardware-wrapped key.  fscrypt uses this "software secret" to key
+its KDF to derive all subkeys other than file contents keys.
+
+Note that this implies that the hardware-wrapped key feature only
+protects the file contents encryption keys.  It doesn't protect other
+fscrypt subkeys such as filenames encryption keys.
 
 Direct I/O support
 ==================
 
 For direct I/O on an encrypted file to work, the following conditions
diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index 8371e4e1f596..c1d92074b65c 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -10,10 +10,11 @@
 
 #ifndef _FSCRYPT_PRIVATE_H
 #define _FSCRYPT_PRIVATE_H
 
 #include <linux/fscrypt.h>
+#include <linux/minmax.h>
 #include <linux/siphash.h>
 #include <crypto/hash.h>
 #include <linux/blk-crypto.h>
 
 #define CONST_STRLEN(str)	(sizeof(str) - 1)
@@ -25,10 +26,27 @@
  * if ciphers with a 256-bit security strength are used.  This is just the
  * absolute minimum, which applies when only 128-bit encryption is used.
  */
 #define FSCRYPT_MIN_KEY_SIZE	16
 
+/* Maximum size of a raw fscrypt master key */
+#define FSCRYPT_MAX_RAW_KEY_SIZE	64
+
+/* Maximum size of a hardware-wrapped fscrypt master key */
+#define FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE	BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE
+
+/* Maximum size of an fscrypt master key across both key types */
+#define FSCRYPT_MAX_ANY_KEY_SIZE \
+	MAX(FSCRYPT_MAX_RAW_KEY_SIZE, FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE)
+
+/*
+ * FSCRYPT_MAX_KEY_SIZE is defined in the UAPI header, but the addition of
+ * hardware-wrapped keys has made it misleading as it's only for raw keys.
+ * Don't use it in kernel code; use one of the above constants instead.
+ */
+#undef FSCRYPT_MAX_KEY_SIZE
+
 #define FSCRYPT_CONTEXT_V1	1
 #define FSCRYPT_CONTEXT_V2	2
 
 /* Keep this in sync with include/uapi/linux/fscrypt.h */
 #define FSCRYPT_MODE_MAX	FSCRYPT_MODE_AES_256_HCTR2
@@ -358,41 +376,49 @@ int fscrypt_init_hkdf(struct fscrypt_hkdf *hkdf, const u8 *master_key,
  * the first byte of the HKDF application-specific info string to guarantee that
  * info strings are never repeated between contexts.  This ensures that all HKDF
  * outputs are unique and cryptographically isolated, i.e. knowledge of one
  * output doesn't reveal another.
  */
-#define HKDF_CONTEXT_KEY_IDENTIFIER	1 /* info=<empty>		*/
+#define HKDF_CONTEXT_KEY_IDENTIFIER_FOR_RAW_KEY	1 /* info=<empty>	*/
 #define HKDF_CONTEXT_PER_FILE_ENC_KEY	2 /* info=file_nonce		*/
 #define HKDF_CONTEXT_DIRECT_KEY		3 /* info=mode_num		*/
 #define HKDF_CONTEXT_IV_INO_LBLK_64_KEY	4 /* info=mode_num||fs_uuid	*/
 #define HKDF_CONTEXT_DIRHASH_KEY	5 /* info=file_nonce		*/
 #define HKDF_CONTEXT_IV_INO_LBLK_32_KEY	6 /* info=mode_num||fs_uuid	*/
 #define HKDF_CONTEXT_INODE_HASH_KEY	7 /* info=<empty>		*/
+#define HKDF_CONTEXT_KEY_IDENTIFIER_FOR_HW_WRAPPED_KEY \
+					8 /* info=<empty>		*/
 
 int fscrypt_hkdf_expand(const struct fscrypt_hkdf *hkdf, u8 context,
 			const u8 *info, unsigned int infolen,
 			u8 *okm, unsigned int okmlen);
 
 void fscrypt_destroy_hkdf(struct fscrypt_hkdf *hkdf);
 
 /* inline_crypt.c */
 #ifdef CONFIG_FS_ENCRYPTION_INLINE_CRYPT
-int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci);
+int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci,
+				   bool is_hw_wrapped_key);
 
 static inline bool
 fscrypt_using_inline_encryption(const struct fscrypt_inode_info *ci)
 {
 	return ci->ci_inlinecrypt;
 }
 
 int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
-				     const u8 *raw_key,
+				     const u8 *key_bytes, size_t key_size,
+				     bool is_hw_wrapped,
 				     const struct fscrypt_inode_info *ci);
 
 void fscrypt_destroy_inline_crypt_key(struct super_block *sb,
 				      struct fscrypt_prepared_key *prep_key);
 
+int fscrypt_derive_sw_secret(struct super_block *sb,
+			     const u8 *wrapped_key, size_t wrapped_key_size,
+			     u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]);
+
 /*
  * Check whether the crypto transform or blk-crypto key has been allocated in
  * @prep_key, depending on which encryption implementation the file will use.
  */
 static inline bool
@@ -412,11 +438,12 @@ fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key,
 	return smp_load_acquire(&prep_key->tfm) != NULL;
 }
 
 #else /* CONFIG_FS_ENCRYPTION_INLINE_CRYPT */
 
-static inline int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci)
+static inline int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci,
+						 bool is_hw_wrapped_key)
 {
 	return 0;
 }
 
 static inline bool
@@ -425,11 +452,12 @@ fscrypt_using_inline_encryption(const struct fscrypt_inode_info *ci)
 	return false;
 }
 
 static inline int
 fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
-				 const u8 *raw_key,
+				 const u8 *key_bytes, size_t key_size,
+				 bool is_hw_wrapped,
 				 const struct fscrypt_inode_info *ci)
 {
 	WARN_ON_ONCE(1);
 	return -EOPNOTSUPP;
 }
@@ -438,10 +466,19 @@ static inline void
 fscrypt_destroy_inline_crypt_key(struct super_block *sb,
 				 struct fscrypt_prepared_key *prep_key)
 {
 }
 
+static inline int
+fscrypt_derive_sw_secret(struct super_block *sb,
+			 const u8 *wrapped_key, size_t wrapped_key_size,
+			 u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
+{
+	fscrypt_warn(NULL, "kernel doesn't support hardware-wrapped keys");
+	return -EOPNOTSUPP;
+}
+
 static inline bool
 fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key,
 			const struct fscrypt_inode_info *ci)
 {
 	return smp_load_acquire(&prep_key->tfm) != NULL;
@@ -454,24 +491,42 @@ fscrypt_is_key_prepared(struct fscrypt_prepared_key *prep_key,
  * fscrypt_master_key_secret - secret key material of an in-use master key
  */
 struct fscrypt_master_key_secret {
 
 	/*
-	 * For v2 policy keys: HKDF context keyed by this master key.
-	 * For v1 policy keys: not set (hkdf.hmac_tfm == NULL).
+	 * The KDF with which subkeys of this key can be derived.
+	 *
+	 * For v1 policy keys, this isn't applicable and won't be set.
+	 * Otherwise, this KDF will be keyed by this master key if
+	 * ->is_hw_wrapped=false, or by the "software secret" that hardware
+	 * derived from this master key if ->is_hw_wrapped=true.
 	 */
 	struct fscrypt_hkdf	hkdf;
 
 	/*
-	 * Size of the raw key in bytes.  This remains set even if ->raw was
+	 * True if this key is a hardware-wrapped key; false if this key is a
+	 * raw key (i.e. a "software key").  For v1 policy keys this will always
+	 * be false, as v1 policy support is a legacy feature which doesn't
+	 * support newer functionality such as hardware-wrapped keys.
+	 */
+	bool			is_hw_wrapped;
+
+	/*
+	 * Size of the key in bytes.  This remains set even if ->bytes was
 	 * zeroized due to no longer being needed.  I.e. we still remember the
 	 * size of the key even if we don't need to remember the key itself.
 	 */
 	u32			size;
 
-	/* For v1 policy keys: the raw key.  Wiped for v2 policy keys. */
-	u8			raw[FSCRYPT_MAX_KEY_SIZE];
+	/*
+	 * The bytes of the key, when still needed.  This can be either a raw
+	 * key or a hardware-wrapped key, as indicated by ->is_hw_wrapped.  In
+	 * the case of a raw, v2 policy key, there is no need to remember the
+	 * actual key separately from ->hkdf so this field will be zeroized as
+	 * soon as ->hkdf is initialized.
+	 */
+	u8			bytes[FSCRYPT_MAX_ANY_KEY_SIZE];
 
 } __randomize_layout;
 
 /*
  * fscrypt_master_key - an in-use master key
diff --git a/fs/crypto/hkdf.c b/fs/crypto/hkdf.c
index 5a384dad2c72..7e007810e434 100644
--- a/fs/crypto/hkdf.c
+++ b/fs/crypto/hkdf.c
@@ -2,11 +2,13 @@
 /*
  * Implementation of HKDF ("HMAC-based Extract-and-Expand Key Derivation
  * Function"), aka RFC 5869.  See also the original paper (Krawczyk 2010):
  * "Cryptographic Extraction and Key Derivation: The HKDF Scheme".
  *
- * This is used to derive keys from the fscrypt master keys.
+ * This is used to derive keys from the fscrypt master keys (or from the
+ * "software secrets" which hardware derives from the fscrypt master keys, in
+ * the case that the fscrypt master keys are hardware-wrapped keys).
  *
  * Copyright 2019 Google LLC
  */
 
 #include <crypto/hash.h>
diff --git a/fs/crypto/inline_crypt.c b/fs/crypto/inline_crypt.c
index 7fa53d30aec3..1d008c440cb6 100644
--- a/fs/crypto/inline_crypt.c
+++ b/fs/crypto/inline_crypt.c
@@ -87,11 +87,12 @@ static void fscrypt_log_blk_crypto_impl(struct fscrypt_mode *mode,
 		}
 	}
 }
 
 /* Enable inline encryption for this file if supported. */
-int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci)
+int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci,
+				   bool is_hw_wrapped_key)
 {
 	const struct inode *inode = ci->ci_inode;
 	struct super_block *sb = inode->i_sb;
 	struct blk_crypto_config crypto_cfg;
 	struct block_device **devs;
@@ -128,11 +129,12 @@ int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci)
 	 * crypto configuration that the file would use.
 	 */
 	crypto_cfg.crypto_mode = ci->ci_mode->blk_crypto_mode;
 	crypto_cfg.data_unit_size = 1U << ci->ci_data_unit_bits;
 	crypto_cfg.dun_bytes = fscrypt_get_dun_bytes(ci);
-	crypto_cfg.key_type = BLK_CRYPTO_KEY_TYPE_RAW;
+	crypto_cfg.key_type = is_hw_wrapped_key ?
+		BLK_CRYPTO_KEY_TYPE_HW_WRAPPED : BLK_CRYPTO_KEY_TYPE_RAW;
 
 	devs = fscrypt_get_devices(sb, &num_devs);
 	if (IS_ERR(devs))
 		return PTR_ERR(devs);
 
@@ -149,29 +151,31 @@ int fscrypt_select_encryption_impl(struct fscrypt_inode_info *ci)
 
 	return 0;
 }
 
 int fscrypt_prepare_inline_crypt_key(struct fscrypt_prepared_key *prep_key,
-				     const u8 *raw_key,
+				     const u8 *key_bytes, size_t key_size,
+				     bool is_hw_wrapped,
 				     const struct fscrypt_inode_info *ci)
 {
 	const struct inode *inode = ci->ci_inode;
 	struct super_block *sb = inode->i_sb;
 	enum blk_crypto_mode_num crypto_mode = ci->ci_mode->blk_crypto_mode;
+	enum blk_crypto_key_type key_type = is_hw_wrapped ?
+		BLK_CRYPTO_KEY_TYPE_HW_WRAPPED : BLK_CRYPTO_KEY_TYPE_RAW;
 	struct blk_crypto_key *blk_key;
 	struct block_device **devs;
 	unsigned int num_devs;
 	unsigned int i;
 	int err;
 
 	blk_key = kmalloc(sizeof(*blk_key), GFP_KERNEL);
 	if (!blk_key)
 		return -ENOMEM;
 
-	err = blk_crypto_init_key(blk_key, raw_key, ci->ci_mode->keysize,
-				  BLK_CRYPTO_KEY_TYPE_RAW, crypto_mode,
-				  fscrypt_get_dun_bytes(ci),
+	err = blk_crypto_init_key(blk_key, key_bytes, key_size, key_type,
+				  crypto_mode, fscrypt_get_dun_bytes(ci),
 				  1U << ci->ci_data_unit_bits);
 	if (err) {
 		fscrypt_err(inode, "error %d initializing blk-crypto key", err);
 		goto fail;
 	}
@@ -226,10 +230,38 @@ void fscrypt_destroy_inline_crypt_key(struct super_block *sb,
 		kfree(devs);
 	}
 	kfree_sensitive(blk_key);
 }
 
+/*
+ * Ask the inline encryption hardware to derive the software secret from a
+ * hardware-wrapped key.  Returns -EOPNOTSUPP if hardware-wrapped keys aren't
+ * supported on this filesystem or hardware.
+ */
+int fscrypt_derive_sw_secret(struct super_block *sb,
+			     const u8 *wrapped_key, size_t wrapped_key_size,
+			     u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
+{
+	int err;
+
+	/* The filesystem must be mounted with -o inlinecrypt. */
+	if (!(sb->s_flags & SB_INLINECRYPT)) {
+		fscrypt_warn(NULL,
+			     "%s: filesystem not mounted with inlinecrypt\n",
+			     sb->s_id);
+		return -EOPNOTSUPP;
+	}
+
+	err = blk_crypto_derive_sw_secret(sb->s_bdev, wrapped_key,
+					  wrapped_key_size, sw_secret);
+	if (err == -EOPNOTSUPP)
+		fscrypt_warn(NULL,
+			     "%s: block device doesn't support hardware-wrapped keys\n",
+			     sb->s_id);
+	return err;
+}
+
 bool __fscrypt_inode_uses_inline_crypto(const struct inode *inode)
 {
 	return inode->i_crypt_info->ci_inlinecrypt;
 }
 EXPORT_SYMBOL_GPL(__fscrypt_inode_uses_inline_crypto);
diff --git a/fs/crypto/keyring.c b/fs/crypto/keyring.c
index 787e9c8938ba..9c2f51b62ff2 100644
--- a/fs/crypto/keyring.c
+++ b/fs/crypto/keyring.c
@@ -147,15 +147,15 @@ static inline bool valid_key_spec(const struct fscrypt_key_specifier *spec)
 
 static int fscrypt_user_key_instantiate(struct key *key,
 					struct key_preparsed_payload *prep)
 {
 	/*
-	 * We just charge FSCRYPT_MAX_KEY_SIZE bytes to the user's key quota for
-	 * each key, regardless of the exact key size.  The amount of memory
+	 * We just charge FSCRYPT_MAX_RAW_KEY_SIZE bytes to the user's key quota
+	 * for each key, regardless of the exact key size.  The amount of memory
 	 * actually used is greater than the size of the raw key anyway.
 	 */
-	return key_payload_reserve(key, FSCRYPT_MAX_KEY_SIZE);
+	return key_payload_reserve(key, FSCRYPT_MAX_RAW_KEY_SIZE);
 }
 
 static void fscrypt_user_key_describe(const struct key *key, struct seq_file *m)
 {
 	seq_puts(m, key->description);
@@ -551,50 +551,95 @@ static int do_add_master_key(struct super_block *sb,
 	return err;
 }
 
 static int add_master_key(struct super_block *sb,
 			  struct fscrypt_master_key_secret *secret,
-			  struct fscrypt_key_specifier *key_spec)
+			  struct fscrypt_key_specifier *key_spec, u32 flags)
 {
 	int err;
 
 	if (key_spec->type == FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER) {
-		err = fscrypt_init_hkdf(&secret->hkdf, secret->raw,
-					secret->size);
-		if (err)
-			return err;
+		u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE];
+		u8 *kdf_key = secret->bytes;
+		unsigned int kdf_key_size = secret->size;
+		u8 keyid_kdf_ctx = HKDF_CONTEXT_KEY_IDENTIFIER_FOR_RAW_KEY;
 
 		/*
-		 * Now that the HKDF context is initialized, the raw key is no
-		 * longer needed.
+		 * For raw keys, the fscrypt master key is used directly as the
+		 * fscrypt KDF key.  For hardware-wrapped keys, we have to pass
+		 * the master key to the hardware to derive the KDF key, which
+		 * is then only used to derive non-file-contents subkeys.
+		 */
+		if (secret->is_hw_wrapped) {
+			err = fscrypt_derive_sw_secret(sb, secret->bytes,
+						       secret->size, sw_secret);
+			if (err)
+				return err;
+			kdf_key = sw_secret;
+			kdf_key_size = sizeof(sw_secret);
+			/*
+			 * To avoid weird behavior if someone manages to
+			 * determine sw_secret and add it as a raw key, ensure
+			 * that hardware-wrapped keys and raw keys will have
+			 * different key identifiers by deriving their key
+			 * identifiers using different KDF contexts.
+			 */
+			if (!(flags & FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V0))
+				keyid_kdf_ctx =
+					HKDF_CONTEXT_KEY_IDENTIFIER_FOR_HW_WRAPPED_KEY;
+		}
+		err = fscrypt_init_hkdf(&secret->hkdf, kdf_key, kdf_key_size);
+		/*
+		 * Now that the KDF context is initialized, the raw KDF key is
+		 * no longer needed.
 		 */
-		memzero_explicit(secret->raw, secret->size);
+		memzero_explicit(kdf_key, kdf_key_size);
+		if (err)
+			return err;
 
 		/* Calculate the key identifier */
-		err = fscrypt_hkdf_expand(&secret->hkdf,
-					  HKDF_CONTEXT_KEY_IDENTIFIER, NULL, 0,
+		err = fscrypt_hkdf_expand(&secret->hkdf, keyid_kdf_ctx, NULL, 0,
 					  key_spec->u.identifier,
 					  FSCRYPT_KEY_IDENTIFIER_SIZE);
 		if (err)
 			return err;
 	}
 	return do_add_master_key(sb, secret, key_spec);
 }
 
+/*
+ * Validate the size of an fscrypt master key being added.  Note that this is
+ * just an initial check, as we don't know which ciphers will be used yet.
+ * There is a stricter size check later when the key is actually used by a file.
+ */
+static inline bool fscrypt_valid_key_size(size_t size, u32 add_key_flags)
+{
+	u32 max_size = (add_key_flags & (FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V0 |
+					 FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V1)) ?
+		       FSCRYPT_MAX_HW_WRAPPED_KEY_SIZE :
+		       FSCRYPT_MAX_RAW_KEY_SIZE;
+
+	return size >= FSCRYPT_MIN_KEY_SIZE && size <= max_size;
+}
+
 static int fscrypt_provisioning_key_preparse(struct key_preparsed_payload *prep)
 {
 	const struct fscrypt_provisioning_key_payload *payload = prep->data;
 
-	if (prep->datalen < sizeof(*payload) + FSCRYPT_MIN_KEY_SIZE ||
-	    prep->datalen > sizeof(*payload) + FSCRYPT_MAX_KEY_SIZE)
+	if (prep->datalen < sizeof(*payload))
+		return -EINVAL;
+
+	if (!fscrypt_valid_key_size(prep->datalen - sizeof(*payload),
+				    payload->flags))
 		return -EINVAL;
 
 	if (payload->type != FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR &&
 	    payload->type != FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER)
 		return -EINVAL;
 
-	if (payload->__reserved)
+	if (payload->flags & ~(FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V0 |
+			       FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V1))
 		return -EINVAL;
 
 	prep->payload.data[0] = kmemdup(payload, prep->datalen, GFP_KERNEL);
 	if (!prep->payload.data[0])
 		return -ENOMEM;
@@ -634,25 +679,25 @@ static struct key_type key_type_fscrypt_provisioning = {
 	.describe		= fscrypt_provisioning_key_describe,
 	.destroy		= fscrypt_provisioning_key_destroy,
 };
 
 /*
- * Retrieve the raw key from the Linux keyring key specified by 'key_id', and
- * store it into 'secret'.
+ * Retrieve the key from the Linux keyring key specified by 'key_id', and store
+ * it into 'secret'.
  *
- * The key must be of type "fscrypt-provisioning" and must have the field
- * fscrypt_provisioning_key_payload::type set to 'type', indicating that it's
- * only usable with fscrypt with the particular KDF version identified by
- * 'type'.  We don't use the "logon" key type because there's no way to
- * completely restrict the use of such keys; they can be used by any kernel API
- * that accepts "logon" keys and doesn't require a specific service prefix.
+ * The key must be of type "fscrypt-provisioning" and must have the 'type' and
+ * 'flags' field of the payload set to the given values, indicating that the key
+ * is intended for use for the specified purpose.  We don't use the "logon" key
+ * type because there's no way to completely restrict the use of such keys; they
+ * can be used by any kernel API that accepts "logon" keys and doesn't require a
+ * specific service prefix.
  *
  * The ability to specify the key via Linux keyring key is intended for cases
  * where userspace needs to re-add keys after the filesystem is unmounted and
- * re-mounted.  Most users should just provide the raw key directly instead.
+ * re-mounted.  Most users should just provide the key directly instead.
  */
-static int get_keyring_key(u32 key_id, u32 type,
+static int get_keyring_key(u32 key_id, u32 type, u32 flags,
 			   struct fscrypt_master_key_secret *secret)
 {
 	key_ref_t ref;
 	struct key *key;
 	const struct fscrypt_provisioning_key_payload *payload;
@@ -665,16 +710,20 @@ static int get_keyring_key(u32 key_id, u32 type,
 
 	if (key->type != &key_type_fscrypt_provisioning)
 		goto bad_key;
 	payload = key->payload.data[0];
 
-	/* Don't allow fscrypt v1 keys to be used as v2 keys and vice versa. */
-	if (payload->type != type)
+	/*
+	 * Don't allow fscrypt v1 keys to be used as v2 keys and vice versa.
+	 * Similarly, don't allow hardware-wrapped keys to be used as
+	 * non-hardware-wrapped keys and vice versa.
+	 */
+	if (payload->type != type || payload->flags != flags)
 		goto bad_key;
 
 	secret->size = key->datalen - sizeof(*payload);
-	memcpy(secret->raw, payload->raw, secret->size);
+	memcpy(secret->bytes, payload->raw, secret->size);
 	err = 0;
 	goto out_put;
 
 bad_key:
 	err = -EKEYREJECTED;
@@ -732,27 +781,52 @@ int fscrypt_ioctl_add_key(struct file *filp, void __user *_uarg)
 	if (arg.key_spec.type == FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR &&
 	    !capable(CAP_SYS_ADMIN))
 		return -EACCES;
 
 	memset(&secret, 0, sizeof(secret));
+
+	if (arg.flags) {
+		if (arg.flags & ~(FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V0 |
+				  FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V1))
+			return -EINVAL;
+		if (arg.flags & FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V0) {
+			if (arg.flags & FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V1)
+				return -EINVAL; /* Ambiguous flags */
+			/*
+			 * HW_WRAPPED_V0 keys reuse the same HKDF context byte
+			 * as raw keys when deriving the key identifier.  This
+			 * creates an ambiguity where a file encrypted by a
+			 * HW-wrapped key can be unlocked with the wrong key by
+			 * using a raw key.  Strictly speaking this breaks the
+			 * security model of the fscrypt keyring where different
+			 * keys should have different identifiers.  Thus
+			 * technically HW_WRAPPED_V0 is only safe when keys are
+			 * system-managed, so we require CAP_SYS_ADMIN for it.
+			 */
+			if (!capable(CAP_SYS_ADMIN))
+				return -EACCES;
+		}
+		secret.is_hw_wrapped = true;
+	}
+
 	if (arg.key_id) {
 		if (arg.raw_size != 0)
 			return -EINVAL;
-		err = get_keyring_key(arg.key_id, arg.key_spec.type, &secret);
+		err = get_keyring_key(arg.key_id, arg.key_spec.type, arg.flags,
+				      &secret);
 		if (err)
 			goto out_wipe_secret;
 	} else {
-		if (arg.raw_size < FSCRYPT_MIN_KEY_SIZE ||
-		    arg.raw_size > FSCRYPT_MAX_KEY_SIZE)
+		if (!fscrypt_valid_key_size(arg.raw_size, arg.flags))
 			return -EINVAL;
 		secret.size = arg.raw_size;
 		err = -EFAULT;
-		if (copy_from_user(secret.raw, uarg->raw, secret.size))
+		if (copy_from_user(secret.bytes, uarg->raw, secret.size))
 			goto out_wipe_secret;
 	}
 
-	err = add_master_key(sb, &secret, &arg.key_spec);
+	err = add_master_key(sb, &secret, &arg.key_spec, arg.flags);
 	if (err)
 		goto out_wipe_secret;
 
 	/* Return the key identifier to userspace, if applicable */
 	err = -EFAULT;
@@ -768,31 +842,32 @@ int fscrypt_ioctl_add_key(struct file *filp, void __user *_uarg)
 EXPORT_SYMBOL_GPL(fscrypt_ioctl_add_key);
 
 static void
 fscrypt_get_test_dummy_secret(struct fscrypt_master_key_secret *secret)
 {
-	static u8 test_key[FSCRYPT_MAX_KEY_SIZE];
+	static u8 test_key[FSCRYPT_MAX_RAW_KEY_SIZE];
 
-	get_random_once(test_key, FSCRYPT_MAX_KEY_SIZE);
+	get_random_once(test_key, sizeof(test_key));
 
 	memset(secret, 0, sizeof(*secret));
-	secret->size = FSCRYPT_MAX_KEY_SIZE;
-	memcpy(secret->raw, test_key, FSCRYPT_MAX_KEY_SIZE);
+	secret->size = sizeof(test_key);
+	memcpy(secret->bytes, test_key, sizeof(test_key));
 }
 
 int fscrypt_get_test_dummy_key_identifier(
 				u8 key_identifier[FSCRYPT_KEY_IDENTIFIER_SIZE])
 {
 	struct fscrypt_master_key_secret secret;
 	int err;
 
 	fscrypt_get_test_dummy_secret(&secret);
 
-	err = fscrypt_init_hkdf(&secret.hkdf, secret.raw, secret.size);
+	err = fscrypt_init_hkdf(&secret.hkdf, secret.bytes, secret.size);
 	if (err)
 		goto out;
-	err = fscrypt_hkdf_expand(&secret.hkdf, HKDF_CONTEXT_KEY_IDENTIFIER,
+	err = fscrypt_hkdf_expand(&secret.hkdf,
+				  HKDF_CONTEXT_KEY_IDENTIFIER_FOR_RAW_KEY,
 				  NULL, 0, key_identifier,
 				  FSCRYPT_KEY_IDENTIFIER_SIZE);
 out:
 	wipe_master_key_secret(&secret);
 	return err;
@@ -815,11 +890,11 @@ int fscrypt_add_test_dummy_key(struct super_block *sb,
 {
 	struct fscrypt_master_key_secret secret;
 	int err;
 
 	fscrypt_get_test_dummy_secret(&secret);
-	err = add_master_key(sb, &secret, key_spec);
+	err = add_master_key(sb, &secret, key_spec, 0);
 	wipe_master_key_secret(&secret);
 	return err;
 }
 
 /*
diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
index b4fe01ea4bd4..5bbbfdb6a1a3 100644
--- a/fs/crypto/keysetup.c
+++ b/fs/crypto/keysetup.c
@@ -151,11 +151,13 @@ int fscrypt_prepare_key(struct fscrypt_prepared_key *prep_key,
 			const u8 *raw_key, const struct fscrypt_inode_info *ci)
 {
 	struct crypto_skcipher *tfm;
 
 	if (fscrypt_using_inline_encryption(ci))
-		return fscrypt_prepare_inline_crypt_key(prep_key, raw_key, ci);
+		return fscrypt_prepare_inline_crypt_key(prep_key, raw_key,
+							ci->ci_mode->keysize,
+							false, ci);
 
 	tfm = fscrypt_allocate_skcipher(ci->ci_mode, raw_key, ci->ci_inode);
 	if (IS_ERR(tfm))
 		return PTR_ERR(tfm);
 	/*
@@ -193,18 +195,33 @@ static int setup_per_mode_enc_key(struct fscrypt_inode_info *ci,
 	const struct inode *inode = ci->ci_inode;
 	const struct super_block *sb = inode->i_sb;
 	struct fscrypt_mode *mode = ci->ci_mode;
 	const u8 mode_num = mode - fscrypt_modes;
 	struct fscrypt_prepared_key *prep_key;
-	u8 mode_key[FSCRYPT_MAX_KEY_SIZE];
+	u8 mode_key[FSCRYPT_MAX_RAW_KEY_SIZE];
 	u8 hkdf_info[sizeof(mode_num) + sizeof(sb->s_uuid)];
 	unsigned int hkdf_infolen = 0;
+	bool use_hw_wrapped_key = false;
 	int err;
 
 	if (WARN_ON_ONCE(mode_num > FSCRYPT_MODE_MAX))
 		return -EINVAL;
 
+	if (mk->mk_secret.is_hw_wrapped && S_ISREG(inode->i_mode)) {
+		/* Using a hardware-wrapped key for file contents encryption */
+		if (!fscrypt_using_inline_encryption(ci)) {
+			if (sb->s_flags & SB_INLINECRYPT)
+				fscrypt_warn(ci->ci_inode,
+					     "Hardware-wrapped key required, but no suitable inline encryption capabilities are available");
+			else
+				fscrypt_warn(ci->ci_inode,
+					     "Hardware-wrapped keys require inline encryption (-o inlinecrypt)");
+			return -EINVAL;
+		}
+		use_hw_wrapped_key = true;
+	}
+
 	prep_key = &keys[mode_num];
 	if (fscrypt_is_key_prepared(prep_key, ci)) {
 		ci->ci_enc_key = *prep_key;
 		return 0;
 	}
@@ -212,10 +229,20 @@ static int setup_per_mode_enc_key(struct fscrypt_inode_info *ci,
 	mutex_lock(&fscrypt_mode_key_setup_mutex);
 
 	if (fscrypt_is_key_prepared(prep_key, ci))
 		goto done_unlock;
 
+	if (use_hw_wrapped_key) {
+		err = fscrypt_prepare_inline_crypt_key(prep_key,
+						       mk->mk_secret.bytes,
+						       mk->mk_secret.size, true,
+						       ci);
+		if (err)
+			goto out_unlock;
+		goto done_unlock;
+	}
+
 	BUILD_BUG_ON(sizeof(mode_num) != 1);
 	BUILD_BUG_ON(sizeof(sb->s_uuid) != 16);
 	BUILD_BUG_ON(sizeof(hkdf_info) != 17);
 	hkdf_info[hkdf_infolen++] = mode_num;
 	if (include_fs_uuid) {
@@ -334,10 +361,18 @@ static int fscrypt_setup_v2_file_key(struct fscrypt_inode_info *ci,
 				     struct fscrypt_master_key *mk,
 				     bool need_dirhash_key)
 {
 	int err;
 
+	if (mk->mk_secret.is_hw_wrapped &&
+	    !(ci->ci_policy.v2.flags & (FSCRYPT_POLICY_FLAG_IV_INO_LBLK_64 |
+					FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32))) {
+		fscrypt_warn(ci->ci_inode,
+			     "Hardware-wrapped keys are only supported with IV_INO_LBLK policies");
+		return -EINVAL;
+	}
+
 	if (ci->ci_policy.v2.flags & FSCRYPT_POLICY_FLAG_DIRECT_KEY) {
 		/*
 		 * DIRECT_KEY: instead of deriving per-file encryption keys, the
 		 * per-file nonce will be included in all the IVs.  But unlike
 		 * v1 policies, for v2 policies in this case we don't encrypt
@@ -360,11 +395,11 @@ static int fscrypt_setup_v2_file_key(struct fscrypt_inode_info *ci,
 					     true);
 	} else if (ci->ci_policy.v2.flags &
 		   FSCRYPT_POLICY_FLAG_IV_INO_LBLK_32) {
 		err = fscrypt_setup_iv_ino_lblk_32_key(ci, mk);
 	} else {
-		u8 derived_key[FSCRYPT_MAX_KEY_SIZE];
+		u8 derived_key[FSCRYPT_MAX_RAW_KEY_SIZE];
 
 		err = fscrypt_hkdf_expand(&mk->mk_secret.hkdf,
 					  HKDF_CONTEXT_PER_FILE_ENC_KEY,
 					  ci->ci_nonce, FSCRYPT_FILE_NONCE_SIZE,
 					  derived_key, ci->ci_mode->keysize);
@@ -443,14 +478,10 @@ static int setup_file_encryption_key(struct fscrypt_inode_info *ci,
 	struct super_block *sb = ci->ci_inode->i_sb;
 	struct fscrypt_key_specifier mk_spec;
 	struct fscrypt_master_key *mk;
 	int err;
 
-	err = fscrypt_select_encryption_impl(ci);
-	if (err)
-		return err;
-
 	err = fscrypt_policy_to_key_spec(&ci->ci_policy, &mk_spec);
 	if (err)
 		return err;
 
 	mk = fscrypt_find_master_key(sb, &mk_spec);
@@ -474,10 +505,14 @@ static int setup_file_encryption_key(struct fscrypt_inode_info *ci,
 	}
 	if (unlikely(!mk)) {
 		if (ci->ci_policy.version != FSCRYPT_POLICY_V1)
 			return -ENOKEY;
 
+		err = fscrypt_select_encryption_impl(ci, false);
+		if (err)
+			return err;
+
 		/*
 		 * As a legacy fallback for v1 policies, search for the key in
 		 * the current task's subscribed keyrings too.  Don't move this
 		 * to before the search of ->s_master_keys, since users
 		 * shouldn't be able to override filesystem-level keys.
@@ -495,13 +530,25 @@ static int setup_file_encryption_key(struct fscrypt_inode_info *ci,
 	if (!fscrypt_valid_master_key_size(mk, ci)) {
 		err = -ENOKEY;
 		goto out_release_key;
 	}
 
+	err = fscrypt_select_encryption_impl(ci, mk->mk_secret.is_hw_wrapped);
+	if (err)
+		goto out_release_key;
+
 	switch (ci->ci_policy.version) {
 	case FSCRYPT_POLICY_V1:
-		err = fscrypt_setup_v1_file_key(ci, mk->mk_secret.raw);
+		if (WARN_ON(mk->mk_secret.is_hw_wrapped)) {
+			/*
+			 * This should never happen, as adding a v1 policy key
+			 * that is hardware-wrapped isn't allowed.
+			 */
+			err = -EINVAL;
+			goto out_release_key;
+		}
+		err = fscrypt_setup_v1_file_key(ci, mk->mk_secret.bytes);
 		break;
 	case FSCRYPT_POLICY_V2:
 		err = fscrypt_setup_v2_file_key(ci, mk, need_dirhash_key);
 		break;
 	default:
diff --git a/fs/crypto/keysetup_v1.c b/fs/crypto/keysetup_v1.c
index cf3b58ec32cc..b70521c55132 100644
--- a/fs/crypto/keysetup_v1.c
+++ b/fs/crypto/keysetup_v1.c
@@ -116,11 +116,11 @@ find_and_lock_process_key(const char *prefix,
 		goto invalid;
 
 	payload = (const struct fscrypt_key *)ukp->data;
 
 	if (ukp->datalen != sizeof(struct fscrypt_key) ||
-	    payload->size < 1 || payload->size > FSCRYPT_MAX_KEY_SIZE) {
+	    payload->size < 1 || payload->size > sizeof(payload->raw)) {
 		fscrypt_warn(NULL,
 			     "key with description '%s' has invalid payload",
 			     key->description);
 		goto invalid;
 	}
@@ -147,11 +147,11 @@ struct fscrypt_direct_key {
 	struct hlist_node		dk_node;
 	refcount_t			dk_refcount;
 	const struct fscrypt_mode	*dk_mode;
 	struct fscrypt_prepared_key	dk_key;
 	u8				dk_descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
-	u8				dk_raw[FSCRYPT_MAX_KEY_SIZE];
+	u8				dk_raw[FSCRYPT_MAX_RAW_KEY_SIZE];
 };
 
 static void free_direct_key(struct fscrypt_direct_key *dk)
 {
 	if (dk) {
diff --git a/include/uapi/linux/fscrypt.h b/include/uapi/linux/fscrypt.h
index 7a8f4c290187..6246a190934a 100644
--- a/include/uapi/linux/fscrypt.h
+++ b/include/uapi/linux/fscrypt.h
@@ -117,20 +117,23 @@ struct fscrypt_key_specifier {
  * Payload of Linux keyring key of type "fscrypt-provisioning", referenced by
  * fscrypt_add_key_arg::key_id as an alternative to fscrypt_add_key_arg::raw.
  */
 struct fscrypt_provisioning_key_payload {
 	__u32 type;
-	__u32 __reserved;
+	__u32 flags;
 	__u8 raw[];
 };
 
 /* Struct passed to FS_IOC_ADD_ENCRYPTION_KEY */
 struct fscrypt_add_key_arg {
 	struct fscrypt_key_specifier key_spec;
 	__u32 raw_size;
 	__u32 key_id;
-	__u32 __reserved[8];
+#define FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V0	0x00000001
+#define FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V1	0x00000002
+	__u32 flags;
+	__u32 __reserved[7];
 	__u8 raw[];
 };
 
 /* Struct passed to FS_IOC_REMOVE_ENCRYPTION_KEY */
 struct fscrypt_remove_key_arg {
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread
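
As a rough illustration of the fscrypt_add_key_arg changes above (not part
of the patch): with this series applied, userspace adds a hardware-wrapped
key by passing the wrapped key blob for this boot (produced by the block
layer's key preparation step, as I understand the flow) together with the
new flags field.  The helper name and error handling below are made up for
the example; fscryptctl is expected to provide the real interface.

#include <string.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

/* Add a hardware-wrapped key to the filesystem containing 'mnt_fd'. */
static int add_hw_wrapped_key(int mnt_fd, const __u8 *blob, __u32 blob_size)
{
	struct fscrypt_add_key_arg *arg;
	int err;

	arg = calloc(1, sizeof(*arg) + blob_size);
	if (!arg)
		return -1;
	arg->key_spec.type = FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER;
	arg->flags = FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED_V1;
	arg->raw_size = blob_size;
	memcpy(arg->raw, blob, blob_size);

	/* On success, arg->key_spec.u.identifier holds the key identifier. */
	err = ioctl(mnt_fd, FS_IOC_ADD_ENCRYPTION_KEY, arg);
	free(arg);
	return err;
}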

* [PATCH v10 14/15] soc: qcom: ice: add HWKM support to the ICE driver
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (12 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 13/15] fscrypt: add support for " Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2024-12-13  4:19 ` [PATCH v10 15/15] ufs: qcom: add support for wrapped keys Eric Biggers
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, Bartosz Golaszewski

From: Gaurav Kashyap <quic_gaurkash@quicinc.com>

Qualcomm's ICE (Inline Crypto Engine) contains proprietary key
management hardware called the Hardware Key Manager (HWKM). Add HWKM
support to the ICE driver if it is available on the platform. HWKM
primarily provides hardware-wrapped key support, where the ICE (storage)
keys are not available in software and are instead protected in hardware.

When HWKM software support is not fully available (from TrustZone),
there can be a scenario where the ICE hardware supports HWKM but it
cannot be used for wrapped keys. In that case, raw keys have to be used
without HWKM. We query TrustZone at run-time to find out whether wrapped
key support is available.

The selection of HWKM vs. non-HWKM mode has to be made at boot time, so
add a module parameter, qcom_ice.use_wrapped_keys, that enables HWKM
when set to 1.
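
For example (an illustrative sketch only, not part of the patch; the real
wiring for UFS is in the next patch in this series), a storage host driver
is expected to check qcom_ice_using_hwkm() and route wrapped-key operations
through the new ICE helpers added below.  The function name here is made up:

#include <linux/errno.h>
#include <linux/blk-crypto.h>
#include <soc/qcom/ice.h>

static int example_import_key(struct qcom_ice *ice,
			      const u8 *raw_key, size_t raw_key_size,
			      u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
{
	/* Wrapped keys are only available when HWKM mode was selected. */
	if (!qcom_ice_using_hwkm(ice))
		return -EOPNOTSUPP;

	/* Returns the wrapped key size on success, or -errno on failure. */
	return qcom_ice_import_key(ice, raw_key, raw_key_size, lt_key);
}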

Signed-off-by: Gaurav Kashyap <quic_gaurkash@quicinc.com>
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
[EB: merged related patches, fixed the module parameter to work
     correctly, fixed error handling, improved log messages, improved
     comments, improved commit message, fixed various names.]
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/mmc/host/sdhci-msm.c |   5 +
 drivers/soc/qcom/ice.c       | 360 ++++++++++++++++++++++++++++++++++-
 drivers/ufs/host/ufs-qcom.c  |   5 +
 include/soc/qcom/ice.h       |  12 ++
 4 files changed, 377 insertions(+), 5 deletions(-)

diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index 439e5907f940..35e937e4659d 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -1831,10 +1831,15 @@ static int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
 	}
 
 	if (IS_ERR_OR_NULL(ice))
 		return PTR_ERR_OR_ZERO(ice);
 
+	if (qcom_ice_using_hwkm(ice)) {
+		dev_warn(dev, "HWKM mode unsupported; disabling inline encryption support\n");
+		return 0;
+	}
+
 	msm_host->ice = ice;
 
 	/* Initialize the blk_crypto_profile */
 
 	caps.reg_val = cpu_to_le32(cqhci_readl(cq_host, CQHCI_CCAP));
diff --git a/drivers/soc/qcom/ice.c b/drivers/soc/qcom/ice.c
index 78780fd508f0..23027236fd43 100644
--- a/drivers/soc/qcom/ice.c
+++ b/drivers/soc/qcom/ice.c
@@ -20,34 +20,106 @@
 
 #include <soc/qcom/ice.h>
 
 #define AES_256_XTS_KEY_SIZE			64
 
+/*
+ * The wrapped key size that HWKM expects and manages differs between
+ * versions of the hardware.
+ */
+#define QCOM_ICE_HWKM_WRAPPED_KEY_SIZE(v)	\
+	((v) == 1 ? 68 : 100)
+
 /* QCOM ICE registers */
 #define QCOM_ICE_REG_VERSION			0x0008
 #define QCOM_ICE_REG_FUSE_SETTING		0x0010
 #define QCOM_ICE_REG_BIST_STATUS		0x0070
 #define QCOM_ICE_REG_ADVANCED_CONTROL		0x1000
+#define QCOM_ICE_REG_CONTROL			0x0
+#define QCOM_ICE_LUT_KEYS_CRYPTOCFG_R16		0x4040
+
+/* QCOM ICE HWKM registers */
+#define QCOM_ICE_REG_HWKM_TZ_KM_CTL			0x1000
+#define QCOM_ICE_REG_HWKM_TZ_KM_STATUS			0x1004
+#define QCOM_ICE_REG_HWKM_BANK0_BANKN_IRQ_STATUS	0x2008
+#define QCOM_ICE_REG_HWKM_BANK0_BBAC_0			0x5000
+#define QCOM_ICE_REG_HWKM_BANK0_BBAC_1			0x5004
+#define QCOM_ICE_REG_HWKM_BANK0_BBAC_2			0x5008
+#define QCOM_ICE_REG_HWKM_BANK0_BBAC_3			0x500C
+#define QCOM_ICE_REG_HWKM_BANK0_BBAC_4			0x5010
+
+/* QCOM ICE HWKM reg vals */
+#define QCOM_ICE_HWKM_BIST_DONE_V1		BIT(16)
+#define QCOM_ICE_HWKM_BIST_DONE_V2		BIT(9)
+#define QCOM_ICE_HWKM_BIST_DONE(ver)		QCOM_ICE_HWKM_BIST_DONE_V##ver
+
+#define QCOM_ICE_HWKM_CRYPTO_BIST_DONE_V1		BIT(14)
+#define QCOM_ICE_HWKM_CRYPTO_BIST_DONE_V2		BIT(7)
+#define QCOM_ICE_HWKM_CRYPTO_BIST_DONE(v)		QCOM_ICE_HWKM_CRYPTO_BIST_DONE_V##v
+
+#define QCOM_ICE_HWKM_BOOT_CMD_LIST1_DONE		BIT(2)
+#define QCOM_ICE_HWKM_BOOT_CMD_LIST0_DONE		BIT(1)
+#define QCOM_ICE_HWKM_KT_CLEAR_DONE			BIT(0)
+
+#define QCOM_ICE_HWKM_BIST_VAL(v)	(QCOM_ICE_HWKM_BIST_DONE(v) |		\
+					QCOM_ICE_HWKM_CRYPTO_BIST_DONE(v) |	\
+					QCOM_ICE_HWKM_BOOT_CMD_LIST1_DONE |	\
+					QCOM_ICE_HWKM_BOOT_CMD_LIST0_DONE |	\
+					QCOM_ICE_HWKM_KT_CLEAR_DONE)
+
+#define QCOM_ICE_HWKM_V1_STANDARD_MODE_VAL	(BIT(0) | BIT(1) | BIT(2))
+#define QCOM_ICE_HWKM_V2_STANDARD_MODE_MASK	GENMASK(31, 1)
+#define QCOM_ICE_HWKM_DISABLE_CRC_CHECKS_VAL	(BIT(1) | BIT(2))
+#define QCOM_ICE_HWKM_RSP_FIFO_CLEAR_VAL	BIT(3)
+
+#define QCOM_ICE_HWKM_CFG_ENABLE_VAL		BIT(7)
 
 /* BIST ("built-in self-test") status flags */
 #define QCOM_ICE_BIST_STATUS_MASK		GENMASK(31, 28)
 
 #define QCOM_ICE_FUSE_SETTING_MASK		0x1
 #define QCOM_ICE_FORCE_HW_KEY0_SETTING_MASK	0x2
 #define QCOM_ICE_FORCE_HW_KEY1_SETTING_MASK	0x4
 
+#define QCOM_ICE_LUT_KEYS_CRYPTOCFG_OFFSET	0x80
+
+#define QCOM_ICE_HWKM_REG_OFFSET	0x8000
+#define HWKM_OFFSET(reg)		((reg) + QCOM_ICE_HWKM_REG_OFFSET)
+
 #define qcom_ice_writel(engine, val, reg)	\
 	writel((val), (engine)->base + (reg))
 
 #define qcom_ice_readl(engine, reg)	\
 	readl((engine)->base + (reg))
 
+#define QCOM_ICE_LUT_CRYPTOCFG_SLOT_OFFSET(slot) \
+	(QCOM_ICE_LUT_KEYS_CRYPTOCFG_R16 + \
+	 QCOM_ICE_LUT_KEYS_CRYPTOCFG_OFFSET * slot)
+
+static bool qcom_ice_use_wrapped_keys;
+module_param_named(use_wrapped_keys, qcom_ice_use_wrapped_keys, bool, 0660);
+MODULE_PARM_DESC(use_wrapped_keys,
+		 "Support wrapped keys instead of raw keys, if available on the platform");
+
 struct qcom_ice {
 	struct device *dev;
 	void __iomem *base;
 
 	struct clk *core_clk;
+	u8 hwkm_version;
+	bool use_hwkm;
+	bool hwkm_init_complete;
+};
+
+union crypto_cfg {
+	__le32 regval;
+	struct {
+		u8 dusize;
+		u8 capidx;
+		u8 reserved;
+		u8 cfge;
+	};
 };
 
 static bool qcom_ice_check_supported(struct qcom_ice *ice)
 {
 	u32 regval = qcom_ice_readl(ice, QCOM_ICE_REG_VERSION);
@@ -61,12 +133,22 @@ static bool qcom_ice_check_supported(struct qcom_ice *ice)
 		dev_warn(dev, "Unsupported ICE version: v%d.%d.%d\n",
 			 major, minor, step);
 		return false;
 	}
 
-	dev_info(dev, "Found QC Inline Crypto Engine (ICE) v%d.%d.%d\n",
-		 major, minor, step);
+	if (major >= 4 || (major == 3 && minor == 2 && step >= 1))
+		ice->hwkm_version = 2;
+	else if (major == 3 && minor == 2)
+		ice->hwkm_version = 1;
+	else
+		ice->hwkm_version = 0;
+
+	if (ice->hwkm_version == 0)
+		ice->use_hwkm = false;
+
+	dev_info(dev, "Found QC Inline Crypto Engine (ICE) v%d.%d.%d, HWKM v%d\n",
+		 major, minor, step, ice->hwkm_version);
 
 	/* If fuses are blown, ICE might not work in the standard way. */
 	regval = qcom_ice_readl(ice, QCOM_ICE_REG_FUSE_SETTING);
 	if (regval & (QCOM_ICE_FUSE_SETTING_MASK |
 		      QCOM_ICE_FORCE_HW_KEY0_SETTING_MASK |
@@ -111,31 +193,109 @@ static void qcom_ice_optimization_enable(struct qcom_ice *ice)
  * because (a) the BIST is a FIPS compliance thing that never fails in
  * practice, (b) ICE is documented to reject crypto requests if the BIST
  * fails, so we needn't do it in software too, and (c) properly testing
  * storage encryption requires testing the full storage stack anyway,
  * and not relying on hardware-level self-tests.
+ *
+ * However, we still care whether the HWKM BIST failed (when supported),
+ * as important functionality would fail later, so disable HWKM on failure.
  */
 static int qcom_ice_wait_bist_status(struct qcom_ice *ice)
 {
 	u32 regval;
+	u32 bist_done_val;
 	int err;
 
 	err = readl_poll_timeout(ice->base + QCOM_ICE_REG_BIST_STATUS,
 				 regval, !(regval & QCOM_ICE_BIST_STATUS_MASK),
 				 50, 5000);
-	if (err)
+	if (err) {
 		dev_err(ice->dev, "Timed out waiting for ICE self-test to complete\n");
+		return err;
+	}
 
+	if (ice->use_hwkm) {
+		bist_done_val = ice->hwkm_version == 1 ?
+				QCOM_ICE_HWKM_BIST_VAL(1) :
+				QCOM_ICE_HWKM_BIST_VAL(2);
+		if (qcom_ice_readl(ice,
+				   HWKM_OFFSET(QCOM_ICE_REG_HWKM_TZ_KM_STATUS)) !=
+				   bist_done_val) {
+			dev_err(ice->dev, "HWKM BIST error\n");
+			ice->use_hwkm = false;
+			err = -ENODEV;
+		}
+	}
 	return err;
 }
 
+static void qcom_ice_enable_hwkm_mode(struct qcom_ice *ice)
+{
+	u32 val = 0;
+
+	/*
+	 * When ICE is in standard (hwkm) mode, it supports HW wrapped
+	 * keys, and when it is in legacy mode, it only supports raw keys.
+	 *
+	 * Put ICE in standard mode; ICE defaults to legacy mode.
+	 * Legacy mode - ICE HWKM slave not supported.
+	 * Standard mode - ICE HWKM slave supported.
+	 *
+	 * Depending on the version of HWKM, it is controlled by different
+	 * registers in ICE.
+	 */
+	if (ice->hwkm_version >= 2) {
+		val = qcom_ice_readl(ice, QCOM_ICE_REG_CONTROL);
+		val = val & QCOM_ICE_HWKM_V2_STANDARD_MODE_MASK;
+		qcom_ice_writel(ice, val, QCOM_ICE_REG_CONTROL);
+	} else {
+		qcom_ice_writel(ice, QCOM_ICE_HWKM_V1_STANDARD_MODE_VAL,
+				HWKM_OFFSET(QCOM_ICE_REG_HWKM_TZ_KM_CTL));
+	}
+}
+
+static void qcom_ice_hwkm_init(struct qcom_ice *ice)
+{
+	/* Disable CRC checks. This HWKM feature is not used. */
+	qcom_ice_writel(ice, QCOM_ICE_HWKM_DISABLE_CRC_CHECKS_VAL,
+			HWKM_OFFSET(QCOM_ICE_REG_HWKM_TZ_KM_CTL));
+
+	/*
+	 * Give the register bank of the HWKM slave access to read and modify
+	 * the keyslots in the ICE HWKM slave.  Without this, TrustZone will
+	 * not be able to program keys into ICE.
+	 */
+	qcom_ice_writel(ice, GENMASK(31, 0), HWKM_OFFSET(QCOM_ICE_REG_HWKM_BANK0_BBAC_0));
+	qcom_ice_writel(ice, GENMASK(31, 0), HWKM_OFFSET(QCOM_ICE_REG_HWKM_BANK0_BBAC_1));
+	qcom_ice_writel(ice, GENMASK(31, 0), HWKM_OFFSET(QCOM_ICE_REG_HWKM_BANK0_BBAC_2));
+	qcom_ice_writel(ice, GENMASK(31, 0), HWKM_OFFSET(QCOM_ICE_REG_HWKM_BANK0_BBAC_3));
+	qcom_ice_writel(ice, GENMASK(31, 0), HWKM_OFFSET(QCOM_ICE_REG_HWKM_BANK0_BBAC_4));
+
+	/* Clear HWKM response FIFO before doing anything */
+	qcom_ice_writel(ice, QCOM_ICE_HWKM_RSP_FIFO_CLEAR_VAL,
+			HWKM_OFFSET(QCOM_ICE_REG_HWKM_BANK0_BANKN_IRQ_STATUS));
+	ice->hwkm_init_complete = true;
+}
+
 int qcom_ice_enable(struct qcom_ice *ice)
 {
+	int err;
+
 	qcom_ice_low_power_mode_enable(ice);
 	qcom_ice_optimization_enable(ice);
 
-	return qcom_ice_wait_bist_status(ice);
+	if (ice->use_hwkm)
+		qcom_ice_enable_hwkm_mode(ice);
+
+	err = qcom_ice_wait_bist_status(ice);
+	if (err)
+		return err;
+
+	if (ice->use_hwkm)
+		qcom_ice_hwkm_init(ice);
+
+	return err;
 }
 EXPORT_SYMBOL_GPL(qcom_ice_enable);
 
 int qcom_ice_resume(struct qcom_ice *ice)
 {
@@ -147,22 +307,71 @@ int qcom_ice_resume(struct qcom_ice *ice)
 		dev_err(dev, "failed to enable core clock (%d)\n",
 			err);
 		return err;
 	}
 
+	if (ice->use_hwkm) {
+		qcom_ice_enable_hwkm_mode(ice);
+		qcom_ice_hwkm_init(ice);
+	}
 	return qcom_ice_wait_bist_status(ice);
 }
 EXPORT_SYMBOL_GPL(qcom_ice_resume);
 
 int qcom_ice_suspend(struct qcom_ice *ice)
 {
 	clk_disable_unprepare(ice->core_clk);
+	ice->hwkm_init_complete = false;
 
 	return 0;
 }
 EXPORT_SYMBOL_GPL(qcom_ice_suspend);
 
+/* For v1 the ICE slot is calculated in TrustZone. */
+static int translate_hwkm_slot(struct qcom_ice *ice, int slot)
+{
+	return (ice->hwkm_version == 1) ? slot : (slot * 2);
+}
+
+static int qcom_ice_program_wrapped_key(struct qcom_ice *ice, unsigned int slot,
+					const struct blk_crypto_key *bkey)
+{
+	struct device *dev = ice->dev;
+	union crypto_cfg cfg = {
+		.dusize = bkey->crypto_cfg.data_unit_size / 512,
+		.capidx = QCOM_SCM_ICE_CIPHER_AES_256_XTS,
+		.cfge = QCOM_ICE_HWKM_CFG_ENABLE_VAL,
+	};
+	int hwkm_slot;
+	int err;
+
+	/* It is expected that HWKM init has completed before programming wrapped keys */
+	if (!ice->use_hwkm || !ice->hwkm_init_complete) {
+		dev_err_ratelimited(dev, "HWKM not currently used or initialized\n");
+		return -EINVAL;
+	}
+
+	hwkm_slot = translate_hwkm_slot(ice, slot);
+
+	/* Clear CFGE */
+	qcom_ice_writel(ice, 0x0, QCOM_ICE_LUT_CRYPTOCFG_SLOT_OFFSET(slot));
+
+	/* Call trustzone to program the wrapped key using hwkm */
+	err = qcom_scm_ice_set_key(hwkm_slot, bkey->bytes, bkey->size,
+				   cfg.capidx, cfg.dusize);
+	if (err) {
+		pr_err("%s: SCM call error %d for slot %d\n", __func__, err,
+		       slot);
+		return err;
+	}
+
+	/* Enable CFGE after programming key */
+	qcom_ice_writel(ice, cfg.regval, QCOM_ICE_LUT_CRYPTOCFG_SLOT_OFFSET(slot));
+
+	return err;
+}
+
 int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot,
 			 const struct blk_crypto_key *blk_key)
 {
 	struct device *dev = ice->dev;
 	union {
@@ -178,10 +387,18 @@ int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot,
 		dev_err_ratelimited(dev, "Unsupported crypto mode: %d\n",
 				    blk_key->crypto_cfg.crypto_mode);
 		return -EINVAL;
 	}
 
+	if (blk_key->crypto_cfg.key_type == BLK_CRYPTO_KEY_TYPE_HW_WRAPPED)
+		return qcom_ice_program_wrapped_key(ice, slot, blk_key);
+
+	if (ice->use_hwkm) {
+		dev_err_ratelimited(dev, "Unsupported raw key when in HWKM mode\n");
+		return -EINVAL;
+	}
+
 	if (blk_key->size != AES_256_XTS_KEY_SIZE) {
 		dev_err_ratelimited(dev, "Incorrect key size\n");
 		return -EINVAL;
 	}
 	memcpy(key.bytes, blk_key->bytes, AES_256_XTS_KEY_SIZE);
@@ -200,14 +417,137 @@ int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot,
 }
 EXPORT_SYMBOL_GPL(qcom_ice_program_key);
 
 int qcom_ice_evict_key(struct qcom_ice *ice, int slot)
 {
-	return qcom_scm_ice_invalidate_key(slot);
+	int hwkm_slot = slot;
+
+	if (ice->use_hwkm) {
+		hwkm_slot = translate_hwkm_slot(ice, slot);
+
+		/*
+		 * Ignore calls to evict keys when HWKM is supported but HWKM
+		 * init is not yet done.  This avoids the clear-all-slots call
+		 * during a storage reset while ICE is still in legacy mode.
+		 * The HWKM slave in ICE takes care of zeroing out the keytable
+		 * on reset.
+		 */
+		if (!ice->hwkm_init_complete)
+			return 0;
+	}
+
+	return qcom_scm_ice_invalidate_key(hwkm_slot);
 }
 EXPORT_SYMBOL_GPL(qcom_ice_evict_key);
 
+bool qcom_ice_using_hwkm(struct qcom_ice *ice)
+{
+	return ice->use_hwkm;
+}
+EXPORT_SYMBOL_GPL(qcom_ice_using_hwkm);
+
+/*
+ * Derive a software secret from a hardware-wrapped key.  The unwrapping and
+ * derivation of the software secret are done by TrustZone via an SCM call.
+ */
+int qcom_ice_derive_sw_secret(struct qcom_ice *ice,
+			      const u8 *eph_key, size_t eph_key_size,
+			      u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
+{
+	return qcom_scm_derive_sw_secret(eph_key, eph_key_size,
+					 sw_secret, BLK_CRYPTO_SW_SECRET_SIZE);
+}
+EXPORT_SYMBOL_GPL(qcom_ice_derive_sw_secret);
+
+/**
+ * qcom_ice_generate_key() - Generate a wrapped key for inline encryption
+ * @ice: ICE driver data
+ * @lt_key: buffer for the resulting long-term wrapped key
+ *
+ * Make an SCM call into TrustZone to generate a wrapped key for storage
+ * encryption using HWKM.
+ *
+ * Return: the size of the resulting wrapped key on success; -errno on failure.
+ */
+int qcom_ice_generate_key(struct qcom_ice *ice,
+			  u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+{
+	size_t wk_size = QCOM_ICE_HWKM_WRAPPED_KEY_SIZE(ice->hwkm_version);
+	int err;
+
+	if (WARN_ON_ONCE(wk_size > BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE))
+		return -EINVAL;
+
+	err = qcom_scm_generate_ice_key(lt_key, wk_size);
+	if (err)
+		return err;
+
+	return wk_size;
+}
+EXPORT_SYMBOL_GPL(qcom_ice_generate_key);
+
+/**
+ * qcom_ice_prepare_key() - Prepare a wrapped key for inline encryption
+ * @ice: ICE driver data
+ * @lt_key: a long-term wrapped key
+ * @lt_key_size: size of the long-term wrapped key
+ * @eph_key: buffer for the resulting ephemerally-wrapped key
+ *
+ * Make an SCM call into TrustZone to prepare a wrapped key for storage
+ * encryption by rewrapping a long-term wrapped key with a per-boot ephemeral
+ * key using HWKM.
+ *
+ * Return: the size of the resulting wrapped key on success; -errno on failure.
+ */
+int qcom_ice_prepare_key(struct qcom_ice *ice,
+			 const u8 *lt_key, size_t lt_key_size,
+			 u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+{
+	size_t wk_size = QCOM_ICE_HWKM_WRAPPED_KEY_SIZE(ice->hwkm_version);
+	int err;
+
+	if (WARN_ON_ONCE(wk_size > BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE))
+		return -EINVAL;
+
+	err = qcom_scm_prepare_ice_key(lt_key, lt_key_size, eph_key, wk_size);
+	if (err)
+		return err;
+
+	return wk_size;
+}
+EXPORT_SYMBOL_GPL(qcom_ice_prepare_key);
+
+/**
+ * qcom_ice_import_key() - Import a raw key for inline encryption
+ * @ice: ICE driver data
+ * @raw_key: raw key that will be imported
+ * @raw_key_size: size of the raw key
+ * @lt_key: buffer for the resulting long-term wrapped key
+ *
+ * Make an SCM call into TrustZone to import a raw key for storage encryption
+ * and generate a long-term wrapped key using HWKM.
+ *
+ * Return: the size of the resulting wrapped key on success; -errno on failure.
+ */
+int qcom_ice_import_key(struct qcom_ice *ice,
+			const u8 *raw_key, size_t raw_key_size,
+			u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+{
+	size_t wk_size = QCOM_ICE_HWKM_WRAPPED_KEY_SIZE(ice->hwkm_version);
+	int err;
+
+	if (WARN_ON_ONCE(wk_size > BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE))
+		return -EINVAL;
+
+	err = qcom_scm_import_ice_key(raw_key, raw_key_size, lt_key, wk_size);
+	if (err)
+		return err;
+
+	return wk_size;
+}
+EXPORT_SYMBOL_GPL(qcom_ice_import_key);
+
 static struct qcom_ice *qcom_ice_create(struct device *dev,
 					void __iomem *base)
 {
 	struct qcom_ice *engine;
 
@@ -239,13 +579,23 @@ static struct qcom_ice *qcom_ice_create(struct device *dev,
 	if (!engine->core_clk)
 		engine->core_clk = devm_clk_get_enabled(dev, NULL);
 	if (IS_ERR(engine->core_clk))
 		return ERR_CAST(engine->core_clk);
 
+	engine->use_hwkm = qcom_ice_use_wrapped_keys &&
+			   qcom_scm_has_wrapped_key_support();
+
 	if (!qcom_ice_check_supported(engine))
 		return ERR_PTR(-EOPNOTSUPP);
 
+	if (engine->use_hwkm)
+		dev_info(dev, "QC ICE HWKM (Hardware Key Manager) enabled\n");
+	else if (qcom_ice_use_wrapped_keys)
+		dev_warn(dev, "HWKM not supported. Not supporting wrapped keys.\n");
+	else
+		dev_info(dev, "HWKM not enabled. Supporting raw keys.\n");
+
 	dev_dbg(dev, "Registered Qualcomm Inline Crypto Engine\n");
 
 	return engine;
 }
 
diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index 4adf017b523d..9c700bbaa12c 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -132,10 +132,15 @@ static int ufs_qcom_ice_init(struct ufs_qcom_host *host)
 	}
 
 	if (IS_ERR_OR_NULL(ice))
 		return PTR_ERR_OR_ZERO(ice);
 
+	if (qcom_ice_using_hwkm(ice)) {
+		dev_warn(dev, "HWKM mode unsupported; disabling inline encryption support\n");
+		return 0;
+	}
+
 	host->ice = ice;
 
 	/* Initialize the blk_crypto_profile */
 
 	caps.reg_val = cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
diff --git a/include/soc/qcom/ice.h b/include/soc/qcom/ice.h
index 4cecc7f088b4..f352c78d27a1 100644
--- a/include/soc/qcom/ice.h
+++ b/include/soc/qcom/ice.h
@@ -15,7 +15,19 @@ int qcom_ice_enable(struct qcom_ice *ice);
 int qcom_ice_resume(struct qcom_ice *ice);
 int qcom_ice_suspend(struct qcom_ice *ice);
 int qcom_ice_program_key(struct qcom_ice *ice, unsigned int slot,
 			 const struct blk_crypto_key *blk_key);
 int qcom_ice_evict_key(struct qcom_ice *ice, int slot);
+bool qcom_ice_using_hwkm(struct qcom_ice *ice);
+int qcom_ice_derive_sw_secret(struct qcom_ice *ice,
+			      const u8 *eph_key, size_t eph_key_size,
+			      u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE]);
+int qcom_ice_generate_key(struct qcom_ice *ice,
+			  u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
+int qcom_ice_prepare_key(struct qcom_ice *ice,
+			 const u8 *lt_key, size_t lt_key_size,
+			 u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
+int qcom_ice_import_key(struct qcom_ice *ice,
+			const u8 *raw_key, size_t raw_key_size,
+			u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE]);
 struct qcom_ice *of_qcom_ice_get(struct device *dev);
 #endif /* __QCOM_ICE_H__ */
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v10 15/15] ufs: qcom: add support for wrapped keys
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (13 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 14/15] soc: qcom: ice: add HWKM support to the ICE driver Eric Biggers
@ 2024-12-13  4:19 ` Eric Biggers
  2025-01-02 18:38 ` [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2024-12-13  4:19 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Bjorn Andersson, Dmitry Baryshkov, James E . J . Bottomley,
	Jens Axboe, Konrad Dybcio, Manivannan Sadhasivam,
	Martin K . Petersen, Ulf Hansson, Bartosz Golaszewski

From: Eric Biggers <ebiggers@google.com>

Wire up the wrapped key support for ufs-qcom by implementing the needed
methods in struct blk_crypto_ll_ops and setting the appropriate flags in
blk_crypto_profile::key_types_supported.

For more information about this feature and how to use it, refer to
the sections about hardware-wrapped keys in
Documentation/block/inline-encryption.rst and
Documentation/filesystems/fscrypt.rst.

Based on patches by Gaurav Kashyap <quic_gaurkash@quicinc.com>.
Reworked to use the custom crypto profile support.
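
Illustrative sketch only (not part of the patch), showing how the ops wired
up here relate to each other as I understand the flow: a long-term wrapped
key comes from import_key (or generate_key), prepare_key rewraps it with the
per-boot ephemeral key, keyslot_program loads the result into ICE, and
derive_sw_secret gives fscrypt a software secret for its non-contents
subkeys.  Real callers go through the blk-crypto wrappers rather than the
ll_ops directly, and the function name below is made up:

#include <linux/blk-crypto.h>
#include <linux/blk-crypto-profile.h>

static int example_wrapped_key_flow(struct blk_crypto_profile *profile,
				    const u8 *raw_key, size_t raw_key_size)
{
	u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE];
	u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE];
	u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE];
	int lt_size, eph_size;

	/* One-time: turn a raw key into a long-term wrapped key. */
	lt_size = profile->ll_ops.import_key(profile, raw_key, raw_key_size,
					     lt_key);
	if (lt_size < 0)
		return lt_size;

	/* Each boot: rewrap with the per-boot ephemeral key. */
	eph_size = profile->ll_ops.prepare_key(profile, lt_key, lt_size,
					       eph_key);
	if (eph_size < 0)
		return eph_size;

	/* fscrypt derives its software secret from the ephemerally-wrapped key. */
	return profile->ll_ops.derive_sw_secret(profile, eph_key, eph_size,
						sw_secret);
}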

Tested-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org> # sm8650
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 drivers/ufs/host/ufs-qcom.c | 54 ++++++++++++++++++++++++++++++++-----
 1 file changed, 48 insertions(+), 6 deletions(-)

diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index 9c700bbaa12c..c9cca4348dab 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -132,15 +132,10 @@ static int ufs_qcom_ice_init(struct ufs_qcom_host *host)
 	}
 
 	if (IS_ERR_OR_NULL(ice))
 		return PTR_ERR_OR_ZERO(ice);
 
-	if (qcom_ice_using_hwkm(ice)) {
-		dev_warn(dev, "HWKM mode unsupported; disabling inline encryption support\n");
-		return 0;
-	}
-
 	host->ice = ice;
 
 	/* Initialize the blk_crypto_profile */
 
 	caps.reg_val = cpu_to_le32(ufshcd_readl(hba, REG_UFS_CCAP));
@@ -150,11 +145,14 @@ static int ufs_qcom_ice_init(struct ufs_qcom_host *host)
 	if (err)
 		return err;
 
 	profile->ll_ops = ufs_qcom_crypto_ops;
 	profile->max_dun_bytes_supported = 8;
-	profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW;
+	if (qcom_ice_using_hwkm(ice))
+		profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_HW_WRAPPED;
+	else
+		profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW;
 	profile->dev = dev;
 
 	/*
 	 * Currently this driver only supports AES-256-XTS.  All known versions
 	 * of ICE support it, but to be safe make sure it is really declared in
@@ -218,13 +216,57 @@ static int ufs_qcom_ice_keyslot_evict(struct blk_crypto_profile *profile,
 	err = qcom_ice_evict_key(host->ice, slot);
 	ufshcd_release(hba);
 	return err;
 }
 
+static int ufs_qcom_ice_derive_sw_secret(struct blk_crypto_profile *profile,
+					 const u8 *eph_key, size_t eph_key_size,
+					 u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
+{
+	struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
+	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+
+	return qcom_ice_derive_sw_secret(host->ice, eph_key, eph_key_size,
+					 sw_secret);
+}
+
+static int ufs_qcom_ice_import_key(struct blk_crypto_profile *profile,
+				   const u8 *raw_key, size_t raw_key_size,
+				   u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+{
+	struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
+	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+
+	return qcom_ice_import_key(host->ice, raw_key, raw_key_size, lt_key);
+}
+
+static int ufs_qcom_ice_generate_key(struct blk_crypto_profile *profile,
+				     u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+{
+	struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
+	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+
+	return qcom_ice_generate_key(host->ice, lt_key);
+}
+
+static int ufs_qcom_ice_prepare_key(struct blk_crypto_profile *profile,
+				    const u8 *lt_key, size_t lt_key_size,
+				    u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+{
+	struct ufs_hba *hba = ufs_hba_from_crypto_profile(profile);
+	struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+
+	return qcom_ice_prepare_key(host->ice, lt_key, lt_key_size, eph_key);
+}
+
 static const struct blk_crypto_ll_ops ufs_qcom_crypto_ops = {
 	.keyslot_program	= ufs_qcom_ice_keyslot_program,
 	.keyslot_evict		= ufs_qcom_ice_keyslot_evict,
+	.derive_sw_secret	= ufs_qcom_ice_derive_sw_secret,
+	.import_key		= ufs_qcom_ice_import_key,
+	.generate_key		= ufs_qcom_ice_generate_key,
+	.prepare_key		= ufs_qcom_ice_prepare_key,
 };
 
 #else
 
 static inline void ufs_qcom_ice_enable(struct ufs_qcom_host *host)
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v10 05/15] mmc: sdhci-msm: fix crypto key eviction
  2024-12-13  4:19 ` [PATCH v10 05/15] mmc: sdhci-msm: fix crypto key eviction Eric Biggers
@ 2024-12-19 13:48   ` Ulf Hansson
  0 siblings, 0 replies; 26+ messages in thread
From: Ulf Hansson @ 2024-12-19 13:48 UTC (permalink / raw)
  To: Eric Biggers
  Cc: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap, Adrian Hunter, Alim Akhtar,
	Avri Altman, Bart Van Assche, Bjorn Andersson, Dmitry Baryshkov,
	James E . J . Bottomley, Jens Axboe, Konrad Dybcio,
	Manivannan Sadhasivam, Martin K . Petersen, stable, Abel Vesa

On Fri, 13 Dec 2024 at 05:20, Eric Biggers <ebiggers@kernel.org> wrote:
>
> From: Eric Biggers <ebiggers@google.com>
>
> Commit c7eed31e235c ("mmc: sdhci-msm: Switch to the new ICE API")
> introduced an incorrect check of the algorithm ID into the key eviction
> path, and thus qcom_ice_evict_key() is no longer ever called.  Fix it.
>
> Fixes: c7eed31e235c ("mmc: sdhci-msm: Switch to the new ICE API")
> Cc: stable@vger.kernel.org
> Cc: Abel Vesa <abel.vesa@linaro.org>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Applied for fixes, thanks!

Kind regards
Uffe


> ---
>  drivers/mmc/host/sdhci-msm.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
> index e00208535bd1..319f0ebbe652 100644
> --- a/drivers/mmc/host/sdhci-msm.c
> +++ b/drivers/mmc/host/sdhci-msm.c
> @@ -1865,24 +1865,24 @@ static int sdhci_msm_program_key(struct cqhci_host *cq_host,
>         struct sdhci_host *host = mmc_priv(cq_host->mmc);
>         struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
>         struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
>         union cqhci_crypto_cap_entry cap;
>
> +       if (!(cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE))
> +               return qcom_ice_evict_key(msm_host->ice, slot);
> +
>         /* Only AES-256-XTS has been tested so far. */
>         cap = cq_host->crypto_cap_array[cfg->crypto_cap_idx];
>         if (cap.algorithm_id != CQHCI_CRYPTO_ALG_AES_XTS ||
>                 cap.key_size != CQHCI_CRYPTO_KEY_SIZE_256)
>                 return -EINVAL;
>
> -       if (cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE)
> -               return qcom_ice_program_key(msm_host->ice,
> -                                           QCOM_ICE_CRYPTO_ALG_AES_XTS,
> -                                           QCOM_ICE_CRYPTO_KEY_SIZE_256,
> -                                           cfg->crypto_key,
> -                                           cfg->data_unit_size, slot);
> -       else
> -               return qcom_ice_evict_key(msm_host->ice, slot);
> +       return qcom_ice_program_key(msm_host->ice,
> +                                   QCOM_ICE_CRYPTO_ALG_AES_XTS,
> +                                   QCOM_ICE_CRYPTO_KEY_SIZE_256,
> +                                   cfg->crypto_key,
> +                                   cfg->data_unit_size, slot);
>  }
>
>  #else /* CONFIG_MMC_CRYPTO */
>
>  static inline int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
> --
> 2.47.1
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v10 06/15] mmc: crypto: add mmc_from_crypto_profile()
  2024-12-13  4:19 ` [PATCH v10 06/15] mmc: crypto: add mmc_from_crypto_profile() Eric Biggers
@ 2024-12-19 13:48   ` Ulf Hansson
  0 siblings, 0 replies; 26+ messages in thread
From: Ulf Hansson @ 2024-12-19 13:48 UTC (permalink / raw)
  To: Eric Biggers
  Cc: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap, Adrian Hunter, Alim Akhtar,
	Avri Altman, Bart Van Assche, Bjorn Andersson, Dmitry Baryshkov,
	James E . J . Bottomley, Jens Axboe, Konrad Dybcio,
	Manivannan Sadhasivam, Martin K . Petersen

On Fri, 13 Dec 2024 at 05:20, Eric Biggers <ebiggers@kernel.org> wrote:
>
> From: Eric Biggers <ebiggers@google.com>
>
> Add a helper function that encapsulates a container_of expression.  For
> now there is just one user but soon there will be more.
>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Applied for next, thanks!

Kind regards
Uffe


> ---
>  drivers/mmc/host/cqhci-crypto.c | 5 +----
>  include/linux/mmc/host.h        | 8 ++++++++
>  2 files changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/mmc/host/cqhci-crypto.c b/drivers/mmc/host/cqhci-crypto.c
> index d5f4b6972f63..2951911d3f78 100644
> --- a/drivers/mmc/host/cqhci-crypto.c
> +++ b/drivers/mmc/host/cqhci-crypto.c
> @@ -23,14 +23,11 @@ static const struct cqhci_crypto_alg_entry {
>  };
>
>  static inline struct cqhci_host *
>  cqhci_host_from_crypto_profile(struct blk_crypto_profile *profile)
>  {
> -       struct mmc_host *mmc =
> -               container_of(profile, struct mmc_host, crypto_profile);
> -
> -       return mmc->cqe_private;
> +       return mmc_from_crypto_profile(profile)->cqe_private;
>  }
>
>  static int cqhci_crypto_program_key(struct cqhci_host *cq_host,
>                                     const union cqhci_crypto_cfg_entry *cfg,
>                                     int slot)
> diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
> index f166d6611ddb..68f09a955a90 100644
> --- a/include/linux/mmc/host.h
> +++ b/include/linux/mmc/host.h
> @@ -588,10 +588,18 @@ static inline void *mmc_priv(struct mmc_host *host)
>  static inline struct mmc_host *mmc_from_priv(void *priv)
>  {
>         return container_of(priv, struct mmc_host, private);
>  }
>
> +#ifdef CONFIG_MMC_CRYPTO
> +static inline struct mmc_host *
> +mmc_from_crypto_profile(struct blk_crypto_profile *profile)
> +{
> +       return container_of(profile, struct mmc_host, crypto_profile);
> +}
> +#endif
> +
>  #define mmc_host_is_spi(host)  ((host)->caps & MMC_CAP_SPI)
>
>  #define mmc_dev(x)     ((x)->parent)
>  #define mmc_classdev(x)        (&(x)->class_dev)
>  #define mmc_hostname(x)        (dev_name(&(x)->class_dev))
> --
> 2.47.1
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v10 07/15] mmc: sdhci-msm: convert to use custom crypto profile
  2024-12-13  4:19 ` [PATCH v10 07/15] mmc: sdhci-msm: convert to use custom crypto profile Eric Biggers
@ 2024-12-19 13:48   ` Ulf Hansson
  0 siblings, 0 replies; 26+ messages in thread
From: Ulf Hansson @ 2024-12-19 13:48 UTC (permalink / raw)
  To: Eric Biggers
  Cc: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap, Adrian Hunter, Alim Akhtar,
	Avri Altman, Bart Van Assche, Bjorn Andersson, Dmitry Baryshkov,
	James E . J . Bottomley, Jens Axboe, Konrad Dybcio,
	Manivannan Sadhasivam, Martin K . Petersen

On Fri, 13 Dec 2024 at 05:20, Eric Biggers <ebiggers@kernel.org> wrote:
>
> From: Eric Biggers <ebiggers@google.com>
>
> As is being done in ufs-qcom, make the sdhci-msm driver override the
> full crypto profile rather than "just" key programming and eviction.
> This makes it much more straightforward to add support for
> hardware-wrapped inline encryption keys.  It also makes it easy to pass
> the original blk_crypto_key down to qcom_ice_program_key() once it is
> updated to require the key in that form.
>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Applied for next, thanks!

Kind regards
Uffe


> ---
>  drivers/mmc/host/cqhci-crypto.c | 33 ++++++------
>  drivers/mmc/host/cqhci.h        |  8 ++-
>  drivers/mmc/host/sdhci-msm.c    | 94 ++++++++++++++++++++++++++-------
>  3 files changed, 94 insertions(+), 41 deletions(-)
>
> diff --git a/drivers/mmc/host/cqhci-crypto.c b/drivers/mmc/host/cqhci-crypto.c
> index 2951911d3f78..cb8044093402 100644
> --- a/drivers/mmc/host/cqhci-crypto.c
> +++ b/drivers/mmc/host/cqhci-crypto.c
> @@ -26,20 +26,17 @@ static inline struct cqhci_host *
>  cqhci_host_from_crypto_profile(struct blk_crypto_profile *profile)
>  {
>         return mmc_from_crypto_profile(profile)->cqe_private;
>  }
>
> -static int cqhci_crypto_program_key(struct cqhci_host *cq_host,
> -                                   const union cqhci_crypto_cfg_entry *cfg,
> -                                   int slot)
> +static void cqhci_crypto_program_key(struct cqhci_host *cq_host,
> +                                    const union cqhci_crypto_cfg_entry *cfg,
> +                                    int slot)
>  {
>         u32 slot_offset = cq_host->crypto_cfg_register + slot * sizeof(*cfg);
>         int i;
>
> -       if (cq_host->ops->program_key)
> -               return cq_host->ops->program_key(cq_host, cfg, slot);
> -
>         /* Clear CFGE */
>         cqhci_writel(cq_host, 0, slot_offset + 16 * sizeof(cfg->reg_val[0]));
>
>         /* Write the key */
>         for (i = 0; i < 16; i++) {
> @@ -50,11 +47,10 @@ static int cqhci_crypto_program_key(struct cqhci_host *cq_host,
>         cqhci_writel(cq_host, le32_to_cpu(cfg->reg_val[17]),
>                      slot_offset + 17 * sizeof(cfg->reg_val[0]));
>         /* Write dword 16, which includes the new value of CFGE */
>         cqhci_writel(cq_host, le32_to_cpu(cfg->reg_val[16]),
>                      slot_offset + 16 * sizeof(cfg->reg_val[0]));
> -       return 0;
>  }
>
>  static int cqhci_crypto_keyslot_program(struct blk_crypto_profile *profile,
>                                         const struct blk_crypto_key *key,
>                                         unsigned int slot)
> @@ -67,11 +63,10 @@ static int cqhci_crypto_keyslot_program(struct blk_crypto_profile *profile,
>                         &cqhci_crypto_algs[key->crypto_cfg.crypto_mode];
>         u8 data_unit_mask = key->crypto_cfg.data_unit_size / 512;
>         int i;
>         int cap_idx = -1;
>         union cqhci_crypto_cfg_entry cfg = {};
> -       int err;
>
>         BUILD_BUG_ON(CQHCI_CRYPTO_KEY_SIZE_INVALID != 0);
>         for (i = 0; i < cq_host->crypto_capabilities.num_crypto_cap; i++) {
>                 if (ccap_array[i].algorithm_id == alg->alg &&
>                     ccap_array[i].key_size == alg->key_size &&
> @@ -94,25 +89,26 @@ static int cqhci_crypto_keyslot_program(struct blk_crypto_profile *profile,
>                        key->raw + key->size/2, key->size/2);
>         } else {
>                 memcpy(cfg.crypto_key, key->raw, key->size);
>         }
>
> -       err = cqhci_crypto_program_key(cq_host, &cfg, slot);
> +       cqhci_crypto_program_key(cq_host, &cfg, slot);
>
>         memzero_explicit(&cfg, sizeof(cfg));
> -       return err;
> +       return 0;
>  }
>
>  static int cqhci_crypto_clear_keyslot(struct cqhci_host *cq_host, int slot)
>  {
>         /*
>          * Clear the crypto cfg on the device. Clearing CFGE
>          * might not be sufficient, so just clear the entire cfg.
>          */
>         union cqhci_crypto_cfg_entry cfg = {};
>
> -       return cqhci_crypto_program_key(cq_host, &cfg, slot);
> +       cqhci_crypto_program_key(cq_host, &cfg, slot);
> +       return 0;
>  }
>
>  static int cqhci_crypto_keyslot_evict(struct blk_crypto_profile *profile,
>                                       const struct blk_crypto_key *key,
>                                       unsigned int slot)
> @@ -165,20 +161,22 @@ cqhci_find_blk_crypto_mode(union cqhci_crypto_cap_entry cap)
>  int cqhci_crypto_init(struct cqhci_host *cq_host)
>  {
>         struct mmc_host *mmc = cq_host->mmc;
>         struct device *dev = mmc_dev(mmc);
>         struct blk_crypto_profile *profile = &mmc->crypto_profile;
> -       unsigned int num_keyslots;
>         unsigned int cap_idx;
>         enum blk_crypto_mode_num blk_mode_num;
>         unsigned int slot;
>         int err = 0;
>
>         if (!(mmc->caps2 & MMC_CAP2_CRYPTO) ||
>             !(cqhci_readl(cq_host, CQHCI_CAP) & CQHCI_CAP_CS))
>                 goto out;
>
> +       if (cq_host->ops->uses_custom_crypto_profile)
> +               goto profile_initialized;
> +
>         cq_host->crypto_capabilities.reg_val =
>                         cpu_to_le32(cqhci_readl(cq_host, CQHCI_CCAP));
>
>         cq_host->crypto_cfg_register =
>                 (u32)cq_host->crypto_capabilities.config_array_ptr * 0x100;
> @@ -193,13 +191,12 @@ int cqhci_crypto_init(struct cqhci_host *cq_host)
>
>         /*
>          * CCAP.CFGC is off by one, so the actual number of crypto
>          * configurations (a.k.a. keyslots) is CCAP.CFGC + 1.
>          */
> -       num_keyslots = cq_host->crypto_capabilities.config_count + 1;
> -
> -       err = devm_blk_crypto_profile_init(dev, profile, num_keyslots);
> +       err = devm_blk_crypto_profile_init(
> +               dev, profile, cq_host->crypto_capabilities.config_count + 1);
>         if (err)
>                 goto out;
>
>         profile->ll_ops = cqhci_crypto_ops;
>         profile->dev = dev;
> @@ -223,13 +220,15 @@ int cqhci_crypto_init(struct cqhci_host *cq_host)
>                         continue;
>                 profile->modes_supported[blk_mode_num] |=
>                         cq_host->crypto_cap_array[cap_idx].sdus_mask * 512;
>         }
>
> +profile_initialized:
> +
>         /* Clear all the keyslots so that we start in a known state. */
> -       for (slot = 0; slot < num_keyslots; slot++)
> -               cqhci_crypto_clear_keyslot(cq_host, slot);
> +       for (slot = 0; slot < profile->num_slots; slot++)
> +               profile->ll_ops.keyslot_evict(profile, NULL, slot);
>
>         /* CQHCI crypto requires the use of 128-bit task descriptors. */
>         cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
>
>         return 0;
> diff --git a/drivers/mmc/host/cqhci.h b/drivers/mmc/host/cqhci.h
> index fab9d74445ba..ce189a1866b9 100644
> --- a/drivers/mmc/host/cqhci.h
> +++ b/drivers/mmc/host/cqhci.h
> @@ -287,17 +287,15 @@ struct cqhci_host_ops {
>         void (*disable)(struct mmc_host *mmc, bool recovery);
>         void (*update_dcmd_desc)(struct mmc_host *mmc, struct mmc_request *mrq,
>                                  u64 *data);
>         void (*pre_enable)(struct mmc_host *mmc);
>         void (*post_disable)(struct mmc_host *mmc);
> -#ifdef CONFIG_MMC_CRYPTO
> -       int (*program_key)(struct cqhci_host *cq_host,
> -                          const union cqhci_crypto_cfg_entry *cfg, int slot);
> -#endif
>         void (*set_tran_desc)(struct cqhci_host *cq_host, u8 **desc,
>                               dma_addr_t addr, int len, bool end, bool dma64);
> -
> +#ifdef CONFIG_MMC_CRYPTO
> +       bool uses_custom_crypto_profile;
> +#endif
>  };
>
>  static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)
>  {
>         if (unlikely(host->ops->write_l))
> diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
> index 319f0ebbe652..4610f067faca 100644
> --- a/drivers/mmc/host/sdhci-msm.c
> +++ b/drivers/mmc/host/sdhci-msm.c
> @@ -1805,16 +1805,23 @@ static void sdhci_msm_set_clock(struct sdhci_host *host, unsigned int clock)
>   *                                                                           *
>  \*****************************************************************************/
>
>  #ifdef CONFIG_MMC_CRYPTO
>
> +static const struct blk_crypto_ll_ops sdhci_msm_crypto_ops; /* forward decl */
> +
>  static int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
>                               struct cqhci_host *cq_host)
>  {
>         struct mmc_host *mmc = msm_host->mmc;
> +       struct blk_crypto_profile *profile = &mmc->crypto_profile;
>         struct device *dev = mmc_dev(mmc);
>         struct qcom_ice *ice;
> +       union cqhci_crypto_capabilities caps;
> +       union cqhci_crypto_cap_entry cap;
> +       int err;
> +       int i;
>
>         if (!(cqhci_readl(cq_host, CQHCI_CAP) & CQHCI_CAP_CS))
>                 return 0;
>
>         ice = of_qcom_ice_get(dev);
> @@ -1825,12 +1832,41 @@ static int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
>
>         if (IS_ERR_OR_NULL(ice))
>                 return PTR_ERR_OR_ZERO(ice);
>
>         msm_host->ice = ice;
> -       mmc->caps2 |= MMC_CAP2_CRYPTO;
>
> +       /* Initialize the blk_crypto_profile */
> +
> +       caps.reg_val = cpu_to_le32(cqhci_readl(cq_host, CQHCI_CCAP));
> +
> +       /* The number of keyslots supported is (CFGC+1) */
> +       err = devm_blk_crypto_profile_init(dev, profile, caps.config_count + 1);
> +       if (err)
> +               return err;
> +
> +       profile->ll_ops = sdhci_msm_crypto_ops;
> +       profile->max_dun_bytes_supported = 4;
> +       profile->dev = dev;
> +
> +       /*
> +        * Currently this driver only supports AES-256-XTS.  All known versions
> +        * of ICE support it, but to be safe make sure it is really declared in
> +        * the crypto capability registers.  The crypto capability registers
> +        * also give the supported data unit size(s).
> +        */
> +       for (i = 0; i < caps.num_crypto_cap; i++) {
> +               cap.reg_val = cpu_to_le32(cqhci_readl(cq_host,
> +                                                     CQHCI_CRYPTOCAP +
> +                                                     i * sizeof(__le32)));
> +               if (cap.algorithm_id == CQHCI_CRYPTO_ALG_AES_XTS &&
> +                   cap.key_size == CQHCI_CRYPTO_KEY_SIZE_256)
> +                       profile->modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] |=
> +                               cap.sdus_mask * 512;
> +       }
> +
> +       mmc->caps2 |= MMC_CAP2_CRYPTO;
>         return 0;
>  }
>
>  static void sdhci_msm_ice_enable(struct sdhci_msm_host *msm_host)
>  {
> @@ -1852,39 +1888,59 @@ static __maybe_unused int sdhci_msm_ice_suspend(struct sdhci_msm_host *msm_host)
>                 return qcom_ice_suspend(msm_host->ice);
>
>         return 0;
>  }
>
> -/*
> - * Program a key into a QC ICE keyslot, or evict a keyslot.  QC ICE requires
> - * vendor-specific SCM calls for this; it doesn't support the standard way.
> - */
> -static int sdhci_msm_program_key(struct cqhci_host *cq_host,
> -                                const union cqhci_crypto_cfg_entry *cfg,
> -                                int slot)
> +static inline struct sdhci_msm_host *
> +sdhci_msm_host_from_crypto_profile(struct blk_crypto_profile *profile)
>  {
> -       struct sdhci_host *host = mmc_priv(cq_host->mmc);
> +       struct mmc_host *mmc = mmc_from_crypto_profile(profile);
> +       struct sdhci_host *host = mmc_priv(mmc);
>         struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
>         struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
> -       union cqhci_crypto_cap_entry cap;
>
> -       if (!(cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE))
> -               return qcom_ice_evict_key(msm_host->ice, slot);
> +       return msm_host;
> +}
> +
> +/*
> + * Program a key into a QC ICE keyslot.  QC ICE requires a QC-specific SCM call
> + * for this; it doesn't support the standard way.
> + */
> +static int sdhci_msm_ice_keyslot_program(struct blk_crypto_profile *profile,
> +                                        const struct blk_crypto_key *key,
> +                                        unsigned int slot)
> +{
> +       struct sdhci_msm_host *msm_host =
> +               sdhci_msm_host_from_crypto_profile(profile);
>
>         /* Only AES-256-XTS has been tested so far. */
> -       cap = cq_host->crypto_cap_array[cfg->crypto_cap_idx];
> -       if (cap.algorithm_id != CQHCI_CRYPTO_ALG_AES_XTS ||
> -               cap.key_size != CQHCI_CRYPTO_KEY_SIZE_256)
> -               return -EINVAL;
> +       if (key->crypto_cfg.crypto_mode != BLK_ENCRYPTION_MODE_AES_256_XTS)
> +               return -EOPNOTSUPP;
>
>         return qcom_ice_program_key(msm_host->ice,
>                                     QCOM_ICE_CRYPTO_ALG_AES_XTS,
>                                     QCOM_ICE_CRYPTO_KEY_SIZE_256,
> -                                   cfg->crypto_key,
> -                                   cfg->data_unit_size, slot);
> +                                   key->raw,
> +                                   key->crypto_cfg.data_unit_size / 512,
> +                                   slot);
>  }
>
> +static int sdhci_msm_ice_keyslot_evict(struct blk_crypto_profile *profile,
> +                                      const struct blk_crypto_key *key,
> +                                      unsigned int slot)
> +{
> +       struct sdhci_msm_host *msm_host =
> +               sdhci_msm_host_from_crypto_profile(profile);
> +
> +       return qcom_ice_evict_key(msm_host->ice, slot);
> +}
> +
> +static const struct blk_crypto_ll_ops sdhci_msm_crypto_ops = {
> +       .keyslot_program        = sdhci_msm_ice_keyslot_program,
> +       .keyslot_evict          = sdhci_msm_ice_keyslot_evict,
> +};
> +
>  #else /* CONFIG_MMC_CRYPTO */
>
>  static inline int sdhci_msm_ice_init(struct sdhci_msm_host *msm_host,
>                                      struct cqhci_host *cq_host)
>  {
> @@ -1986,11 +2042,11 @@ static void sdhci_msm_set_timeout(struct sdhci_host *host, struct mmc_command *c
>
>  static const struct cqhci_host_ops sdhci_msm_cqhci_ops = {
>         .enable         = sdhci_msm_cqe_enable,
>         .disable        = sdhci_msm_cqe_disable,
>  #ifdef CONFIG_MMC_CRYPTO
> -       .program_key    = sdhci_msm_program_key,
> +       .uses_custom_crypto_profile = true,
>  #endif
>  };
>
>  static int sdhci_msm_cqe_add_host(struct sdhci_host *host,
>                                 struct platform_device *pdev)
> --
> 2.47.1
>
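
For readers skimming the hunks above, here is a minimal, self-contained sketch of the custom blk_crypto_profile wiring that the quoted sdhci-msm changes establish.  The my_* names are placeholders, the actual hardware programming (the ICE SCM calls) is stubbed out, and the advertised capabilities are hard-coded for illustration rather than read from the CQHCI capability registers as the real driver does:

#include <linux/blk-crypto-profile.h>
#include <linux/device.h>
#include <linux/errno.h>

/* Program a key into a (stubbed) keyslot.  A real driver would hand
 * key->raw and the data unit size to its inline crypto engine here. */
static int my_keyslot_program(struct blk_crypto_profile *profile,
                              const struct blk_crypto_key *key,
                              unsigned int slot)
{
        /* Like the quoted driver, this sketch only handles AES-256-XTS. */
        if (key->crypto_cfg.crypto_mode != BLK_ENCRYPTION_MODE_AES_256_XTS)
                return -EOPNOTSUPP;
        return 0;
}

/* Evict a keyslot.  A real driver would clear the key in hardware. */
static int my_keyslot_evict(struct blk_crypto_profile *profile,
                            const struct blk_crypto_key *key,
                            unsigned int slot)
{
        return 0;
}

static const struct blk_crypto_ll_ops my_crypto_ops = {
        .keyslot_program        = my_keyslot_program,
        .keyslot_evict          = my_keyslot_evict,
};

/* Probe-time setup: the host driver owns and fills in its own profile
 * instead of implementing a core-layer program_key hook. */
static int my_crypto_init(struct device *dev,
                          struct blk_crypto_profile *profile,
                          unsigned int num_keyslots)
{
        int err;

        err = devm_blk_crypto_profile_init(dev, profile, num_keyslots);
        if (err)
                return err;

        profile->ll_ops = my_crypto_ops;
        profile->dev = dev;
        profile->max_dun_bytes_supported = 4;
        /* Advertise AES-256-XTS with 512- and 4096-byte data units. */
        profile->modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] = 512 | 4096;
        return 0;
}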


* Re: [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (14 preceding siblings ...)
  2024-12-13  4:19 ` [PATCH v10 15/15] ufs: qcom: add support for wrapped keys Eric Biggers
@ 2025-01-02 18:38 ` Eric Biggers
  2025-01-02 18:40 ` Martin K. Petersen
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2025-01-02 18:38 UTC (permalink / raw)
  To: Martin K. Petersen, Bjorn Andersson
  Cc: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap, Adrian Hunter, Alim Akhtar,
	Avri Altman, Bart Van Assche, Dmitry Baryshkov,
	James E . J . Bottomley, Jens Axboe, Konrad Dybcio,
	Manivannan Sadhasivam, Ulf Hansson

On Thu, Dec 12, 2024 at 08:19:43PM -0800, Eric Biggers wrote:
> Maintainers, please consider merging the following preparatory patches for 6.14:
> 
>   - UFS / SCSI tree: patches 1-4
>   - MMC tree: patches 5-7
>   - Qualcomm / MSM tree: patch 8

Happy new year everyone.

We are 1 of 3 so far, with Ulf having applied patches 5-7.

Martin, can you consider applying patches 1-4?

Bjorn, can you consider applying patch 8?

Additional reviews or acks from anyone on any of the patches in this series
would always be appreciated, of course.

Thank you!

- Eric


* Re: [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (15 preceding siblings ...)
  2025-01-02 18:38 ` [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
@ 2025-01-02 18:40 ` Martin K. Petersen
  2025-01-02 18:44   ` Eric Biggers
  2025-01-09 18:27 ` (subset) " Bjorn Andersson
                   ` (2 subsequent siblings)
  19 siblings, 1 reply; 26+ messages in thread
From: Martin K. Petersen @ 2025-01-02 18:40 UTC (permalink / raw)
  To: Eric Biggers
  Cc: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap, Adrian Hunter, Alim Akhtar,
	Avri Altman, Bart Van Assche, Bjorn Andersson, Dmitry Baryshkov,
	James E . J . Bottomley, Jens Axboe, Konrad Dybcio,
	Manivannan Sadhasivam, Martin K . Petersen, Ulf Hansson


Eric,

> This patchset adds support for hardware-wrapped inline encryption
> keys, a security feature supported by some SoCs. It adds the block and
> fscrypt framework for the feature as well as support for it with UFS
> on Qualcomm SoCs.

Applied patches 1-4 to 6.14/scsi-staging, thanks!

I had originally queued patch 1 in 6.13/scsi-fixes but moved it to 6.14
and kept the stable tag to accommodate the rest of the series. Hope
that's OK given the short runway we have left for this release.

-- 
Martin K. Petersen	Oracle Linux Engineering


* Re: [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys
  2025-01-02 18:40 ` Martin K. Petersen
@ 2025-01-02 18:44   ` Eric Biggers
  0 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2025-01-02 18:44 UTC (permalink / raw)
  To: Martin K. Petersen
  Cc: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap, Adrian Hunter, Alim Akhtar,
	Avri Altman, Bart Van Assche, Bjorn Andersson, Dmitry Baryshkov,
	James E . J . Bottomley, Jens Axboe, Konrad Dybcio,
	Manivannan Sadhasivam, Ulf Hansson

On Thu, Jan 02, 2025 at 01:40:48PM -0500, Martin K. Petersen wrote:
> 
> Eric,
> 
> > This patchset adds support for hardware-wrapped inline encryption
> > keys, a security feature supported by some SoCs. It adds the block and
> > fscrypt framework for the feature as well as support for it with UFS
> > on Qualcomm SoCs.
> 
> Applied patches 1-4 to 6.14/scsi-staging, thanks!
> 
> I had originally queued patch 1 in 6.13/scsi-fixes but moved it to 6.14
> and kept the stable tag to accommodate the rest of the series. Hope
> that's OK given the short runway we have left for this release.

Yes, that's fine.  Thanks.

- Eric


* Re: (subset) [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (16 preceding siblings ...)
  2025-01-02 18:40 ` Martin K. Petersen
@ 2025-01-09 18:27 ` Bjorn Andersson
  2025-01-10  8:44 ` Bartosz Golaszewski
  2025-01-10 21:16 ` (subset) " Martin K. Petersen
  19 siblings, 0 replies; 26+ messages in thread
From: Bjorn Andersson @ 2025-01-09 18:27 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap, Eric Biggers
  Cc: Adrian Hunter, Alim Akhtar, Avri Altman, Bart Van Assche,
	Dmitry Baryshkov, James E . J . Bottomley, Jens Axboe,
	Konrad Dybcio, Manivannan Sadhasivam, Martin K . Petersen,
	Ulf Hansson


On Thu, 12 Dec 2024 20:19:43 -0800, Eric Biggers wrote:
> This patchset is based on next-20241212 and is also available in git via:
> 
>     git fetch https://git.kernel.org/pub/scm/fs/fscrypt/linux.git wrapped-keys-v10
> 
> This patchset adds support for hardware-wrapped inline encryption keys, a
> security feature supported by some SoCs.  It adds the block and fscrypt
> framework for the feature as well as support for it with UFS on Qualcomm SoCs.
> 
> [...]

Applied, thanks!

[08/15] firmware: qcom: scm: add calls for wrapped key support
        commit: 1d45a1cd9f3ae849db868e07e5fee5e5b37eff55

Best regards,
-- 
Bjorn Andersson <andersson@kernel.org>


* Re: [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (17 preceding siblings ...)
  2025-01-09 18:27 ` (subset) " Bjorn Andersson
@ 2025-01-10  8:44 ` Bartosz Golaszewski
  2025-01-10 19:10   ` Eric Biggers
  2025-01-10 21:16 ` (subset) " Martin K. Petersen
  19 siblings, 1 reply; 26+ messages in thread
From: Bartosz Golaszewski @ 2025-01-10  8:44 UTC (permalink / raw)
  To: Eric Biggers
  Cc: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Gaurav Kashyap, Adrian Hunter, Alim Akhtar, Avri Altman,
	Bart Van Assche, Bjorn Andersson, Dmitry Baryshkov,
	James E . J . Bottomley, Jens Axboe, Konrad Dybcio,
	Manivannan Sadhasivam, Martin K . Petersen, Ulf Hansson

On Fri, Dec 13, 2024 at 5:20 AM Eric Biggers <ebiggers@kernel.org> wrote:
>
> This patchset is based on next-20241212 and is also available in git via:
>
>     git fetch https://git.kernel.org/pub/scm/fs/fscrypt/linux.git wrapped-keys-v10
>
> This patchset adds support for hardware-wrapped inline encryption keys, a
> security feature supported by some SoCs.  It adds the block and fscrypt
> framework for the feature as well as support for it with UFS on Qualcomm SoCs.
>
> This feature is described in full detail in the included Documentation changes.
> But to summarize, hardware-wrapped keys are inline encryption keys that are
> wrapped (encrypted) by a key internal to the hardware so that they can only be
> unwrapped (decrypted) by the hardware.  Initially keys are wrapped with a
> permanent hardware key, but during actual use they are re-wrapped with a
> per-boot ephemeral key for improved security.  The hardware supports importing
> keys as well as generating keys itself.
>
> This differs from the existing support for hardware-wrapped keys in the kernel
> crypto API (also called "hardware-bound keys" in some places) in the same way
> that the crypto API differs from blk-crypto: the crypto API is for general
> crypto operations, whereas blk-crypto is for inline storage encryption.
>
> This feature is already being used by Android downstream for several years
> (https://source.android.com/docs/security/features/encryption/hw-wrapped-keys),
> but on other platforms userspace support will be provided via fscryptctl and
> tests via xfstests (I have some old patches for this that need to be updated).
>
> Maintainers, please consider merging the following preparatory patches for 6.14:
>
>   - UFS / SCSI tree: patches 1-4
>   - MMC tree: patches 5-7
>   - Qualcomm / MSM tree: patch 8
>

IIUC, the following patches will have to wait for the v6.15 cycle?

[PATCH v10 9/15] soc: qcom: ice: make qcom_ice_program_key() take
struct blk_crypto_key
[PATCH v10 10/15] blk-crypto: add basic hardware-wrapped key support
[PATCH v10 11/15] blk-crypto: show supported key types in sysfs
[PATCH v10 12/15] blk-crypto: add ioctls to create and prepare
hardware-wrapped keys
[PATCH v10 13/15] fscrypt: add support for hardware-wrapped keys
[PATCH v10 14/15] soc: qcom: ice: add HWKM support to the ICE driver
[PATCH v10 15/15] ufs: qcom: add support for wrapped keys

Bartosz


* Re: [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys
  2025-01-10  8:44 ` Bartosz Golaszewski
@ 2025-01-10 19:10   ` Eric Biggers
  0 siblings, 0 replies; 26+ messages in thread
From: Eric Biggers @ 2025-01-10 19:10 UTC (permalink / raw)
  To: Bartosz Golaszewski
  Cc: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Gaurav Kashyap, Adrian Hunter, Alim Akhtar, Avri Altman,
	Bart Van Assche, Bjorn Andersson, Dmitry Baryshkov,
	James E . J . Bottomley, Jens Axboe, Konrad Dybcio,
	Manivannan Sadhasivam, Martin K . Petersen, Ulf Hansson

On Fri, Jan 10, 2025 at 09:44:07AM +0100, Bartosz Golaszewski wrote:
> On Fri, Dec 13, 2024 at 5:20 AM Eric Biggers <ebiggers@kernel.org> wrote:
> >
> > This patchset is based on next-20241212 and is also available in git via:
> >
> >     git fetch https://git.kernel.org/pub/scm/fs/fscrypt/linux.git wrapped-keys-v10
> >
> > This patchset adds support for hardware-wrapped inline encryption keys, a
> > security feature supported by some SoCs.  It adds the block and fscrypt
> > framework for the feature as well as support for it with UFS on Qualcomm SoCs.
> >
> > This feature is described in full detail in the included Documentation changes.
> > But to summarize, hardware-wrapped keys are inline encryption keys that are
> > wrapped (encrypted) by a key internal to the hardware so that they can only be
> > unwrapped (decrypted) by the hardware.  Initially keys are wrapped with a
> > permanent hardware key, but during actual use they are re-wrapped with a
> > per-boot ephemeral key for improved security.  The hardware supports importing
> > keys as well as generating keys itself.
> >
> > This differs from the existing support for hardware-wrapped keys in the kernel
> > crypto API (also called "hardware-bound keys" in some places) in the same way
> > that the crypto API differs from blk-crypto: the crypto API is for general
> > crypto operations, whereas blk-crypto is for inline storage encryption.
> >
> > This feature is already being used by Android downstream for several years
> > (https://source.android.com/docs/security/features/encryption/hw-wrapped-keys),
> > but on other platforms userspace support will be provided via fscryptctl and
> > tests via xfstests (I have some old patches for this that need to be updated).
> >
> > Maintainers, please consider merging the following preparatory patches for 6.14:
> >
> >   - UFS / SCSI tree: patches 1-4
> >   - MMC tree: patches 5-7
> >   - Qualcomm / MSM tree: patch 8
> >
> 
> IIUC The following patches will have to wait for the v6.15 cycle?
> 
> [PATCH v10 9/15] soc: qcom: ice: make qcom_ice_program_key() take
> struct blk_crypto_key
> [PATCH v10 10/15] blk-crypto: add basic hardware-wrapped key support
> [PATCH v10 11/15] blk-crypto: show supported key types in sysfs
> [PATCH v10 12/15] blk-crypto: add ioctls to create and prepare
> hardware-wrapped keys
> [PATCH v10 13/15] fscrypt: add support for hardware-wrapped keys
> [PATCH v10 14/15] soc: qcom: ice: add HWKM support to the ICE driver
> [PATCH v10 15/15] ufs: qcom: add support for wrapped keys

Yes, that's correct.

- Eric


* Re: (subset) [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys
  2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
                   ` (18 preceding siblings ...)
  2025-01-10  8:44 ` Bartosz Golaszewski
@ 2025-01-10 21:16 ` Martin K. Petersen
  19 siblings, 0 replies; 26+ messages in thread
From: Martin K. Petersen @ 2025-01-10 21:16 UTC (permalink / raw)
  To: linux-block, linux-fscrypt, linux-mmc, linux-scsi, linux-arm-msm,
	Bartosz Golaszewski, Gaurav Kashyap, Eric Biggers
  Cc: Martin K . Petersen, Adrian Hunter, Alim Akhtar, Avri Altman,
	Bart Van Assche, Bjorn Andersson, Dmitry Baryshkov,
	James E . J . Bottomley, Jens Axboe, Konrad Dybcio,
	Manivannan Sadhasivam, Ulf Hansson

On Thu, 12 Dec 2024 20:19:43 -0800, Eric Biggers wrote:

> This patchset is based on next-20241212 and is also available in git via:
> 
>     git fetch https://git.kernel.org/pub/scm/fs/fscrypt/linux.git wrapped-keys-v10
> 
> This patchset adds support for hardware-wrapped inline encryption keys, a
> security feature supported by some SoCs.  It adds the block and fscrypt
> framework for the feature as well as support for it with UFS on Qualcomm SoCs.
> 
> [...]

Applied to 6.14/scsi-queue, thanks!

[02/15] ufs: crypto: add ufs_hba_from_crypto_profile()
        https://git.kernel.org/mkp/scsi/c/75d0c649eca4
[03/15] ufs: qcom: convert to use UFSHCD_QUIRK_CUSTOM_CRYPTO_PROFILE
        https://git.kernel.org/mkp/scsi/c/30b32c647cf3
[04/15] ufs: crypto: remove ufs_hba_variant_ops::program_key
        https://git.kernel.org/mkp/scsi/c/409f21010d92

-- 
Martin K. Petersen	Oracle Linux Engineering


end of thread (newest message: 2025-01-10 21:17 UTC)

Thread overview: 26+ messages
2024-12-13  4:19 [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
2024-12-13  4:19 ` [PATCH v10 01/15] ufs: qcom: fix crypto key eviction Eric Biggers
2024-12-13  4:19 ` [PATCH v10 02/15] ufs: crypto: add ufs_hba_from_crypto_profile() Eric Biggers
2024-12-13  4:19 ` [PATCH v10 03/15] ufs: qcom: convert to use UFSHCD_QUIRK_CUSTOM_CRYPTO_PROFILE Eric Biggers
2024-12-13  4:19 ` [PATCH v10 04/15] ufs: crypto: remove ufs_hba_variant_ops::program_key Eric Biggers
2024-12-13  4:19 ` [PATCH v10 05/15] mmc: sdhci-msm: fix crypto key eviction Eric Biggers
2024-12-19 13:48   ` Ulf Hansson
2024-12-13  4:19 ` [PATCH v10 06/15] mmc: crypto: add mmc_from_crypto_profile() Eric Biggers
2024-12-19 13:48   ` Ulf Hansson
2024-12-13  4:19 ` [PATCH v10 07/15] mmc: sdhci-msm: convert to use custom crypto profile Eric Biggers
2024-12-19 13:48   ` Ulf Hansson
2024-12-13  4:19 ` [PATCH v10 08/15] firmware: qcom: scm: add calls for wrapped key support Eric Biggers
2024-12-13  4:19 ` [PATCH v10 09/15] soc: qcom: ice: make qcom_ice_program_key() take struct blk_crypto_key Eric Biggers
2024-12-13  4:19 ` [PATCH v10 10/15] blk-crypto: add basic hardware-wrapped key support Eric Biggers
2024-12-13  4:19 ` [PATCH v10 11/15] blk-crypto: show supported key types in sysfs Eric Biggers
2024-12-13  4:19 ` [PATCH v10 12/15] blk-crypto: add ioctls to create and prepare hardware-wrapped keys Eric Biggers
2024-12-13  4:19 ` [PATCH v10 13/15] fscrypt: add support for " Eric Biggers
2024-12-13  4:19 ` [PATCH v10 14/15] soc: qcom: ice: add HWKM support to the ICE driver Eric Biggers
2024-12-13  4:19 ` [PATCH v10 15/15] ufs: qcom: add support for wrapped keys Eric Biggers
2025-01-02 18:38 ` [PATCH v10 00/15] Support for hardware-wrapped inline encryption keys Eric Biggers
2025-01-02 18:40 ` Martin K. Petersen
2025-01-02 18:44   ` Eric Biggers
2025-01-09 18:27 ` (subset) " Bjorn Andersson
2025-01-10  8:44 ` Bartosz Golaszewski
2025-01-10 19:10   ` Eric Biggers
2025-01-10 21:16 ` (subset) " Martin K. Petersen
