* [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
@ 2025-07-15 20:10 Bart Van Assche
2025-07-15 20:10 ` [PATCH v3 1/7] block: Improve blk_crypto_fallback_split_bio_if_needed() Bart Van Assche
` (8 more replies)
0 siblings, 9 replies; 21+ messages in thread
From: Bart Van Assche @ 2025-07-15 20:10 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Eric Biggers, Bart Van Assche
Hi Jens,
When using the crypto fallback code, large bios are split twice. A first
time by bio_split_to_limits() and a second time by the crypto fallback
code. This causes bios not to be submitted in LBA order and hence triggers
write errors for zoned block devices. This patch series fixes this by
splitting bios once. Please consider this patch series for the next merge
window.
Thanks,
Bart.
Changes compared to v2:
- Added a patch that optimizes blk_crypto_max_io_size().
- Added three patches that change calling conventions in the crypto fallback
code.
- Added a patch to remove crypto_bio_split.
- Moved the blk_crypto_max_io_size() call into get_max_io_size().
Changes compared to v1:
- Dropped support for bio-based drivers that do not call bio_split_to_limits().
- Removed the BLK_FEAT_CALLS_BIO_SPLIT_TO_LIMITS and BIO_HAS_BEEN_SPLIT flags.
Bart Van Assche (7):
block: Improve blk_crypto_fallback_split_bio_if_needed()
block: Split blk_crypto_fallback_split_bio_if_needed()
block: Modify the blk_crypto_bio_prep() calling convention
block: Modify the blk_crypto_fallback_bio_prep() calling convention
block: Change the blk_crypto_fallback_encrypt_bio() calling convention
block: Rework splitting of encrypted bios
block, crypto: Remove crypto_bio_split
block/blk-core.c | 3 --
block/blk-crypto-fallback.c | 85 ++++++++++++++-----------------------
block/blk-crypto-internal.h | 25 +++++++----
block/blk-crypto.c | 26 ++++++------
block/blk-merge.c | 9 +++-
5 files changed, 68 insertions(+), 80 deletions(-)
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v3 1/7] block: Improve blk_crypto_fallback_split_bio_if_needed()
2025-07-15 20:10 [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Bart Van Assche
@ 2025-07-15 20:10 ` Bart Van Assche
2025-07-16 11:49 ` John Garry
2025-07-15 20:10 ` [PATCH v3 2/7] block: Split blk_crypto_fallback_split_bio_if_needed() Bart Van Assche
` (7 subsequent siblings)
8 siblings, 1 reply; 21+ messages in thread
From: Bart Van Assche @ 2025-07-15 20:10 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Eric Biggers, Bart Van Assche,
John Garry, Eric Biggers
Remove the assumption that bv_len is a multiple of 512 bytes since this is
not guaranteed by the block layer. This assumption may cause this function
to return a smaller value than it should. This is harmless from a
correctness point of view but may result in suboptimal performance.
Note: unsigned int is sufficient for num_bytes since bio_for_each_segment()
yields at most one page per iteration and since PAGE_SIZE * BIO_MAX_VECS
fits in an unsigned int.
Suggested-by: John Garry <john.g.garry@oracle.com>
Fixes: 488f6682c832 ("block: blk-crypto-fallback for Inline Encryption")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-crypto-fallback.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 005c9157ffb3..3c914d2c054f 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -213,15 +213,16 @@ static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
{
struct bio *bio = *bio_ptr;
unsigned int i = 0;
- unsigned int num_sectors = 0;
+ unsigned int num_bytes = 0, num_sectors;
struct bio_vec bv;
struct bvec_iter iter;
bio_for_each_segment(bv, bio, iter) {
- num_sectors += bv.bv_len >> SECTOR_SHIFT;
+ num_bytes += bv.bv_len;
if (++i == BIO_MAX_VECS)
break;
}
+ num_sectors = num_bytes >> SECTOR_SHIFT;
if (num_sectors < bio_sectors(bio)) {
struct bio *split_bio;
* [PATCH v3 2/7] block: Split blk_crypto_fallback_split_bio_if_needed()
2025-07-15 20:10 [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Bart Van Assche
2025-07-15 20:10 ` [PATCH v3 1/7] block: Improve blk_crypto_fallback_split_bio_if_needed() Bart Van Assche
@ 2025-07-15 20:10 ` Bart Van Assche
2025-07-15 20:10 ` [PATCH v3 3/7] block: Modify the blk_crypto_bio_prep() calling convention Bart Van Assche
` (6 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Bart Van Assche @ 2025-07-15 20:10 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Eric Biggers, Bart Van Assche
Prepare for calling blk_crypto_max_io_size() from the bio splitting code.
No functionality has been changed.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-crypto-fallback.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 3c914d2c054f..98bba0cf89cc 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -209,11 +209,15 @@ blk_crypto_fallback_alloc_cipher_req(struct blk_crypto_keyslot *slot,
return true;
}
-static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
+/*
+ * The encryption fallback code allocates bounce pages individually. This limits
+ * the bio size supported by the encryption fallback code. This function
+ * calculates the upper limit for the bio size.
+ */
+static unsigned int blk_crypto_max_io_size(struct bio *bio)
{
- struct bio *bio = *bio_ptr;
unsigned int i = 0;
- unsigned int num_bytes = 0, num_sectors;
+ unsigned int num_bytes = 0;
struct bio_vec bv;
struct bvec_iter iter;
@@ -222,7 +226,14 @@ static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
if (++i == BIO_MAX_VECS)
break;
}
- num_sectors = num_bytes >> SECTOR_SHIFT;
+ return num_bytes >> SECTOR_SHIFT;
+}
+
+static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
+{
+ struct bio *bio = *bio_ptr;
+ unsigned int num_sectors = blk_crypto_max_io_size(bio);
+
if (num_sectors < bio_sectors(bio)) {
struct bio *split_bio;
* [PATCH v3 3/7] block: Modify the blk_crypto_bio_prep() calling convention
2025-07-15 20:10 [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Bart Van Assche
2025-07-15 20:10 ` [PATCH v3 1/7] block: Improve blk_crypto_fallback_split_bio_if_needed() Bart Van Assche
2025-07-15 20:10 ` [PATCH v3 2/7] block: Split blk_crypto_fallback_split_bio_if_needed() Bart Van Assche
@ 2025-07-15 20:10 ` Bart Van Assche
2025-07-15 20:10 ` [PATCH v3 4/7] block: Modify the blk_crypto_fallback_bio_prep() " Bart Van Assche
` (5 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Bart Van Assche @ 2025-07-15 20:10 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Eric Biggers, Bart Van Assche
Return a bio pointer instead of accepting a struct bio ** argument.
No functionality has been changed.
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-core.c | 3 ++-
block/blk-crypto-internal.h | 10 +++++-----
block/blk-crypto.c | 24 +++++++++++-------------
3 files changed, 18 insertions(+), 19 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index fdac48aec5ef..2c3c8576aa9b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -626,7 +626,8 @@ static void __submit_bio(struct bio *bio)
/* If plug is not used, add new plug here to cache nsecs time. */
struct blk_plug plug;
- if (unlikely(!blk_crypto_bio_prep(&bio)))
+ bio = blk_crypto_bio_prep(bio);
+ if (unlikely(!bio))
return;
blk_start_plug(&plug);
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index ccf6dff6ff6b..e2fd5a3221d3 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -165,12 +165,12 @@ static inline void bio_crypt_do_front_merge(struct request *rq,
#endif
}
-bool __blk_crypto_bio_prep(struct bio **bio_ptr);
-static inline bool blk_crypto_bio_prep(struct bio **bio_ptr)
+struct bio *__blk_crypto_bio_prep(struct bio *bio);
+static inline struct bio *blk_crypto_bio_prep(struct bio *bio)
{
- if (bio_has_crypt_ctx(*bio_ptr))
- return __blk_crypto_bio_prep(bio_ptr);
- return true;
+ if (bio_has_crypt_ctx(bio))
+ return __blk_crypto_bio_prep(bio);
+ return bio;
}
blk_status_t __blk_crypto_rq_get_keyslot(struct request *rq);
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 4b1ad84d1b5a..84efb65fc51c 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -261,7 +261,7 @@ void __blk_crypto_free_request(struct request *rq)
/**
* __blk_crypto_bio_prep - Prepare bio for inline encryption
*
- * @bio_ptr: pointer to original bio pointer
+ * @bio: bio to be encrypted
*
* If the bio crypt context provided for the bio is supported by the underlying
* device's inline encryption hardware, do nothing.
@@ -271,18 +271,16 @@ void __blk_crypto_free_request(struct request *rq)
* blk-crypto may choose to split the bio into 2 - the first one that will
* continue to be processed and the second one that will be resubmitted via
* submit_bio_noacct. A bounce bio will be allocated to encrypt the contents
- * of the aforementioned "first one", and *bio_ptr will be updated to this
- * bounce bio.
+ * of the aforementioned "first one". The pointer to this bio will be returned.
*
* Caller must ensure bio has bio_crypt_ctx.
*
- * Return: true on success; false on error (and bio->bi_status will be set
- * appropriately, and bio_endio() will have been called so bio
- * submission should abort).
+ * Return: encrypted bio on success or %NULL on error. Additionally, if an error
+ * occurred, bio->bi_status will be set and bio_endio() will have been
+ * called.
*/
-bool __blk_crypto_bio_prep(struct bio **bio_ptr)
+struct bio *__blk_crypto_bio_prep(struct bio *bio)
{
- struct bio *bio = *bio_ptr;
const struct blk_crypto_key *bc_key = bio->bi_crypt_context->bc_key;
/* Error if bio has no data. */
@@ -302,12 +300,12 @@ bool __blk_crypto_bio_prep(struct bio **bio_ptr)
*/
if (blk_crypto_config_supported_natively(bio->bi_bdev,
&bc_key->crypto_cfg))
- return true;
- if (blk_crypto_fallback_bio_prep(bio_ptr))
- return true;
+ return bio;
+ if (blk_crypto_fallback_bio_prep(&bio))
+ return bio;
fail:
- bio_endio(*bio_ptr);
- return false;
+ bio_endio(bio);
+ return NULL;
}
int __blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
* [PATCH v3 4/7] block: Modify the blk_crypto_fallback_bio_prep() calling convention
2025-07-15 20:10 [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Bart Van Assche
` (2 preceding siblings ...)
2025-07-15 20:10 ` [PATCH v3 3/7] block: Modify the blk_crypto_bio_prep() calling convention Bart Van Assche
@ 2025-07-15 20:10 ` Bart Van Assche
2025-07-15 20:10 ` [PATCH v3 5/7] block: Change the blk_crypto_fallback_encrypt_bio() " Bart Van Assche
` (4 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Bart Van Assche @ 2025-07-15 20:10 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Eric Biggers, Bart Van Assche
Return a bio pointer instead of accepting a struct bio ** argument. No
functionality has been changed.
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-crypto-fallback.c | 16 ++++++++--------
block/blk-crypto-internal.h | 8 ++++----
block/blk-crypto.c | 6 ++++--
3 files changed, 16 insertions(+), 14 deletions(-)
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 98bba0cf89cc..bd668a52817d 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -484,7 +484,7 @@ static void blk_crypto_fallback_decrypt_endio(struct bio *bio)
/**
* blk_crypto_fallback_bio_prep - Prepare a bio to use fallback en/decryption
*
- * @bio_ptr: pointer to the bio to prepare
+ * @bio: bio to prepare
*
* If bio is doing a WRITE operation, this splits the bio into two parts if it's
* too big (see blk_crypto_fallback_split_bio_if_needed()). It then allocates a
@@ -499,28 +499,28 @@ static void blk_crypto_fallback_decrypt_endio(struct bio *bio)
* of the stack except for blk-integrity (blk-integrity and blk-crypto are not
* currently supported together).
*
- * Return: true on success. Sets bio->bi_status and returns false on error.
+ * Return: a bio pointer on success; %NULL upon failure. Sets bio->bi_status on
+ * error.
*/
-bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
+struct bio *blk_crypto_fallback_bio_prep(struct bio *bio)
{
- struct bio *bio = *bio_ptr;
struct bio_crypt_ctx *bc = bio->bi_crypt_context;
struct bio_fallback_crypt_ctx *f_ctx;
if (WARN_ON_ONCE(!tfms_inited[bc->bc_key->crypto_cfg.crypto_mode])) {
/* User didn't call blk_crypto_start_using_key() first */
bio->bi_status = BLK_STS_IOERR;
- return false;
+ return NULL;
}
if (!__blk_crypto_cfg_supported(blk_crypto_fallback_profile,
&bc->bc_key->crypto_cfg)) {
bio->bi_status = BLK_STS_NOTSUPP;
- return false;
+ return NULL;
}
if (bio_data_dir(bio) == WRITE)
- return blk_crypto_fallback_encrypt_bio(bio_ptr);
+ return blk_crypto_fallback_encrypt_bio(&bio) ? bio : NULL;
/*
* bio READ case: Set up a f_ctx in the bio's bi_private and set the
@@ -535,7 +535,7 @@ bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
bio->bi_end_io = blk_crypto_fallback_decrypt_endio;
bio_crypt_free_ctx(bio);
- return true;
+ return bio;
}
int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index e2fd5a3221d3..212e5bbfc95f 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -219,7 +219,7 @@ static inline int blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num);
-bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr);
+struct bio *blk_crypto_fallback_bio_prep(struct bio *bio);
int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key);
@@ -232,11 +232,11 @@ blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num)
return -ENOPKG;
}
-static inline bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
+static inline struct bio *blk_crypto_fallback_bio_prep(struct bio *bio)
{
pr_warn_once("crypto API fallback disabled; failing request.\n");
- (*bio_ptr)->bi_status = BLK_STS_NOTSUPP;
- return false;
+ bio->bi_status = BLK_STS_NOTSUPP;
+ return NULL;
}
static inline int
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 84efb65fc51c..4bf8c212aee8 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -281,6 +281,7 @@ void __blk_crypto_free_request(struct request *rq)
*/
struct bio *__blk_crypto_bio_prep(struct bio *bio)
{
+ struct bio *new_bio;
const struct blk_crypto_key *bc_key = bio->bi_crypt_context->bc_key;
/* Error if bio has no data. */
@@ -301,8 +302,9 @@ struct bio *__blk_crypto_bio_prep(struct bio *bio)
if (blk_crypto_config_supported_natively(bio->bi_bdev,
&bc_key->crypto_cfg))
return bio;
- if (blk_crypto_fallback_bio_prep(&bio))
- return bio;
+ new_bio = blk_crypto_fallback_bio_prep(bio);
+ if (new_bio)
+ return new_bio;
fail:
bio_endio(bio);
return NULL;
* [PATCH v3 5/7] block: Change the blk_crypto_fallback_encrypt_bio() calling convention
2025-07-15 20:10 [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Bart Van Assche
` (3 preceding siblings ...)
2025-07-15 20:10 ` [PATCH v3 4/7] block: Modify the blk_crypto_fallback_bio_prep() " Bart Van Assche
@ 2025-07-15 20:10 ` Bart Van Assche
2025-07-15 20:10 ` [PATCH v3 6/7] block: Rework splitting of encrypted bios Bart Van Assche
` (3 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Bart Van Assche @ 2025-07-15 20:10 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Eric Biggers, Bart Van Assche
Return a bio pointer instead of accepting a struct bio ** argument. No
functionality has been changed.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-crypto-fallback.c | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index bd668a52817d..ba5f1c887574 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -268,13 +268,12 @@ static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
/*
* The crypto API fallback's encryption routine.
* Allocate a bounce bio for encryption, encrypt the input bio using crypto API,
- * and replace *bio_ptr with the bounce bio. May split input bio if it's too
- * large. Returns true on success. Returns false and sets bio->bi_status on
- * error.
+ * and return the bounce bio. May split input bio if it's too large. Returns the
+ * bounce bio on success. Returns %NULL and sets bio->bi_status on error.
*/
-static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
+static struct bio *blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
{
- struct bio *src_bio, *enc_bio;
+ struct bio *enc_bio, *ret = NULL;
struct bio_crypt_ctx *bc;
struct blk_crypto_keyslot *slot;
int data_unit_size;
@@ -284,14 +283,12 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
struct scatterlist src, dst;
union blk_crypto_iv iv;
unsigned int i, j;
- bool ret = false;
blk_status_t blk_st;
/* Split the bio if it's too big for single page bvec */
- if (!blk_crypto_fallback_split_bio_if_needed(bio_ptr))
- return false;
+ if (!blk_crypto_fallback_split_bio_if_needed(&src_bio))
+ return NULL;
- src_bio = *bio_ptr;
bc = src_bio->bi_crypt_context;
data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
@@ -299,7 +296,7 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
enc_bio = blk_crypto_fallback_clone_bio(src_bio);
if (!enc_bio) {
src_bio->bi_status = BLK_STS_RESOURCE;
- return false;
+ return NULL;
}
/*
@@ -362,9 +359,7 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
enc_bio->bi_private = src_bio;
enc_bio->bi_end_io = blk_crypto_fallback_encrypt_endio;
- *bio_ptr = enc_bio;
- ret = true;
-
+ ret = enc_bio;
enc_bio = NULL;
goto out_free_ciph_req;
@@ -520,7 +515,7 @@ struct bio *blk_crypto_fallback_bio_prep(struct bio *bio)
}
if (bio_data_dir(bio) == WRITE)
- return blk_crypto_fallback_encrypt_bio(&bio) ? bio : NULL;
+ return blk_crypto_fallback_encrypt_bio(bio);
/*
* bio READ case: Set up a f_ctx in the bio's bi_private and set the
* [PATCH v3 6/7] block: Rework splitting of encrypted bios
2025-07-15 20:10 [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Bart Van Assche
` (4 preceding siblings ...)
2025-07-15 20:10 ` [PATCH v3 5/7] block: Change the blk_crypto_fallback_encrypt_bio() " Bart Van Assche
@ 2025-07-15 20:10 ` Bart Van Assche
2025-07-15 20:10 ` [PATCH v3 7/7] block, crypto: Remove crypto_bio_split Bart Van Assche
` (2 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Bart Van Assche @ 2025-07-15 20:10 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Eric Biggers, Bart Van Assche
Modify splitting of encrypted bios as follows:
- Stop calling blk_crypto_bio_prep() for bio-based block drivers that do not
call bio_split_to_limits().
- For request-based block drivers and for bio-based block drivers that call
bio_split_to_limits(), call blk_crypto_bio_prep() after bio splitting
happened instead of before bio splitting happened.
- In bio_split_rw(), restrict the bio size to the smaller size of what is
supported by the block driver and the crypto fallback code.
The advantages of these changes are as follows:
- This patch fixes write errors on zoned storage caused by out-of-order
submission of bios. This out-of-order submission happens if both the
crypto fallback code and bio_split_to_limits() split a bio.
- Less code duplication. The crypto fallback code now calls
bio_split_to_limits() instead of open-coding it.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-core.c | 4 ----
block/blk-crypto-fallback.c | 41 +++++++++----------------------------
block/blk-crypto-internal.h | 7 +++++++
block/blk-merge.c | 9 ++++++--
4 files changed, 24 insertions(+), 37 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 2c3c8576aa9b..5af5f8c3cd06 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -626,10 +626,6 @@ static void __submit_bio(struct bio *bio)
/* If plug is not used, add new plug here to cache nsecs time. */
struct blk_plug plug;
- bio = blk_crypto_bio_prep(bio);
- if (unlikely(!bio))
- return;
-
blk_start_plug(&plug);
if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index ba5f1c887574..b54ad41e4192 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -214,7 +214,7 @@ blk_crypto_fallback_alloc_cipher_req(struct blk_crypto_keyslot *slot,
* the bio size supported by the encryption fallback code. This function
* calculates the upper limit for the bio size.
*/
-static unsigned int blk_crypto_max_io_size(struct bio *bio)
+unsigned int blk_crypto_max_io_size(struct bio *bio)
{
unsigned int i = 0;
unsigned int num_bytes = 0;
@@ -229,28 +229,6 @@ static unsigned int blk_crypto_max_io_size(struct bio *bio)
return num_bytes >> SECTOR_SHIFT;
}
-static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
-{
- struct bio *bio = *bio_ptr;
- unsigned int num_sectors = blk_crypto_max_io_size(bio);
-
- if (num_sectors < bio_sectors(bio)) {
- struct bio *split_bio;
-
- split_bio = bio_split(bio, num_sectors, GFP_NOIO,
- &crypto_bio_split);
- if (IS_ERR(split_bio)) {
- bio->bi_status = BLK_STS_RESOURCE;
- return false;
- }
- bio_chain(split_bio, bio);
- submit_bio_noacct(bio);
- *bio_ptr = split_bio;
- }
-
- return true;
-}
-
union blk_crypto_iv {
__le64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
u8 bytes[BLK_CRYPTO_MAX_IV_SIZE];
@@ -268,8 +246,8 @@ static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
/*
* The crypto API fallback's encryption routine.
* Allocate a bounce bio for encryption, encrypt the input bio using crypto API,
- * and return the bounce bio. May split input bio if it's too large. Returns the
- * bounce bio on success. Returns %NULL and sets bio->bi_status on error.
+ * and return the bounce bio. Returns the bounce bio on success. Returns %NULL
+ * and sets bio->bi_status on error.
*/
static struct bio *blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
{
@@ -285,9 +263,12 @@ static struct bio *blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
unsigned int i, j;
blk_status_t blk_st;
- /* Split the bio if it's too big for single page bvec */
- if (!blk_crypto_fallback_split_bio_if_needed(&src_bio))
+ /* Verify that bio splitting has occurred. */
+ if (WARN_ON_ONCE(bio_sectors(src_bio) >
+ blk_crypto_max_io_size(src_bio))) {
+ src_bio->bi_status = BLK_STS_IOERR;
return NULL;
+ }
bc = src_bio->bi_crypt_context;
data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
@@ -481,10 +462,8 @@ static void blk_crypto_fallback_decrypt_endio(struct bio *bio)
*
* @bio: bio to prepare
*
- * If bio is doing a WRITE operation, this splits the bio into two parts if it's
- * too big (see blk_crypto_fallback_split_bio_if_needed()). It then allocates a
- * bounce bio for the first part, encrypts it, and updates bio_ptr to point to
- * the bounce bio.
+ * For WRITE operations, a bounce bio is allocated, encrypted, and *bio_ptr is
+ * updated to point to the bounce bio.
*
* For a READ operation, we mark the bio for decryption by using bi_private and
* bi_end_io.
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index 212e5bbfc95f..920cfc14c244 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -223,6 +223,8 @@ struct bio *blk_crypto_fallback_bio_prep(struct bio *bio);
int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key);
+unsigned int blk_crypto_max_io_size(struct bio *bio);
+
#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
static inline int
@@ -245,6 +247,11 @@ blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
return 0;
}
+static inline unsigned int blk_crypto_max_io_size(struct bio *bio)
+{
+ return UINT_MAX;
+}
+
#endif /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
#endif /* __LINUX_BLK_CRYPTO_INTERNAL_H */
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 70d704615be5..f4e210279cd3 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -9,6 +9,7 @@
#include <linux/blk-integrity.h>
#include <linux/part_stat.h>
#include <linux/blk-cgroup.h>
+#include <linux/blk-crypto.h>
#include <trace/events/block.h>
@@ -124,10 +125,12 @@ static struct bio *bio_submit_split(struct bio *bio, int split_sectors)
trace_block_split(split, bio->bi_iter.bi_sector);
WARN_ON_ONCE(bio_zone_write_plugging(bio));
submit_bio_noacct(bio);
- return split;
+
+ bio = split;
}
- return bio;
+ return blk_crypto_bio_prep(bio);
+
error:
bio->bi_status = errno_to_blk_status(split_sectors);
bio_endio(bio);
@@ -211,6 +214,8 @@ static inline unsigned get_max_io_size(struct bio *bio,
else
max_sectors = lim->max_sectors;
+ max_sectors = min(max_sectors, blk_crypto_max_io_size(bio));
+
if (boundary_sectors) {
max_sectors = min(max_sectors,
blk_boundary_sectors_left(bio->bi_iter.bi_sector,
* [PATCH v3 7/7] block, crypto: Remove crypto_bio_split
2025-07-15 20:10 [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Bart Van Assche
` (5 preceding siblings ...)
2025-07-15 20:10 ` [PATCH v3 6/7] block: Rework splitting of encrypted bios Bart Van Assche
@ 2025-07-15 20:10 ` Bart Van Assche
2025-07-15 21:44 ` [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Eric Biggers
2025-07-16 16:12 ` Bart Van Assche
8 siblings, 0 replies; 21+ messages in thread
From: Bart Van Assche @ 2025-07-15 20:10 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Eric Biggers, Bart Van Assche
crypto_bio_split is no longer used since the bio splitting code has been
removed from the crypto fallback code. Hence, also remove crypto_bio_split.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-crypto-fallback.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index b54ad41e4192..b08e4d89d9a6 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -81,7 +81,6 @@ static struct blk_crypto_fallback_keyslot {
static struct blk_crypto_profile *blk_crypto_fallback_profile;
static struct workqueue_struct *blk_crypto_wq;
static mempool_t *blk_crypto_bounce_page_pool;
-static struct bio_set crypto_bio_split;
/*
* This is the key we set when evicting a keyslot. This *should* be the all 0's
@@ -528,16 +527,12 @@ static int blk_crypto_fallback_init(void)
get_random_bytes(blank_key, sizeof(blank_key));
- err = bioset_init(&crypto_bio_split, 64, 0, 0);
- if (err)
- goto out;
-
/* Dynamic allocation is needed because of lockdep_register_key(). */
blk_crypto_fallback_profile =
kzalloc(sizeof(*blk_crypto_fallback_profile), GFP_KERNEL);
if (!blk_crypto_fallback_profile) {
err = -ENOMEM;
- goto fail_free_bioset;
+ goto out;
}
err = blk_crypto_profile_init(blk_crypto_fallback_profile,
@@ -597,8 +592,6 @@ static int blk_crypto_fallback_init(void)
blk_crypto_profile_destroy(blk_crypto_fallback_profile);
fail_free_profile:
kfree(blk_crypto_fallback_profile);
-fail_free_bioset:
- bioset_exit(&crypto_bio_split);
out:
return err;
}
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-15 20:10 [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Bart Van Assche
` (6 preceding siblings ...)
2025-07-15 20:10 ` [PATCH v3 7/7] block, crypto: Remove crypto_bio_split Bart Van Assche
@ 2025-07-15 21:44 ` Eric Biggers
2025-07-15 22:35 ` Bart Van Assche
` (2 more replies)
2025-07-16 16:12 ` Bart Van Assche
8 siblings, 3 replies; 21+ messages in thread
From: Eric Biggers @ 2025-07-15 21:44 UTC (permalink / raw)
To: Bart Van Assche; +Cc: Jens Axboe, linux-block, Christoph Hellwig
On Tue, Jul 15, 2025 at 01:10:48PM -0700, Bart Van Assche wrote:
> Changes compared to v2:
> - Added a patch that optimizes blk_crypto_max_io_size().
> - Added three patches that change calling conventions in the crypto fallback
> code.
> - Added a patch to remove crypto_bio_split.
> - Moved the blk_crypto_max_io_size() call into get_max_io_size().
But this doesn't address the issue we're discussing: this patchset can
silently leave data unencrypted on some block devices.
If it's really only a "few random drivers that almost no one cares
about", then it's possible we can drop the blk-crypto-fallback support
for them anyway, as Christoph is suggesting.
However, it needs to be done safely. That means at least making
blk_crypto_config_supported() continue to return false when called on a
block_device that doesn't support any form of blk-crypto (either native
or fallback), and likewise making blk_crypto_start_using_key() continue
to return an error. Ideally, the actual I/O would also continue to fail
if it's attempted anyway.
Overall, it's a bit frustrating seeing these patches go by that propose
to silently leave data unencrypted when upper layers requested that it
be encrypted. IMO this is actually a point against handling encryption
in the block layer... The whole point of storage encryption (whether
fscrypt, dm-crypt, dm-inlinecrypt, or something else) is that the data
is actually encrypted. But if the actual encryption is done using code
whose developers / maintainers don't really consider encryption to be a
priority, that's not a great place to be.
- Eric
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-15 21:44 ` [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Eric Biggers
@ 2025-07-15 22:35 ` Bart Van Assche
2025-07-16 1:14 ` Eric Biggers
2025-07-16 11:25 ` Christoph Hellwig
2025-07-17 4:43 ` Christoph Hellwig
2 siblings, 1 reply; 21+ messages in thread
From: Bart Van Assche @ 2025-07-15 22:35 UTC (permalink / raw)
To: Eric Biggers; +Cc: Jens Axboe, linux-block, Christoph Hellwig
On 7/15/25 2:44 PM, Eric Biggers wrote:
> But if the actual encryption is done using code
> whose developers / maintainers don't really consider encryption to be a
> priority, that's not a great place to be.
Why does the crypto fallback code clone a bio instead of encrypting /
decrypting in place? If encryption and decryption would happen in place,
the crypto fallback code wouldn't have to be called from the block layer
code but instead could be called from the block driver(s) that need
encryption fallback support.
Thanks,
Bart.
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-15 22:35 ` Bart Van Assche
@ 2025-07-16 1:14 ` Eric Biggers
0 siblings, 0 replies; 21+ messages in thread
From: Eric Biggers @ 2025-07-16 1:14 UTC (permalink / raw)
To: Bart Van Assche; +Cc: Jens Axboe, linux-block, Christoph Hellwig
On Tue, Jul 15, 2025 at 03:35:19PM -0700, Bart Van Assche wrote:
> On 7/15/25 2:44 PM, Eric Biggers wrote:
> > But if the actual encryption is done using code
> > whose developers / maintainers don't really consider encryption to be a
> > priority, that's not a great place to be.
>
> Why does the crypto fallback code clone a bio instead of encrypting /
> decrypting in place?
Decryption is already in-place. Encryption is not, and unfortunately it
can't be since the source data must not be changed. If you encrypt
in-place, then the data would get corrupted every time it's written.
- Eric
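[Editor's note] Eric's point can be illustrated with a toy model in plain Python; the cipher and the "page cache" here are stand-ins, not kernel APIs:

```python
def xor_cipher(data: bytes, key: int) -> bytes:
    # Stand-in for the real cipher; any invertible transform shows the issue.
    return bytes(b ^ key for b in data)

# The page cache keeps referencing the plaintext after a write completes,
# so the write path must not modify the source data.
page_cache = bytearray(b"plaintext page")

# Bounce-buffer encryption (what blk-crypto-fallback does): encrypt a copy.
on_disk = xor_cipher(bytes(page_cache), 0x5A)
assert page_cache == b"plaintext page"   # cached plaintext stays intact

# In-place encryption would leave ciphertext behind in the cache:
corrupted = bytearray(b"plaintext page")
corrupted[:] = xor_cipher(bytes(corrupted), 0x5A)
assert corrupted != b"plaintext page"    # readers would now see ciphertext
```

This is why every write through the fallback must clone the bio into bounce pages, while reads can decrypt in place once the data has landed in buffers the fallback owns.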
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-15 21:44 ` [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Eric Biggers
2025-07-15 22:35 ` Bart Van Assche
@ 2025-07-16 11:25 ` Christoph Hellwig
2025-07-17 4:43 ` Christoph Hellwig
2 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-07-16 11:25 UTC (permalink / raw)
To: Eric Biggers; +Cc: Bart Van Assche, Jens Axboe, linux-block, Christoph Hellwig
On Tue, Jul 15, 2025 at 09:44:56PM +0000, Eric Biggers wrote:
> However, it needs to be done safely. That means at least making
> blk_crypto_config_supported() continue to return false when called on a
> block_device that doesn't support any form of blk-crypto (either native
> or fallback), and likewise making blk_crypto_start_using_key() continue
> to return an error. Ideally, the actual I/O would also continue to fail
> if it's attempted anyway.
Yes, both are absolutely required.
* Re: [PATCH v3 1/7] block: Improve blk_crypto_fallback_split_bio_if_needed()
2025-07-15 20:10 ` [PATCH v3 1/7] block: Improve blk_crypto_fallback_split_bio_if_needed() Bart Van Assche
@ 2025-07-16 11:49 ` John Garry
2025-07-16 17:20 ` Bart Van Assche
0 siblings, 1 reply; 21+ messages in thread
From: John Garry @ 2025-07-16 11:49 UTC (permalink / raw)
To: Bart Van Assche, Jens Axboe
Cc: linux-block, Christoph Hellwig, Eric Biggers, Eric Biggers
On 15/07/2025 21:10, Bart Van Assche wrote:
> Remove the assumption that bv_len is a multiple of 512 bytes since this is
> not guaranteed by the block layer. This assumption may cause this function
> to return a smaller value than it should. This is harmless from a
> correctness point of view but may result in suboptimal performance.
>
> Note: unsigned int is sufficient for num_bytes since bio_for_each_segment()
> yields at most one page per iteration and since PAGE_SIZE * BIO_MAX_VECS
> fits in an unsigned int.
>
> Suggested-by: John Garry <john.g.garry@oracle.com>
> Fixes: 488f6682c832 ("block: blk-crypto-fallback for Inline Encryption")
is this really a fix?
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: John Garry <john.g.garry@oracle.com>
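[Editor's note] The commit message's claim that unsigned int suffices for num_bytes is easy to sanity-check. PAGE_SIZE = 4096 and BIO_MAX_VECS = 256 are assumptions about a common configuration here, not values quoted from the patch:

```python
PAGE_SIZE = 4096      # typical x86-64 value; larger on some architectures
BIO_MAX_VECS = 256    # current kernel limit on bvecs per bio

max_bytes = PAGE_SIZE * BIO_MAX_VECS
assert max_bytes == 1 << 20        # 1 MiB per bio at most
assert max_bytes < 2**32           # comfortably fits in unsigned int

# Even a 64 KiB page size stays far below UINT_MAX:
assert 65536 * BIO_MAX_VECS < 2**32
```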
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-15 20:10 [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Bart Van Assche
` (7 preceding siblings ...)
2025-07-15 21:44 ` [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Eric Biggers
@ 2025-07-16 16:12 ` Bart Van Assche
8 siblings, 0 replies; 21+ messages in thread
From: Bart Van Assche @ 2025-07-16 16:12 UTC (permalink / raw)
To: Eric Biggers, Christoph Hellwig; +Cc: linux-block, Jens Axboe
On 7/15/25 1:10 PM, Bart Van Assche wrote:
> [ ... ]
(replying to my own email)
If nobody objects I will integrate the patch below in patch 6/7 from
this series. This patch addresses all concerns about this patch series
that have been formulated so far and that I'm aware of, including
restoring inline encryption support for all block devices. This is
realized by moving the blk_crypto_bio_prep() call from
bio_submit_split() back to where it was originally, namely in
__submit_bio(). Because of this change it is necessary to restore
bio splitting in blk_crypto_fallback_encrypt_bio(). Instead of restoring
the custom bio splitting code, call bio_split_to_limits(). This call
takes the inline encryption fallback bio limits into account because
patch 6/7 modifies get_max_io_size().
Thanks,
Bart.
diff --git a/block/blk-core.c b/block/blk-core.c
index 5af5f8c3cd06..2c3c8576aa9b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -626,6 +626,10 @@ static void __submit_bio(struct bio *bio)
/* If plug is not used, add new plug here to cache nsecs time. */
struct blk_plug plug;
+ bio = blk_crypto_bio_prep(bio);
+ if (unlikely(!bio))
+ return;
+
blk_start_plug(&plug);
if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index b08e4d89d9a6..3b1c277e4375 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -262,12 +262,8 @@ static struct bio *blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
unsigned int i, j;
blk_status_t blk_st;
- /* Verify that bio splitting has occurred. */
- if (WARN_ON_ONCE(bio_sectors(src_bio) >
- blk_crypto_max_io_size(src_bio))) {
- src_bio->bi_status = BLK_STS_IOERR;
- return NULL;
- }
+ /* Split the bio if it's too big for single page bvec */
+ src_bio = bio_split_to_limits(src_bio);
bc = src_bio->bi_crypt_context;
data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index f4e210279cd3..990f2847c820 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -125,11 +125,10 @@ static struct bio *bio_submit_split(struct bio *bio, int split_sectors)
trace_block_split(split, bio->bi_iter.bi_sector);
WARN_ON_ONCE(bio_zone_write_plugging(bio));
submit_bio_noacct(bio);
-
- bio = split;
+ return split;
}
- return blk_crypto_bio_prep(bio);
+ return bio;
error:
bio->bi_status = errno_to_blk_status(split_sectors);
* Re: [PATCH v3 1/7] block: Improve blk_crypto_fallback_split_bio_if_needed()
2025-07-16 11:49 ` John Garry
@ 2025-07-16 17:20 ` Bart Van Assche
0 siblings, 0 replies; 21+ messages in thread
From: Bart Van Assche @ 2025-07-16 17:20 UTC (permalink / raw)
To: John Garry, Jens Axboe
Cc: linux-block, Christoph Hellwig, Eric Biggers, Eric Biggers
On 7/16/25 4:49 AM, John Garry wrote:
> On 15/07/2025 21:10, Bart Van Assche wrote:
>> Fixes: 488f6682c832 ("block: blk-crypto-fallback for Inline Encryption")
>
> is this really a fix?
I will drop the Fixes: tag before I repost this patch series.
Thanks,
Bart.
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-15 21:44 ` [PATCH v3 0/7] Fix bio splitting by the crypto fallback code Eric Biggers
2025-07-15 22:35 ` Bart Van Assche
2025-07-16 11:25 ` Christoph Hellwig
@ 2025-07-17 4:43 ` Christoph Hellwig
2025-07-17 17:58 ` Bart Van Assche
2025-07-18 15:37 ` Bart Van Assche
2 siblings, 2 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-07-17 4:43 UTC (permalink / raw)
To: Eric Biggers; +Cc: Bart Van Assche, Jens Axboe, linux-block, Christoph Hellwig
On Tue, Jul 15, 2025 at 09:44:56PM +0000, Eric Biggers wrote:
> Overall, it's a bit frustrating seeing these patches go by that propose
> to silently leave data unencrypted when upper layers requested that it
> be encrypted. IMO this is actually a point against handling encryption
> in the block layer... The whole point of storage encryption (whether
> fscrypt, dm-crypt, dm-inlinecrypt, or something else) is that the data
> is actually encrypted. But if the actual encryption is done using code
> whose developers / maintainers don't really consider encryption to be a
> priority, that's not a great place to be.
Getting back to this. While the tone is a bit snarky, it brings up a good
point. Relying on the block layer to ensure that data is always
encrypted seems like a bad idea, given that is really not what the block
layer is about, and definitely not high on the mind of anyone touching
the code. So I would not want to rely on the block layer developers to
ensure that data is encrypted properly through APIs no one believes are
part of that mission.
So I think you'd indeed be much better off not handling the (non-inline)
encryption in the block layer.
Doing that in fact sounds pretty easy - instead of calling the
blk-crypto-fallback code from inside the block layer, have the callers
call it instead of submit_bio when inline encryption is not actually
supported, e.g.
if (!blk_crypto_config_supported(bdev, &crypto_cfg))
blk_crypto_fallback_submit_bio(bio);
else
submit_bio(bio);
combined with checks in the low-level block code that a crypto context
never reaches the low-level submission paths (->submit_bio or
->queue_rq) when not supported.
That approach not only is much easier to verify for correct encryption
operation, but also makes things like bio splitting and the required
memory allocation for it less fragile.
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-17 4:43 ` Christoph Hellwig
@ 2025-07-17 17:58 ` Bart Van Assche
2025-07-18 8:17 ` Christoph Hellwig
2025-07-18 15:37 ` Bart Van Assche
1 sibling, 1 reply; 21+ messages in thread
From: Bart Van Assche @ 2025-07-17 17:58 UTC (permalink / raw)
To: Christoph Hellwig, Eric Biggers; +Cc: Jens Axboe, linux-block
On 7/16/25 9:43 PM, Christoph Hellwig wrote:
> Getting back to this. While the tone is a bit snarky, it brings up a good
> point. Relying on the block layer to ensure that data is always
> encrypted seems like a bad idea, given that is really not what the block
> layer is about, and definitely not high on the mind of anyone touching
> the code. So I would not want to rely on the block layer developers to
> ensure that data is encrypted properly through APIs no one believes are
> part of that mission.
>
> So I think you'd indeed be much better off not handling the (non-inline)
> encryption in the block layer.
>
> Doing that in fact sounds pretty easy - instead of calling the
> blk-crypto-fallback code from inside the block layer, have the callers
> call it instead of submit_bio when inline encryption is not actually
> supported, e.g.
>
> if (!blk_crypto_config_supported(bdev, &crypto_cfg))
> blk_crypto_fallback_submit_bio(bio);
> else
> submit_bio(bio);
>
> combined with checks in the low-level block code that a crypto context
> never reaches the low-level submission paths (->submit_bio or
> ->queue_rq) when not supported.
>
> That approach not only is much easier to verify for correct encryption
> operation, but also makes things like bio splitting and the required
> memory allocation for it less fragile.
Is there agreement that this work falls outside the scope of my patch
series that fixes the write errors encountered with the combination of
inline encryption and zoned storage?
Thanks,
Bart.
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-17 17:58 ` Bart Van Assche
@ 2025-07-18 8:17 ` Christoph Hellwig
0 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-07-18 8:17 UTC (permalink / raw)
To: Bart Van Assche; +Cc: Christoph Hellwig, Eric Biggers, Jens Axboe, linux-block
On Thu, Jul 17, 2025 at 10:58:09AM -0700, Bart Van Assche wrote:
> Is there agreement that this work falls outside the scope of my patch
> series that fixes the write errors encountered with the combination of
> inline encryption and zoned storage?
I think it fits exactly in the "scope", since the fscrypt maintainer and
author of blk-crypto brought up issues with the current approach.
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-17 4:43 ` Christoph Hellwig
2025-07-17 17:58 ` Bart Van Assche
@ 2025-07-18 15:37 ` Bart Van Assche
2025-07-18 16:50 ` Eric Biggers
1 sibling, 1 reply; 21+ messages in thread
From: Bart Van Assche @ 2025-07-18 15:37 UTC (permalink / raw)
To: Christoph Hellwig, Eric Biggers; +Cc: Jens Axboe, linux-block
On 7/16/25 9:43 PM, Christoph Hellwig wrote:
> Getting back to this. While the tone is a bit snarky, it brings up a good
> point. Relying on the block layer to ensure that data is always
> encrypted seems like a bad idea, given that is really not what the block
> layer is about, and definitely not high on the mind of anyone touching
> the code. So I would not want to rely on the block layer developers to
> ensure that data is encrypted properly through APIs no one believes are
> part of that mission.
>
> So I think you'd indeed be much better off not handling the (non-inline)
> encryption in the block layer.
>
> Doing that in fact sounds pretty easy - instead of calling the
> blk-crypto-fallback code from inside the block layer, have the callers
> call it instead of submit_bio when inline encryption is not actually
> supported, e.g.
>
> if (!blk_crypto_config_supported(bdev, &crypto_cfg))
> blk_crypto_fallback_submit_bio(bio);
> else
> submit_bio(bio);
>
> combined with checks in the low-level block code that a crypto context
> never reaches the low-level submission paths (->submit_bio or
> ->queue_rq) when not supported.
>
> That approach not only is much easier to verify for correct encryption
> operation, but also makes things like bio splitting and the required
> memory allocation for it less fragile.
Has it ever been considered to merge the inline encryption code into
dm-crypt or convert the inline encryption fallback code into a new dm
driver? If user space inserted a dm device between filesystems and
block devices when hardware encryption is not supported, that would
have the following advantages:
* No filesystem implementations would have to be modified.
* It would make it easier to deal with bio splitting since dm drivers
can set stacking limits in their .io_hints() callback.
Thanks,
Bart.
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-18 15:37 ` Bart Van Assche
@ 2025-07-18 16:50 ` Eric Biggers
2025-07-21 6:01 ` Christoph Hellwig
0 siblings, 1 reply; 21+ messages in thread
From: Eric Biggers @ 2025-07-18 16:50 UTC (permalink / raw)
To: Bart Van Assche; +Cc: Christoph Hellwig, Jens Axboe, linux-block
On Fri, Jul 18, 2025 at 08:37:04AM -0700, Bart Van Assche wrote:
> On 7/16/25 9:43 PM, Christoph Hellwig wrote:
> > Getting back to this. While the tone is a bit snarky, it brings up a good
> > point. Relying on the block layer to ensure that data is always
> > encrypted seems like a bad idea, given that is really not what the block
> > layer is about, and definitely not high on the mind of anyone touching
> > the code. So I would not want to rely on the block layer developers to
> > ensure that data is encrypted properly through APIs no one believes are
> > part of that mission.
> >
> > So I think you'd indeed be much better off not handling the (non-inline)
> > encryption in the block layer.
> >
> > Doing that in fact sounds pretty easy - instead of calling the
> > blk-crypto-fallback code from inside the block layer, have the callers
> > call it instead of submit_bio when inline encryption is not actually
> > supported, e.g.
> >
> > if (!blk_crypto_config_supported(bdev, &crypto_cfg))
> > blk_crypto_fallback_submit_bio(bio);
> > else
> > submit_bio(bio);
> >
> > combined with checks in the low-level block code that a crypto context
> > never reaches the low-level submission paths (->submit_bio or
> > ->queue_rq) when not supported.
> >
> > That approach not only is much easier to verify for correct encryption
> > operation, but also makes things like bio splitting and the required
> > memory allocation for it less fragile.
>
> Has it ever been considered to merge the inline encryption code into
> dm-crypt or convert the inline encryption fallback code into a new dm
> driver? If user space inserted a dm device between filesystems and
> block devices when hardware encryption is not supported, that would
> have the following advantages:
> * No filesystem implementations would have to be modified.
> * It would make it easier to deal with bio splitting since dm drivers
> can set stacking limits in their .io_hints() callback.
Yes, this was considered back when blk-crypto-fallback was being added.
But it is useful to support blk-crypto unconditionally without userspace
having to set up a dm device:
- It allows testing the inline encryption code paths in ext4 and f2fs
with xfstests on any system, just by adding the inlinecrypt mount
option. See e.g.
https://lore.kernel.org/linux-block/20191031205045.GG16197@mit.edu/
and
https://lore.kernel.org/linux-block/20200515170059.GA1009@sol.localdomain/
- It allows upper layers to just use blk-crypto instead of having both
blk-crypto and non-blk-crypto code paths. While I've been late on
converting fscrypt to actually rely on this, it still might be a good
idea, especially as we now need to revisit the code for reasons like
large folio support. (And Christoph seems to support this too -- see
https://lore.kernel.org/linux-block/20250715062112.GC18672@lst.de/)
But, as suggested at
https://lore.kernel.org/linux-block/20250717044342.GA26995@lst.de/ it
should also be okay to reorganize things so that the regular
submit_bio() does not support the fallback, and upper layers have to
call a different function blk_crypto_fallback_submit_bio() if they want
the fallback. I don't think that would help with the splitting issue
directly, but perhaps we could make the filesystems just not submit bios
that would need splitting by blk-crypto-fallback, which would solve the
issue.
- Eric
* Re: [PATCH v3 0/7] Fix bio splitting by the crypto fallback code
2025-07-18 16:50 ` Eric Biggers
@ 2025-07-21 6:01 ` Christoph Hellwig
0 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-07-21 6:01 UTC (permalink / raw)
To: Eric Biggers; +Cc: Bart Van Assche, Christoph Hellwig, Jens Axboe, linux-block
On Fri, Jul 18, 2025 at 09:50:24AM -0700, Eric Biggers wrote:
> But, as suggested at
> https://lore.kernel.org/linux-block/20250717044342.GA26995@lst.de/ it
> should also be okay to reorganize things so that the regular
> submit_bio() does not support the fallback, and upper layers have to
> call a different function blk_crypto_fallback_submit_bio() if they want
> the fallback. I don't think that would help with the splitting issue
> directly,
It actually does. Splitting before submit_bio will automatically
get the ordering right. It is something a lot of file systems already
do, and it is trivial to do.
> but perhaps we could make the filesystems just not submit bios
> that would need splitting by blk-crypto-fallback, which would solve the
> issue.
I think that distinction is a bit fuzzy. When stacking the blk-crypto
fallback helpers above submit_bio, the split is formally done by the
file systems, just using generic library helpers. As modern file systems
usually build bios to the full possible size and then split them based
on limits, I suspect that scheme is also the best here. There might be
exceptions where just looking at the limits and not building the bio
any bigger is more suitable, but I suspect they'd be the unusual case.
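[Editor's note] The build-big-then-split scheme described above can be modeled with a toy splitter; the helper name and sector limit are illustrative, not kernel APIs. Splitting before submission naturally yields the pieces in ascending LBA order, which is what zoned writes require:

```python
def split_to_limits(start_lba, nr_sectors, max_sectors):
    """Yield (lba, length) pieces in ascending LBA order, mimicking what
    bio_split_to_limits() does for a bio that exceeds the queue limits."""
    lba, remaining = start_lba, nr_sectors
    while remaining:
        n = min(remaining, max_sectors)
        yield lba, n
        lba += n
        remaining -= n

# A 100-sector write against a 32-sector limit splits into ordered pieces:
pieces = list(split_to_limits(start_lba=0, nr_sectors=100, max_sectors=32))
assert pieces == [(0, 32), (32, 32), (64, 32), (96, 4)]

# Each piece starts exactly where the previous one ended - no reordering.
assert all(a[0] + a[1] == b[0] for a, b in zip(pieces, pieces[1:]))
```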