* [PATCH 4/9] blk-crypto: submit the encrypted bio in blk_crypto_fallback_bio_prep
2025-12-10 15:23 move blk-crypto-fallback to sit above the block layer v2 Christoph Hellwig
@ 2025-12-10 15:23 ` Christoph Hellwig
2025-12-13 0:48 ` Eric Biggers
0 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-10 15:23 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
Restructure blk_crypto_fallback_bio_prep so that it always submits the
encrypted bio instead of passing it back to the caller.  This simplifies
the calling conventions for blk_crypto_fallback_bio_prep and
blk_crypto_bio_prep: they never have to return a bio, and can use a true
return value to indicate that the caller should submit the bio, and
false to indicate that the blk-crypto code consumed it.
The submission is handled by the block layer through the on-stack bio
list in the current task_struct and thus does not cause additional stack
usage or major overhead.  It also prepares for the following
optimizations and fixes to the blk-crypto fallback write path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-core.c | 2 +-
block/blk-crypto-fallback.c | 69 +++++++++++++++++--------------------
block/blk-crypto-internal.h | 19 ++++------
block/blk-crypto.c | 53 ++++++++++++++--------------
4 files changed, 66 insertions(+), 77 deletions(-)
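The caller-side convention after this change, distilled (a sketch only,
not the literal __submit_bio code, which also sets up a plug):

	/*
	 * A false return means blk-crypto consumed the bio: either the
	 * fallback submitted encrypted clone bios for a write, or the bio
	 * was completed with an error.
	 */
	if (unlikely(!blk_crypto_bio_prep(bio)))
		return;
	/* ... otherwise pass the bio on to the driver as before ... */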
diff --git a/block/blk-core.c b/block/blk-core.c
index 8387fe50ea15..f87e5f1a101f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -628,7 +628,7 @@ static void __submit_bio(struct bio *bio)
/* If plug is not used, add new plug here to cache nsecs time. */
struct blk_plug plug;
- if (unlikely(!blk_crypto_bio_prep(&bio)))
+ if (unlikely(!blk_crypto_bio_prep(bio)))
return;
blk_start_plug(&plug);
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 86b27f96051a..3ac06c722cac 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -250,14 +250,14 @@ static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
/*
* The crypto API fallback's encryption routine.
- * Allocate a bounce bio for encryption, encrypt the input bio using crypto API,
- * and replace *bio_ptr with the bounce bio. May split input bio if it's too
- * large. Returns true on success. Returns false and sets bio->bi_status on
- * error.
+ *
+ * Allocate one or more bios for encryption, encrypt the input bio using the
+ * crypto API, and submit the encrypted bios. Sets bio->bi_status and
+ * completes the source bio on error
*/
-static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
+static void blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
{
- struct bio *src_bio, *enc_bio;
+ struct bio *enc_bio;
struct bio_crypt_ctx *bc;
struct blk_crypto_keyslot *slot;
int data_unit_size;
@@ -267,14 +267,12 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
struct scatterlist src, dst;
union blk_crypto_iv iv;
unsigned int i, j;
- bool ret = false;
blk_status_t blk_st;
/* Split the bio if it's too big for single page bvec */
- if (!blk_crypto_fallback_split_bio_if_needed(bio_ptr))
- return false;
+ if (!blk_crypto_fallback_split_bio_if_needed(&src_bio))
+ goto out_endio;
- src_bio = *bio_ptr;
bc = src_bio->bi_crypt_context;
data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
@@ -282,7 +280,7 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
enc_bio = blk_crypto_fallback_clone_bio(src_bio);
if (!enc_bio) {
src_bio->bi_status = BLK_STS_RESOURCE;
- return false;
+ goto out_endio;
}
/*
@@ -345,25 +343,23 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
enc_bio->bi_private = src_bio;
enc_bio->bi_end_io = blk_crypto_fallback_encrypt_endio;
- *bio_ptr = enc_bio;
- ret = true;
-
- enc_bio = NULL;
- goto out_free_ciph_req;
+ skcipher_request_free(ciph_req);
+ blk_crypto_put_keyslot(slot);
+ submit_bio(enc_bio);
+ return;
out_free_bounce_pages:
while (i > 0)
mempool_free(enc_bio->bi_io_vec[--i].bv_page,
blk_crypto_bounce_page_pool);
-out_free_ciph_req:
skcipher_request_free(ciph_req);
out_release_keyslot:
blk_crypto_put_keyslot(slot);
out_put_enc_bio:
- if (enc_bio)
- bio_uninit(enc_bio);
+ bio_uninit(enc_bio);
kfree(enc_bio);
- return ret;
+out_endio:
+ bio_endio(src_bio);
}
/*
@@ -466,33 +462,30 @@ static void blk_crypto_fallback_decrypt_endio(struct bio *bio)
/**
* blk_crypto_fallback_bio_prep - Prepare a bio to use fallback en/decryption
+ * @bio: bio to prepare
*
- * @bio_ptr: pointer to the bio to prepare
- *
- * If bio is doing a WRITE operation, this splits the bio into two parts if it's
- * too big (see blk_crypto_fallback_split_bio_if_needed()). It then allocates a
- * bounce bio for the first part, encrypts it, and updates bio_ptr to point to
- * the bounce bio.
+ * If bio is doing a WRITE operation, allocate one or more bios to contain the
+ * encrypted payload and submit them.
*
- * For a READ operation, we mark the bio for decryption by using bi_private and
+ * For a READ operation, mark the bio for decryption by using bi_private and
* bi_end_io.
*
- * In either case, this function will make the bio look like a regular bio (i.e.
- * as if no encryption context was ever specified) for the purposes of the rest
- * of the stack except for blk-integrity (blk-integrity and blk-crypto are not
- * currently supported together).
+ * In either case, this function will make the submitted bio look like a regular
+ * bio (i.e. as if no encryption context was ever specified) for the purposes of
+ * the rest of the stack except for blk-integrity (blk-integrity and blk-crypto
+ * are not currently supported together).
*
- * Return: true on success. Sets bio->bi_status and returns false on error.
+ * Return: true if @bio should be submitted to the driver by the caller, else
+ * false. Sets bio->bi_status, calls bio_endio and returns false on error.
*/
-bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
+bool blk_crypto_fallback_bio_prep(struct bio *bio)
{
- struct bio *bio = *bio_ptr;
struct bio_crypt_ctx *bc = bio->bi_crypt_context;
struct bio_fallback_crypt_ctx *f_ctx;
if (WARN_ON_ONCE(!tfms_inited[bc->bc_key->crypto_cfg.crypto_mode])) {
/* User didn't call blk_crypto_start_using_key() first */
- bio->bi_status = BLK_STS_IOERR;
+ bio_io_error(bio);
return false;
}
@@ -502,8 +495,10 @@ bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
return false;
}
- if (bio_data_dir(bio) == WRITE)
- return blk_crypto_fallback_encrypt_bio(bio_ptr);
+ if (bio_data_dir(bio) == WRITE) {
+ blk_crypto_fallback_encrypt_bio(bio);
+ return false;
+ }
/*
* bio READ case: Set up a f_ctx in the bio's bi_private and set the
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index ccf6dff6ff6b..d65023120341 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -165,11 +165,11 @@ static inline void bio_crypt_do_front_merge(struct request *rq,
#endif
}
-bool __blk_crypto_bio_prep(struct bio **bio_ptr);
-static inline bool blk_crypto_bio_prep(struct bio **bio_ptr)
+bool __blk_crypto_bio_prep(struct bio *bio);
+static inline bool blk_crypto_bio_prep(struct bio *bio)
{
- if (bio_has_crypt_ctx(*bio_ptr))
- return __blk_crypto_bio_prep(bio_ptr);
+ if (bio_has_crypt_ctx(bio))
+ return __blk_crypto_bio_prep(bio);
return true;
}
@@ -215,12 +215,12 @@ static inline int blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
return 0;
}
+bool blk_crypto_fallback_bio_prep(struct bio *bio);
+
#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK
int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num);
-bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr);
-
int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key);
#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
@@ -232,13 +232,6 @@ blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num)
return -ENOPKG;
}
-static inline bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
-{
- pr_warn_once("crypto API fallback disabled; failing request.\n");
- (*bio_ptr)->bi_status = BLK_STS_NOTSUPP;
- return false;
-}
-
static inline int
blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
{
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 3e7bf1974cbd..69e869d1c9bd 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -260,54 +260,55 @@ void __blk_crypto_free_request(struct request *rq)
/**
* __blk_crypto_bio_prep - Prepare bio for inline encryption
- *
- * @bio_ptr: pointer to original bio pointer
+ * @bio: bio to prepare
*
* If the bio crypt context provided for the bio is supported by the underlying
* device's inline encryption hardware, do nothing.
*
* Otherwise, try to perform en/decryption for this bio by falling back to the
- * kernel crypto API. When the crypto API fallback is used for encryption,
- * blk-crypto may choose to split the bio into 2 - the first one that will
- * continue to be processed and the second one that will be resubmitted via
- * submit_bio_noacct. A bounce bio will be allocated to encrypt the contents
- * of the aforementioned "first one", and *bio_ptr will be updated to this
- * bounce bio.
+ * kernel crypto API. For encryption this means submitting newly allocated
+ * bios for the encrypted payload while keeping back the source bio until they
+ * complete, while for reads the decryption happens in-place by a hooked in
+ * completion handler.
*
* Caller must ensure bio has bio_crypt_ctx.
*
- * Return: true on success; false on error (and bio->bi_status will be set
- * appropriately, and bio_endio() will have been called so bio
- * submission should abort).
+ * Return: true if @bio should be submitted to the driver by the caller, else
+ * false. Sets bio->bi_status, calls bio_endio and returns false on error.
*/
-bool __blk_crypto_bio_prep(struct bio **bio_ptr)
+bool __blk_crypto_bio_prep(struct bio *bio)
{
- struct bio *bio = *bio_ptr;
const struct blk_crypto_key *bc_key = bio->bi_crypt_context->bc_key;
+ struct block_device *bdev = bio->bi_bdev;
/* Error if bio has no data. */
if (WARN_ON_ONCE(!bio_has_data(bio))) {
- bio->bi_status = BLK_STS_IOERR;
- goto fail;
+ bio_io_error(bio);
+ return false;
}
if (!bio_crypt_check_alignment(bio)) {
bio->bi_status = BLK_STS_INVAL;
- goto fail;
+ bio_endio(bio);
+ return false;
}
/*
- * Success if device supports the encryption context, or if we succeeded
- * in falling back to the crypto API.
+ * If the device does not natively support the encryption context, try to use
+ * the fallback if available.
*/
- if (blk_crypto_config_supported_natively(bio->bi_bdev,
- &bc_key->crypto_cfg))
- return true;
- if (blk_crypto_fallback_bio_prep(bio_ptr))
- return true;
-fail:
- bio_endio(*bio_ptr);
- return false;
+ if (!blk_crypto_config_supported_natively(bdev, &bc_key->crypto_cfg)) {
+ if (!IS_ENABLED(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK)) {
+ pr_warn_once("%pg: crypto API fallback disabled; failing request.\n",
+ bdev);
+ bio->bi_status = BLK_STS_NOTSUPP;
+ bio_endio(bio);
+ return false;
+ }
+ return blk_crypto_fallback_bio_prep(bio);
+ }
+
+ return true;
}
int __blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
--
2.47.3
* Re: [PATCH 4/9] blk-crypto: submit the encrypted bio in blk_crypto_fallback_bio_prep
2025-12-10 15:23 ` [PATCH 4/9] blk-crypto: submit the encrypted bio in blk_crypto_fallback_bio_prep Christoph Hellwig
@ 2025-12-13 0:48 ` Eric Biggers
0 siblings, 0 replies; 21+ messages in thread
From: Eric Biggers @ 2025-12-13 0:48 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, linux-fsdevel, linux-fscrypt
On Wed, Dec 10, 2025 at 04:23:33PM +0100, Christoph Hellwig wrote:
> @@ -502,8 +495,10 @@ bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
> if (!__blk_crypto_cfg_supported(blk_crypto_fallback_profile,
> &bc->bc_key->crypto_cfg)) {
> bio->bi_status = BLK_STS_NOTSUPP;
> return false;
> }
The above is missing a call to bio_endio().
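I.e., presumably this error path needs to grow something like:

		bio->bi_status = BLK_STS_NOTSUPP;
		bio_endio(bio);
		return false;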
> + * In either case, this function will make the submitted bio look like a regular
> + * bio (i.e. as if no encryption context was ever specified) for the purposes of
> + * the rest of the stack except for blk-integrity (blk-integrity and blk-crypto
> + * are not currently supported together).
Maybe "submitted bio" => "submitted bio(s)", considering that there can
be multiple. Or put this information in the preceding paragraphs that
describe the WRITE and READ cases.
Otherwise this patch looks good. I'm not 100% sure the split case still
works correctly, but it's not really important because the next patch in
the series rewrites it anyway.
- Eric
* move blk-crypto-fallback to sit above the block layer v3
@ 2025-12-17 6:06 Christoph Hellwig
2025-12-17 6:06 ` [PATCH 1/9] fscrypt: pass a real sector_t to fscrypt_zeroout_range_inline_crypt Christoph Hellwig
` (8 more replies)
0 siblings, 9 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-17 6:06 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
Hi all,
in the past we had various discussions about how doing the blk-crypto
fallback below the block layer causes all kinds of problems due to very
late bio splitting and the need to communicate features up the stack.
This series turns that call chain upside down by requiring the caller to
call into blk-crypto through a new submit_bio wrapper instead, so that
only bios using hardware inline encryption are passed through the block
layer as such.
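For fscrypt-based file systems that roughly means the submission path
changes like this (a sketch, assuming blk_crypto_submit_bio keeps
submit_bio's void (struct bio *) signature; see the
Documentation/block/inline-encryption.rst update in the diffstat):

	/* the fs builds its bio and attaches the crypt context as before */
	fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS);
	/* previously: submit_bio(bio); */
	blk_crypto_submit_bio(bio);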
While doing this I also noticed that the existing blk-crypto-fallback
code does various unprotected memory allocations, which this series
converts to mempools, or from loops of mempool allocations to the new
safe batch mempool allocator.
There might be future avenues for optimization by using high-order
folio allocations that match the file system's preferred folio size, but
for that we'd probably want a batch folio allocator first, and deferring
it also avoids scope creep.
TODO:
- what to pass to mempool_alloc bulk (patch)
- bio_has_crypt_ctx wrapper??
Changes since v2:
- drop the block split refactoring that was broken
- add a bio_crypt_ctx() helper
- add a missing bio_endio in blk_crypto_fallback_bio_prep
- fix page freeing in the error path of
__blk_crypto_fallback_encrypt_bio
- improve a few comments and commit messages
Changes since v1:
- drop the mempool bulk allocator that was merged upstream
- keep calling bio_crypt_check_alignment for the hardware crypto case
- rework the way bios are submitted earlier and reorder the series
a bit to suit this
- use struct initializers for struct fscrypt_zero_done in
fscrypt_zeroout_range_inline_crypt
- use cmpxchg to make the bi_status update in
blk_crypto_fallback_encrypt_endio safe
- rename the bio_set to match its new purpose
- remove usage of DECLARE_CRYPTO_WAIT()
- use consistent GFP flags / scope
- optimize data unit alignment checking
- update Documentation/block/inline-encryption.rst for the new
blk_crypto_submit_bio API
- optimize alignment checking and ensure it still happens for
hardware encryption
- reorder the series a bit
- improve various comments
Diffstat:
Documentation/block/inline-encryption.rst | 6
block/blk-core.c | 10
block/blk-crypto-fallback.c | 428 ++++++++++++++----------------
block/blk-crypto-internal.h | 30 --
block/blk-crypto.c | 78 +----
block/blk-merge.c | 9
fs/buffer.c | 3
fs/crypto/bio.c | 91 +++---
fs/ext4/page-io.c | 3
fs/ext4/readpage.c | 9
fs/f2fs/data.c | 4
fs/f2fs/file.c | 3
fs/iomap/direct-io.c | 3
include/linux/blk-crypto.h | 32 ++
14 files changed, 369 insertions(+), 340 deletions(-)
* [PATCH 1/9] fscrypt: pass a real sector_t to fscrypt_zeroout_range_inline_crypt
2025-12-17 6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
@ 2025-12-17 6:06 ` Christoph Hellwig
2025-12-17 6:06 ` [PATCH 2/9] fscrypt: keep multiple bios in flight in fscrypt_zeroout_range_inline_crypt Christoph Hellwig
` (7 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-17 6:06 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
While the pblk argument to fscrypt_zeroout_range_inline_crypt is
declared as a sector_t, it is actually interpreted in logical block
size units, which is highly unusual.  Switch to passing the 512-byte
units that sector_t is defined for.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
---
fs/crypto/bio.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
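The matching conversion at the fscrypt_zeroout_range call site is not
visible in this hunk; presumably it is just:

	sector_t sector = pblk << (inode->i_blkbits - SECTOR_SHIFT);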
diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index 5f5599020e94..68b0424d879a 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -48,7 +48,7 @@ bool fscrypt_decrypt_bio(struct bio *bio)
EXPORT_SYMBOL(fscrypt_decrypt_bio);
static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode,
- pgoff_t lblk, sector_t pblk,
+ pgoff_t lblk, sector_t sector,
unsigned int len)
{
const unsigned int blockbits = inode->i_blkbits;
@@ -67,8 +67,7 @@ static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode,
if (num_pages == 0) {
fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS);
- bio->bi_iter.bi_sector =
- pblk << (blockbits - SECTOR_SHIFT);
+ bio->bi_iter.bi_sector = sector;
}
ret = bio_add_page(bio, ZERO_PAGE(0), bytes_this_page, 0);
if (WARN_ON_ONCE(ret != bytes_this_page)) {
@@ -78,7 +77,7 @@ static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode,
num_pages++;
len -= blocks_this_page;
lblk += blocks_this_page;
- pblk += blocks_this_page;
+ sector += (bytes_this_page >> SECTOR_SHIFT);
if (num_pages == BIO_MAX_VECS || !len ||
!fscrypt_mergeable_bio(bio, inode, lblk)) {
err = submit_bio_wait(bio);
@@ -132,7 +131,7 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
return 0;
if (fscrypt_inode_uses_inline_crypto(inode))
- return fscrypt_zeroout_range_inline_crypt(inode, lblk, pblk,
+ return fscrypt_zeroout_range_inline_crypt(inode, lblk, sector,
len);
BUILD_BUG_ON(ARRAY_SIZE(pages) > BIO_MAX_VECS);
--
2.47.3
* [PATCH 2/9] fscrypt: keep multiple bios in flight in fscrypt_zeroout_range_inline_crypt
2025-12-17 6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
2025-12-17 6:06 ` [PATCH 1/9] fscrypt: pass a real sector_t to fscrypt_zeroout_range_inline_crypt Christoph Hellwig
@ 2025-12-17 6:06 ` Christoph Hellwig
2025-12-17 6:06 ` [PATCH 3/9] blk-crypto: add a bio_crypt_ctx() helper Christoph Hellwig
` (6 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-17 6:06 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
This should slightly improve performance for large zeroing operations,
but more importantly it prepares for the blk-crypto refactoring that
requires all fscrypt users to call submit_bio directly.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
---
fs/crypto/bio.c | 86 +++++++++++++++++++++++++++++++------------------
1 file changed, 54 insertions(+), 32 deletions(-)
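The in-flight accounting below follows the usual pattern of holding one
extra reference for the submitter; distilled (build_next_bio() stands in
for the real loop body and is hypothetical):

	struct fscrypt_zero_done done = {
		.pending = ATOMIC_INIT(1),	/* submitter's reference */
		.done = COMPLETION_INITIALIZER_ONSTACK(done.done),
	};

	while (len) {
		struct bio *bio = build_next_bio();	/* consumes len */

		atomic_inc(&done.pending);
		submit_bio(bio);	/* the end_io handler drops one ref */
	}
	fscrypt_zeroout_range_done(&done);	/* drop the submitter's ref */
	wait_for_completion(&done.done);
	return blk_status_to_errno(done.status);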
diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index 68b0424d879a..c2b3ca100f8d 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -47,49 +47,71 @@ bool fscrypt_decrypt_bio(struct bio *bio)
}
EXPORT_SYMBOL(fscrypt_decrypt_bio);
+struct fscrypt_zero_done {
+ atomic_t pending;
+ blk_status_t status;
+ struct completion done;
+};
+
+static void fscrypt_zeroout_range_done(struct fscrypt_zero_done *done)
+{
+ if (atomic_dec_and_test(&done->pending))
+ complete(&done->done);
+}
+
+static void fscrypt_zeroout_range_end_io(struct bio *bio)
+{
+ struct fscrypt_zero_done *done = bio->bi_private;
+
+ if (bio->bi_status)
+ cmpxchg(&done->status, 0, bio->bi_status);
+ fscrypt_zeroout_range_done(done);
+ bio_put(bio);
+}
+
static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode,
pgoff_t lblk, sector_t sector,
unsigned int len)
{
const unsigned int blockbits = inode->i_blkbits;
const unsigned int blocks_per_page = 1 << (PAGE_SHIFT - blockbits);
- struct bio *bio;
- int ret, err = 0;
- int num_pages = 0;
-
- /* This always succeeds since __GFP_DIRECT_RECLAIM is set. */
- bio = bio_alloc(inode->i_sb->s_bdev, BIO_MAX_VECS, REQ_OP_WRITE,
- GFP_NOFS);
+ struct fscrypt_zero_done done = {
+ .pending = ATOMIC_INIT(1),
+ .done = COMPLETION_INITIALIZER_ONSTACK(done.done),
+ };
while (len) {
- unsigned int blocks_this_page = min(len, blocks_per_page);
- unsigned int bytes_this_page = blocks_this_page << blockbits;
+ struct bio *bio;
+ unsigned int n;
- if (num_pages == 0) {
- fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS);
- bio->bi_iter.bi_sector = sector;
- }
- ret = bio_add_page(bio, ZERO_PAGE(0), bytes_this_page, 0);
- if (WARN_ON_ONCE(ret != bytes_this_page)) {
- err = -EIO;
- goto out;
- }
- num_pages++;
- len -= blocks_this_page;
- lblk += blocks_this_page;
- sector += (bytes_this_page >> SECTOR_SHIFT);
- if (num_pages == BIO_MAX_VECS || !len ||
- !fscrypt_mergeable_bio(bio, inode, lblk)) {
- err = submit_bio_wait(bio);
- if (err)
- goto out;
- bio_reset(bio, inode->i_sb->s_bdev, REQ_OP_WRITE);
- num_pages = 0;
+ bio = bio_alloc(inode->i_sb->s_bdev, BIO_MAX_VECS, REQ_OP_WRITE,
+ GFP_NOFS);
+ bio->bi_iter.bi_sector = sector;
+ bio->bi_private = &done;
+ bio->bi_end_io = fscrypt_zeroout_range_end_io;
+ fscrypt_set_bio_crypt_ctx(bio, inode, lblk, GFP_NOFS);
+
+ for (n = 0; n < BIO_MAX_VECS; n++) {
+ unsigned int blocks_this_page =
+ min(len, blocks_per_page);
+ unsigned int bytes_this_page = blocks_this_page << blockbits;
+
+ __bio_add_page(bio, ZERO_PAGE(0), bytes_this_page, 0);
+ len -= blocks_this_page;
+ lblk += blocks_this_page;
+ sector += (bytes_this_page >> SECTOR_SHIFT);
+ if (!len || !fscrypt_mergeable_bio(bio, inode, lblk))
+ break;
}
+
+ atomic_inc(&done.pending);
+ submit_bio(bio);
}
-out:
- bio_put(bio);
- return err;
+
+ fscrypt_zeroout_range_done(&done);
+
+ wait_for_completion(&done.done);
+ return blk_status_to_errno(done.status);
}
/**
--
2.47.3
* [PATCH 3/9] blk-crypto: add a bio_crypt_ctx() helper
2025-12-17 6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
2025-12-17 6:06 ` [PATCH 1/9] fscrypt: pass a real sector_t to fscrypt_zeroout_range_inline_crypt Christoph Hellwig
2025-12-17 6:06 ` [PATCH 2/9] fscrypt: keep multiple bios in flight in fscrypt_zeroout_range_inline_crypt Christoph Hellwig
@ 2025-12-17 6:06 ` Christoph Hellwig
2025-12-19 19:50 ` Eric Biggers
2025-12-17 6:06 ` [PATCH 4/9] blk-crypto: submit the encrypted bio in blk_crypto_fallback_bio_prep Christoph Hellwig
` (5 subsequent siblings)
8 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-17 6:06 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
This returns the bio_crypt_ctx if CONFIG_BLK_INLINE_ENCRYPTION is enabled
and a crypto context is attached to the bio, else NULL.
The use case is to allow safely dereferencing the context in common
code without needing an #ifdef on CONFIG_BLK_INLINE_ENCRYPTION.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
include/linux/blk-crypto.h | 10 ++++++++++
1 file changed, 10 insertions(+)
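A typical use in common code would then look roughly like this (a
sketch; handle_encrypted_bio() is a hypothetical consumer):

	struct bio_crypt_ctx *bc = bio_crypt_ctx(bio);

	if (bc) {
		/*
		 * With CONFIG_BLK_INLINE_ENCRYPTION=n the stub returns NULL,
		 * so the compiler drops this branch and no #ifdef is needed
		 * to dereference bc here.
		 */
		handle_encrypted_bio(bio, bc->bc_key);
	}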
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
index 58b0c5254a67..eb80df19be68 100644
--- a/include/linux/blk-crypto.h
+++ b/include/linux/blk-crypto.h
@@ -132,6 +132,11 @@ static inline bool bio_has_crypt_ctx(struct bio *bio)
return bio->bi_crypt_context;
}
+static inline struct bio_crypt_ctx *bio_crypt_ctx(struct bio *bio)
+{
+ return bio->bi_crypt_context;
+}
+
void bio_crypt_set_ctx(struct bio *bio, const struct blk_crypto_key *key,
const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
gfp_t gfp_mask);
@@ -169,6 +174,11 @@ static inline bool bio_has_crypt_ctx(struct bio *bio)
return false;
}
+static inline struct bio_crypt_ctx *bio_crypt_ctx(struct bio *bio)
+{
+ return NULL;
+}
+
#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
int __bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask);
--
2.47.3
* [PATCH 4/9] blk-crypto: submit the encrypted bio in blk_crypto_fallback_bio_prep
2025-12-17 6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
` (2 preceding siblings ...)
2025-12-17 6:06 ` [PATCH 3/9] blk-crypto: add a bio_crypt_ctx() helper Christoph Hellwig
@ 2025-12-17 6:06 ` Christoph Hellwig
2025-12-19 19:50 ` Eric Biggers
2025-12-17 6:06 ` [PATCH 5/9] blk-crypto: optimize bio splitting in blk_crypto_fallback_encrypt_bio Christoph Hellwig
` (4 subsequent siblings)
8 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-17 6:06 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
Restructure blk_crypto_fallback_bio_prep so that it always submits the
encrypted bio instead of passing it back to the caller.  This simplifies
the calling conventions for blk_crypto_fallback_bio_prep and
blk_crypto_bio_prep: they never have to return a bio, and can use a true
return value to indicate that the caller should submit the bio, and
false to indicate that the blk-crypto code consumed it.
The submission is handled by the block layer through the on-stack bio
list in the current task_struct and thus does not cause additional stack
usage or major overhead.  It also prepares for the following
optimizations and fixes to the blk-crypto fallback write path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-core.c | 2 +-
block/blk-crypto-fallback.c | 70 +++++++++++++++++--------------------
block/blk-crypto-internal.h | 19 ++++------
block/blk-crypto.c | 53 ++++++++++++++--------------
4 files changed, 67 insertions(+), 77 deletions(-)
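For reference, the "no additional stack usage" part relies on the block
layer's existing recursion avoidance, roughly (a simplified sketch of
the generic mechanism, not the literal blk-core.c code):

	/* in generic bio submission */
	if (current->bio_list) {
		/*
		 * submit_bio was called from inside another submission on
		 * this task: queue the bio on the on-stack list and let the
		 * outermost invocation process it iteratively instead of
		 * recursing.
		 */
		bio_list_add(&current->bio_list[0], bio);
		return;
	}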
diff --git a/block/blk-core.c b/block/blk-core.c
index 8387fe50ea15..f87e5f1a101f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -628,7 +628,7 @@ static void __submit_bio(struct bio *bio)
/* If plug is not used, add new plug here to cache nsecs time. */
struct blk_plug plug;
- if (unlikely(!blk_crypto_bio_prep(&bio)))
+ if (unlikely(!blk_crypto_bio_prep(bio)))
return;
blk_start_plug(&plug);
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 86b27f96051a..cc9e90be23b7 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -250,14 +250,14 @@ static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
/*
* The crypto API fallback's encryption routine.
- * Allocate a bounce bio for encryption, encrypt the input bio using crypto API,
- * and replace *bio_ptr with the bounce bio. May split input bio if it's too
- * large. Returns true on success. Returns false and sets bio->bi_status on
- * error.
+ *
+ * Allocate one or more bios for encryption, encrypt the input bio using the
+ * crypto API, and submit the encrypted bios. Sets bio->bi_status and
+ * completes the source bio on error
*/
-static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
+static void blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
{
- struct bio *src_bio, *enc_bio;
+ struct bio *enc_bio;
struct bio_crypt_ctx *bc;
struct blk_crypto_keyslot *slot;
int data_unit_size;
@@ -267,14 +267,12 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
struct scatterlist src, dst;
union blk_crypto_iv iv;
unsigned int i, j;
- bool ret = false;
blk_status_t blk_st;
/* Split the bio if it's too big for single page bvec */
- if (!blk_crypto_fallback_split_bio_if_needed(bio_ptr))
- return false;
+ if (!blk_crypto_fallback_split_bio_if_needed(&src_bio))
+ goto out_endio;
- src_bio = *bio_ptr;
bc = src_bio->bi_crypt_context;
data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
@@ -282,7 +280,7 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
enc_bio = blk_crypto_fallback_clone_bio(src_bio);
if (!enc_bio) {
src_bio->bi_status = BLK_STS_RESOURCE;
- return false;
+ goto out_endio;
}
/*
@@ -345,25 +343,23 @@ static bool blk_crypto_fallback_encrypt_bio(struct bio **bio_ptr)
enc_bio->bi_private = src_bio;
enc_bio->bi_end_io = blk_crypto_fallback_encrypt_endio;
- *bio_ptr = enc_bio;
- ret = true;
-
- enc_bio = NULL;
- goto out_free_ciph_req;
+ skcipher_request_free(ciph_req);
+ blk_crypto_put_keyslot(slot);
+ submit_bio(enc_bio);
+ return;
out_free_bounce_pages:
while (i > 0)
mempool_free(enc_bio->bi_io_vec[--i].bv_page,
blk_crypto_bounce_page_pool);
-out_free_ciph_req:
skcipher_request_free(ciph_req);
out_release_keyslot:
blk_crypto_put_keyslot(slot);
out_put_enc_bio:
- if (enc_bio)
- bio_uninit(enc_bio);
+ bio_uninit(enc_bio);
kfree(enc_bio);
- return ret;
+out_endio:
+ bio_endio(src_bio);
}
/*
@@ -466,44 +462,44 @@ static void blk_crypto_fallback_decrypt_endio(struct bio *bio)
/**
* blk_crypto_fallback_bio_prep - Prepare a bio to use fallback en/decryption
+ * @bio: bio to prepare
*
- * @bio_ptr: pointer to the bio to prepare
- *
- * If bio is doing a WRITE operation, this splits the bio into two parts if it's
- * too big (see blk_crypto_fallback_split_bio_if_needed()). It then allocates a
- * bounce bio for the first part, encrypts it, and updates bio_ptr to point to
- * the bounce bio.
+ * If bio is doing a WRITE operation, allocate one or more bios to contain the
+ * encrypted payload and submit them.
*
- * For a READ operation, we mark the bio for decryption by using bi_private and
+ * For a READ operation, mark the bio for decryption by using bi_private and
* bi_end_io.
*
- * In either case, this function will make the bio look like a regular bio (i.e.
- * as if no encryption context was ever specified) for the purposes of the rest
- * of the stack except for blk-integrity (blk-integrity and blk-crypto are not
- * currently supported together).
+ * In either case, this function will make the submitted bio(s) look like
+ * regular bios (i.e. as if no encryption context was ever specified) for the
+ * purposes of the rest of the stack except for blk-integrity (blk-integrity and
+ * blk-crypto are not currently supported together).
*
- * Return: true on success. Sets bio->bi_status and returns false on error.
+ * Return: true if @bio should be submitted to the driver by the caller, else
+ * false. Sets bio->bi_status, calls bio_endio and returns false on error.
*/
-bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
+bool blk_crypto_fallback_bio_prep(struct bio *bio)
{
- struct bio *bio = *bio_ptr;
struct bio_crypt_ctx *bc = bio->bi_crypt_context;
struct bio_fallback_crypt_ctx *f_ctx;
if (WARN_ON_ONCE(!tfms_inited[bc->bc_key->crypto_cfg.crypto_mode])) {
/* User didn't call blk_crypto_start_using_key() first */
- bio->bi_status = BLK_STS_IOERR;
+ bio_io_error(bio);
return false;
}
if (!__blk_crypto_cfg_supported(blk_crypto_fallback_profile,
&bc->bc_key->crypto_cfg)) {
bio->bi_status = BLK_STS_NOTSUPP;
+ bio_endio(bio);
return false;
}
- if (bio_data_dir(bio) == WRITE)
- return blk_crypto_fallback_encrypt_bio(bio_ptr);
+ if (bio_data_dir(bio) == WRITE) {
+ blk_crypto_fallback_encrypt_bio(bio);
+ return false;
+ }
/*
* bio READ case: Set up a f_ctx in the bio's bi_private and set the
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index ccf6dff6ff6b..d65023120341 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -165,11 +165,11 @@ static inline void bio_crypt_do_front_merge(struct request *rq,
#endif
}
-bool __blk_crypto_bio_prep(struct bio **bio_ptr);
-static inline bool blk_crypto_bio_prep(struct bio **bio_ptr)
+bool __blk_crypto_bio_prep(struct bio *bio);
+static inline bool blk_crypto_bio_prep(struct bio *bio)
{
- if (bio_has_crypt_ctx(*bio_ptr))
- return __blk_crypto_bio_prep(bio_ptr);
+ if (bio_has_crypt_ctx(bio))
+ return __blk_crypto_bio_prep(bio);
return true;
}
@@ -215,12 +215,12 @@ static inline int blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
return 0;
}
+bool blk_crypto_fallback_bio_prep(struct bio *bio);
+
#ifdef CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK
int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num);
-bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr);
-
int blk_crypto_fallback_evict_key(const struct blk_crypto_key *key);
#else /* CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK */
@@ -232,13 +232,6 @@ blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num)
return -ENOPKG;
}
-static inline bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
-{
- pr_warn_once("crypto API fallback disabled; failing request.\n");
- (*bio_ptr)->bi_status = BLK_STS_NOTSUPP;
- return false;
-}
-
static inline int
blk_crypto_fallback_evict_key(const struct blk_crypto_key *key)
{
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 3e7bf1974cbd..69e869d1c9bd 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -260,54 +260,55 @@ void __blk_crypto_free_request(struct request *rq)
/**
* __blk_crypto_bio_prep - Prepare bio for inline encryption
- *
- * @bio_ptr: pointer to original bio pointer
+ * @bio: bio to prepare
*
* If the bio crypt context provided for the bio is supported by the underlying
* device's inline encryption hardware, do nothing.
*
* Otherwise, try to perform en/decryption for this bio by falling back to the
- * kernel crypto API. When the crypto API fallback is used for encryption,
- * blk-crypto may choose to split the bio into 2 - the first one that will
- * continue to be processed and the second one that will be resubmitted via
- * submit_bio_noacct. A bounce bio will be allocated to encrypt the contents
- * of the aforementioned "first one", and *bio_ptr will be updated to this
- * bounce bio.
+ * kernel crypto API. For encryption this means submitting newly allocated
+ * bios for the encrypted payload while keeping back the source bio until they
+ * complete, while for reads the decryption happens in-place by a hooked in
+ * completion handler.
*
* Caller must ensure bio has bio_crypt_ctx.
*
- * Return: true on success; false on error (and bio->bi_status will be set
- * appropriately, and bio_endio() will have been called so bio
- * submission should abort).
+ * Return: true if @bio should be submitted to the driver by the caller, else
+ * false. Sets bio->bi_status, calls bio_endio and returns false on error.
*/
-bool __blk_crypto_bio_prep(struct bio **bio_ptr)
+bool __blk_crypto_bio_prep(struct bio *bio)
{
- struct bio *bio = *bio_ptr;
const struct blk_crypto_key *bc_key = bio->bi_crypt_context->bc_key;
+ struct block_device *bdev = bio->bi_bdev;
/* Error if bio has no data. */
if (WARN_ON_ONCE(!bio_has_data(bio))) {
- bio->bi_status = BLK_STS_IOERR;
- goto fail;
+ bio_io_error(bio);
+ return false;
}
if (!bio_crypt_check_alignment(bio)) {
bio->bi_status = BLK_STS_INVAL;
- goto fail;
+ bio_endio(bio);
+ return false;
}
/*
- * Success if device supports the encryption context, or if we succeeded
- * in falling back to the crypto API.
+ * If the device does not natively support the encryption context, try to use
+ * the fallback if available.
*/
- if (blk_crypto_config_supported_natively(bio->bi_bdev,
- &bc_key->crypto_cfg))
- return true;
- if (blk_crypto_fallback_bio_prep(bio_ptr))
- return true;
-fail:
- bio_endio(*bio_ptr);
- return false;
+ if (!blk_crypto_config_supported_natively(bdev, &bc_key->crypto_cfg)) {
+ if (!IS_ENABLED(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK)) {
+ pr_warn_once("%pg: crypto API fallback disabled; failing request.\n",
+ bdev);
+ bio->bi_status = BLK_STS_NOTSUPP;
+ bio_endio(bio);
+ return false;
+ }
+ return blk_crypto_fallback_bio_prep(bio);
+ }
+
+ return true;
}
int __blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
--
2.47.3
* [PATCH 5/9] blk-crypto: optimize bio splitting in blk_crypto_fallback_encrypt_bio
2025-12-17 6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
` (3 preceding siblings ...)
2025-12-17 6:06 ` [PATCH 4/9] blk-crypto: submit the encrypted bio in blk_crypto_fallback_bio_prep Christoph Hellwig
@ 2025-12-17 6:06 ` Christoph Hellwig
2025-12-19 20:08 ` Eric Biggers
2025-12-17 6:06 ` [PATCH 6/9] blk-crypto: use on-stack skcipher requests for fallback en/decryption Christoph Hellwig
` (3 subsequent siblings)
8 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-17 6:06 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
The current code in blk_crypto_fallback_encrypt_bio is inefficient and
prone to deadlocks under memory pressure: it first walks the passed-in
plaintext bio to see how much of it can fit into a single encrypted bio
using up to BIO_MAX_VECS PAGE_SIZE segments, and then allocates a
plaintext clone that fits that size, only to allocate another bio for
the ciphertext later.  While the plaintext clone uses a bioset to avoid
deadlocks when allocations could fail, the ciphertext one uses
bio_kmalloc, which is a no-go in the file system I/O path.
Switch blk_crypto_fallback_encrypt_bio to walk the source plaintext bio
while consuming bi_iter without cloning it, and instead allocate a
ciphertext bio at the beginning and whenever we fill up the previous
one.  The existing bio_set for the plaintext clones is reused for the
ciphertext bios to remove the deadlock risk.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
---
block/blk-crypto-fallback.c | 164 +++++++++++++++---------------------
1 file changed, 66 insertions(+), 98 deletions(-)
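The resulting loop structure, distilled from the diff below (error
handling and the per-data-unit encryption omitted; variables as declared
in the function):

	enc_bio = blk_crypto_alloc_enc_bio(src_bio, nr_segs);
	for (;;) {
		struct bio_vec bv = bio_iter_iovec(src_bio, src_bio->bi_iter);

		/* ... encrypt bv into a bounce page added to enc_bio ... */

		bio_advance_iter_single(src_bio, &src_bio->bi_iter, bv.bv_len);
		if (!src_bio->bi_iter.bi_size)
			break;
		nr_segs--;
		if (++enc_idx == enc_bio->bi_max_vecs) {
			/* keep src_bio alive until this chunk completes */
			bio_inc_remaining(src_bio);
			submit_bio(enc_bio);
			enc_bio = blk_crypto_alloc_enc_bio(src_bio, nr_segs);
			enc_idx = 0;
		}
	}
	submit_bio(enc_bio);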
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index cc9e90be23b7..59441cf7273c 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -81,7 +81,7 @@ static struct blk_crypto_fallback_keyslot {
static struct blk_crypto_profile *blk_crypto_fallback_profile;
static struct workqueue_struct *blk_crypto_wq;
static mempool_t *blk_crypto_bounce_page_pool;
-static struct bio_set crypto_bio_split;
+static struct bio_set enc_bio_set;
/*
* This is the key we set when evicting a keyslot. This *should* be the all 0's
@@ -150,37 +150,30 @@ static void blk_crypto_fallback_encrypt_endio(struct bio *enc_bio)
mempool_free(enc_bio->bi_io_vec[i].bv_page,
blk_crypto_bounce_page_pool);
- src_bio->bi_status = enc_bio->bi_status;
+ if (enc_bio->bi_status)
+ cmpxchg(&src_bio->bi_status, 0, enc_bio->bi_status);
- bio_uninit(enc_bio);
- kfree(enc_bio);
+ bio_put(enc_bio);
bio_endio(src_bio);
}
-static struct bio *blk_crypto_fallback_clone_bio(struct bio *bio_src)
+static struct bio *blk_crypto_alloc_enc_bio(struct bio *bio_src,
+ unsigned int nr_segs)
{
- unsigned int nr_segs = bio_segments(bio_src);
- struct bvec_iter iter;
- struct bio_vec bv;
struct bio *bio;
- bio = bio_kmalloc(nr_segs, GFP_NOIO);
- if (!bio)
- return NULL;
- bio_init_inline(bio, bio_src->bi_bdev, nr_segs, bio_src->bi_opf);
+ nr_segs = min(nr_segs, BIO_MAX_VECS);
+ bio = bio_alloc_bioset(bio_src->bi_bdev, nr_segs, bio_src->bi_opf,
+ GFP_NOIO, &enc_bio_set);
if (bio_flagged(bio_src, BIO_REMAPPED))
bio_set_flag(bio, BIO_REMAPPED);
+ bio->bi_private = bio_src;
+ bio->bi_end_io = blk_crypto_fallback_encrypt_endio;
bio->bi_ioprio = bio_src->bi_ioprio;
bio->bi_write_hint = bio_src->bi_write_hint;
bio->bi_write_stream = bio_src->bi_write_stream;
bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector;
- bio->bi_iter.bi_size = bio_src->bi_iter.bi_size;
-
- bio_for_each_segment(bv, bio_src, iter)
- bio->bi_io_vec[bio->bi_vcnt++] = bv;
-
bio_clone_blkg_association(bio, bio_src);
-
return bio;
}
@@ -208,32 +201,6 @@ blk_crypto_fallback_alloc_cipher_req(struct blk_crypto_keyslot *slot,
return true;
}
-static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
-{
- struct bio *bio = *bio_ptr;
- unsigned int i = 0;
- unsigned int num_sectors = 0;
- struct bio_vec bv;
- struct bvec_iter iter;
-
- bio_for_each_segment(bv, bio, iter) {
- num_sectors += bv.bv_len >> SECTOR_SHIFT;
- if (++i == BIO_MAX_VECS)
- break;
- }
-
- if (num_sectors < bio_sectors(bio)) {
- bio = bio_submit_split_bioset(bio, num_sectors,
- &crypto_bio_split);
- if (!bio)
- return false;
-
- *bio_ptr = bio;
- }
-
- return true;
-}
-
union blk_crypto_iv {
__le64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
u8 bytes[BLK_CRYPTO_MAX_IV_SIZE];
@@ -257,46 +224,32 @@ static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
*/
static void blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
{
- struct bio *enc_bio;
- struct bio_crypt_ctx *bc;
- struct blk_crypto_keyslot *slot;
- int data_unit_size;
+ struct bio_crypt_ctx *bc = src_bio->bi_crypt_context;
+ int data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
+ unsigned int nr_segs = bio_segments(src_bio);
struct skcipher_request *ciph_req = NULL;
+ struct blk_crypto_keyslot *slot;
DECLARE_CRYPTO_WAIT(wait);
u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
struct scatterlist src, dst;
union blk_crypto_iv iv;
- unsigned int i, j;
- blk_status_t blk_st;
-
- /* Split the bio if it's too big for single page bvec */
- if (!blk_crypto_fallback_split_bio_if_needed(&src_bio))
- goto out_endio;
-
- bc = src_bio->bi_crypt_context;
- data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
-
- /* Allocate bounce bio for encryption */
- enc_bio = blk_crypto_fallback_clone_bio(src_bio);
- if (!enc_bio) {
- src_bio->bi_status = BLK_STS_RESOURCE;
- goto out_endio;
- }
+ unsigned int enc_idx;
+ struct bio *enc_bio;
+ blk_status_t status;
+ unsigned int j;
/*
* Get a blk-crypto-fallback keyslot that contains a crypto_skcipher for
* this bio's algorithm and key.
*/
- blk_st = blk_crypto_get_keyslot(blk_crypto_fallback_profile,
+ status = blk_crypto_get_keyslot(blk_crypto_fallback_profile,
bc->bc_key, &slot);
- if (blk_st != BLK_STS_OK) {
- src_bio->bi_status = blk_st;
- goto out_put_enc_bio;
- }
+ if (status != BLK_STS_OK)
+ goto out_endio;
/* and then allocate an skcipher_request for it */
if (!blk_crypto_fallback_alloc_cipher_req(slot, &ciph_req, &wait)) {
- src_bio->bi_status = BLK_STS_RESOURCE;
+ status = BLK_STS_RESOURCE;
goto out_release_keyslot;
}
@@ -307,58 +260,73 @@ static void blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
skcipher_request_set_crypt(ciph_req, &src, &dst, data_unit_size,
iv.bytes);
- /* Encrypt each page in the bounce bio */
- for (i = 0; i < enc_bio->bi_vcnt; i++) {
- struct bio_vec *enc_bvec = &enc_bio->bi_io_vec[i];
- struct page *plaintext_page = enc_bvec->bv_page;
- struct page *ciphertext_page =
- mempool_alloc(blk_crypto_bounce_page_pool, GFP_NOIO);
+ /* Encrypt each page in the source bio */
+new_bio:
+ enc_bio = blk_crypto_alloc_enc_bio(src_bio, nr_segs);
+ enc_idx = 0;
+ for (;;) {
+ struct bio_vec src_bv =
+ bio_iter_iovec(src_bio, src_bio->bi_iter);
+ struct page *enc_page;
- enc_bvec->bv_page = ciphertext_page;
-
- if (!ciphertext_page) {
- src_bio->bi_status = BLK_STS_RESOURCE;
- goto out_free_bounce_pages;
- }
+ enc_page = mempool_alloc(blk_crypto_bounce_page_pool,
+ GFP_NOIO);
+ __bio_add_page(enc_bio, enc_page, src_bv.bv_len,
+ src_bv.bv_offset);
- sg_set_page(&src, plaintext_page, data_unit_size,
- enc_bvec->bv_offset);
- sg_set_page(&dst, ciphertext_page, data_unit_size,
- enc_bvec->bv_offset);
+ sg_set_page(&src, src_bv.bv_page, data_unit_size,
+ src_bv.bv_offset);
+ sg_set_page(&dst, enc_page, data_unit_size, src_bv.bv_offset);
/* Encrypt each data unit in this page */
- for (j = 0; j < enc_bvec->bv_len; j += data_unit_size) {
+ for (j = 0; j < src_bv.bv_len; j += data_unit_size) {
blk_crypto_dun_to_iv(curr_dun, &iv);
if (crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
&wait)) {
- i++;
- src_bio->bi_status = BLK_STS_IOERR;
+ enc_idx++;
+ status = BLK_STS_IOERR;
goto out_free_bounce_pages;
}
bio_crypt_dun_increment(curr_dun, 1);
src.offset += data_unit_size;
dst.offset += data_unit_size;
}
+
+ bio_advance_iter_single(src_bio, &src_bio->bi_iter,
+ src_bv.bv_len);
+ if (!src_bio->bi_iter.bi_size)
+ break;
+
+ nr_segs--;
+ if (++enc_idx == enc_bio->bi_max_vecs) {
+ /*
+ * For each additional encrypted bio submitted,
+ * increment the source bio's remaining count. Each
+ * encrypted bio's completion handler calls bio_endio on
+ * the source bio, so this keeps the source bio from
+ * completing until the last encrypted bio does.
+ */
+ bio_inc_remaining(src_bio);
+ submit_bio(enc_bio);
+ goto new_bio;
+ }
}
- enc_bio->bi_private = src_bio;
- enc_bio->bi_end_io = blk_crypto_fallback_encrypt_endio;
skcipher_request_free(ciph_req);
blk_crypto_put_keyslot(slot);
submit_bio(enc_bio);
return;
out_free_bounce_pages:
- while (i > 0)
- mempool_free(enc_bio->bi_io_vec[--i].bv_page,
+ while (enc_idx > 0)
+ mempool_free(enc_bio->bi_io_vec[--enc_idx].bv_page,
blk_crypto_bounce_page_pool);
+ bio_put(enc_bio);
skcipher_request_free(ciph_req);
out_release_keyslot:
blk_crypto_put_keyslot(slot);
-out_put_enc_bio:
- bio_uninit(enc_bio);
- kfree(enc_bio);
out_endio:
+ cmpxchg(&src_bio->bi_status, 0, status);
bio_endio(src_bio);
}
@@ -533,7 +501,7 @@ static int blk_crypto_fallback_init(void)
get_random_bytes(blank_key, sizeof(blank_key));
- err = bioset_init(&crypto_bio_split, 64, 0, 0);
+ err = bioset_init(&enc_bio_set, 64, 0, BIOSET_NEED_BVECS);
if (err)
goto out;
@@ -603,7 +571,7 @@ static int blk_crypto_fallback_init(void)
fail_free_profile:
kfree(blk_crypto_fallback_profile);
fail_free_bioset:
- bioset_exit(&crypto_bio_split);
+ bioset_exit(&enc_bio_set);
out:
return err;
}
--
2.47.3
* [PATCH 6/9] blk-crypto: use on-stack skcipher requests for fallback en/decryption
2025-12-17 6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
` (4 preceding siblings ...)
2025-12-17 6:06 ` [PATCH 5/9] blk-crypto: optimize bio splitting in blk_crypto_fallback_encrypt_bio Christoph Hellwig
@ 2025-12-17 6:06 ` Christoph Hellwig
2025-12-17 6:06 ` [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation Christoph Hellwig
` (2 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-17 6:06 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
Allocating a skcipher request dynamically can deadlock or cause
unexpected I/O failures when called from writeback context.  Avoid the
allocation entirely by using on-stack skcipher requests, similar to what
the non-blk-crypto fscrypt path already does.
This drops the incomplete support for asynchronous algorithms, which
previously could be used, but only synchronously.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
---
block/blk-crypto-fallback.c | 178 ++++++++++++++++--------------------
1 file changed, 79 insertions(+), 99 deletions(-)
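The on-stack request pattern used below, in isolation (a sketch; the
src/dst scatterlists and the IV are set up as in the existing code):

	SYNC_SKCIPHER_REQUEST_ON_STACK(ciph_req, tfm);

	skcipher_request_set_callback(ciph_req,
			CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
			NULL, NULL);	/* synchronous, so no completion callback */
	skcipher_request_set_crypt(ciph_req, &src, &dst, data_unit_size,
				   iv.bytes);
	if (crypto_skcipher_encrypt(ciph_req))
		return BLK_STS_IOERR;	/* sync tfm, so no -EINPROGRESS to wait for */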
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 59441cf7273c..58b35c5d6949 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -75,7 +75,7 @@ static bool tfms_inited[BLK_ENCRYPTION_MODE_MAX];
static struct blk_crypto_fallback_keyslot {
enum blk_crypto_mode_num crypto_mode;
- struct crypto_skcipher *tfms[BLK_ENCRYPTION_MODE_MAX];
+ struct crypto_sync_skcipher *tfms[BLK_ENCRYPTION_MODE_MAX];
} *blk_crypto_keyslots;
static struct blk_crypto_profile *blk_crypto_fallback_profile;
@@ -98,7 +98,7 @@ static void blk_crypto_fallback_evict_keyslot(unsigned int slot)
WARN_ON(slotp->crypto_mode == BLK_ENCRYPTION_MODE_INVALID);
/* Clear the key in the skcipher */
- err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], blank_key,
+ err = crypto_sync_skcipher_setkey(slotp->tfms[crypto_mode], blank_key,
blk_crypto_modes[crypto_mode].keysize);
WARN_ON(err);
slotp->crypto_mode = BLK_ENCRYPTION_MODE_INVALID;
@@ -119,7 +119,7 @@ blk_crypto_fallback_keyslot_program(struct blk_crypto_profile *profile,
blk_crypto_fallback_evict_keyslot(slot);
slotp->crypto_mode = crypto_mode;
- err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key->bytes,
+ err = crypto_sync_skcipher_setkey(slotp->tfms[crypto_mode], key->bytes,
key->size);
if (err) {
blk_crypto_fallback_evict_keyslot(slot);
@@ -177,28 +177,13 @@ static struct bio *blk_crypto_alloc_enc_bio(struct bio *bio_src,
return bio;
}
-static bool
-blk_crypto_fallback_alloc_cipher_req(struct blk_crypto_keyslot *slot,
- struct skcipher_request **ciph_req_ret,
- struct crypto_wait *wait)
+static struct crypto_sync_skcipher *
+blk_crypto_fallback_tfm(struct blk_crypto_keyslot *slot)
{
- struct skcipher_request *ciph_req;
- const struct blk_crypto_fallback_keyslot *slotp;
- int keyslot_idx = blk_crypto_keyslot_index(slot);
-
- slotp = &blk_crypto_keyslots[keyslot_idx];
- ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode],
- GFP_NOIO);
- if (!ciph_req)
- return false;
-
- skcipher_request_set_callback(ciph_req,
- CRYPTO_TFM_REQ_MAY_BACKLOG |
- CRYPTO_TFM_REQ_MAY_SLEEP,
- crypto_req_done, wait);
- *ciph_req_ret = ciph_req;
+ const struct blk_crypto_fallback_keyslot *slotp =
+ &blk_crypto_keyslots[blk_crypto_keyslot_index(slot)];
- return true;
+ return slotp->tfms[slotp->crypto_mode];
}
union blk_crypto_iv {
@@ -215,43 +200,23 @@ static void blk_crypto_dun_to_iv(const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE],
iv->dun[i] = cpu_to_le64(dun[i]);
}
-/*
- * The crypto API fallback's encryption routine.
- *
- * Allocate one or more bios for encryption, encrypt the input bio using the
- * crypto API, and submit the encrypted bios. Sets bio->bi_status and
- * completes the source bio on error
- */
-static void blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
+static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
+ struct crypto_sync_skcipher *tfm)
{
struct bio_crypt_ctx *bc = src_bio->bi_crypt_context;
int data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
unsigned int nr_segs = bio_segments(src_bio);
- struct skcipher_request *ciph_req = NULL;
- struct blk_crypto_keyslot *slot;
- DECLARE_CRYPTO_WAIT(wait);
+ SYNC_SKCIPHER_REQUEST_ON_STACK(ciph_req, tfm);
u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
struct scatterlist src, dst;
union blk_crypto_iv iv;
unsigned int enc_idx;
struct bio *enc_bio;
- blk_status_t status;
unsigned int j;
- /*
- * Get a blk-crypto-fallback keyslot that contains a crypto_skcipher for
- * this bio's algorithm and key.
- */
- status = blk_crypto_get_keyslot(blk_crypto_fallback_profile,
- bc->bc_key, &slot);
- if (status != BLK_STS_OK)
- goto out_endio;
-
- /* and then allocate an skcipher_request for it */
- if (!blk_crypto_fallback_alloc_cipher_req(slot, &ciph_req, &wait)) {
- status = BLK_STS_RESOURCE;
- goto out_release_keyslot;
- }
+ skcipher_request_set_callback(ciph_req,
+ CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+ NULL, NULL);
memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
sg_init_table(&src, 1);
@@ -281,10 +246,8 @@ static void blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
/* Encrypt each data unit in this page */
for (j = 0; j < src_bv.bv_len; j += data_unit_size) {
blk_crypto_dun_to_iv(curr_dun, &iv);
- if (crypto_wait_req(crypto_skcipher_encrypt(ciph_req),
- &wait)) {
+ if (crypto_skcipher_encrypt(ciph_req)) {
enc_idx++;
- status = BLK_STS_IOERR;
goto out_free_bounce_pages;
}
bio_crypt_dun_increment(curr_dun, 1);
@@ -312,8 +275,6 @@ static void blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
}
}
- skcipher_request_free(ciph_req);
- blk_crypto_put_keyslot(slot);
submit_bio(enc_bio);
return;
@@ -322,52 +283,50 @@ static void blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
mempool_free(enc_bio->bi_io_vec[--enc_idx].bv_page,
blk_crypto_bounce_page_pool);
bio_put(enc_bio);
- skcipher_request_free(ciph_req);
-out_release_keyslot:
- blk_crypto_put_keyslot(slot);
-out_endio:
- cmpxchg(&src_bio->bi_status, 0, status);
+ cmpxchg(&src_bio->bi_status, 0, BLK_STS_IOERR);
bio_endio(src_bio);
}
/*
- * The crypto API fallback's main decryption routine.
- * Decrypts input bio in place, and calls bio_endio on the bio.
+ * The crypto API fallback's encryption routine.
+ *
+ * Allocate one or more bios for encryption, encrypt the input bio using the
+ * crypto API, and submit the encrypted bios. Sets bio->bi_status and
+ * completes the source bio on error
*/
-static void blk_crypto_fallback_decrypt_bio(struct work_struct *work)
+static void blk_crypto_fallback_encrypt_bio(struct bio *src_bio)
{
- struct bio_fallback_crypt_ctx *f_ctx =
- container_of(work, struct bio_fallback_crypt_ctx, work);
- struct bio *bio = f_ctx->bio;
- struct bio_crypt_ctx *bc = &f_ctx->crypt_ctx;
+ struct bio_crypt_ctx *bc = src_bio->bi_crypt_context;
struct blk_crypto_keyslot *slot;
- struct skcipher_request *ciph_req = NULL;
- DECLARE_CRYPTO_WAIT(wait);
+ blk_status_t status;
+
+ status = blk_crypto_get_keyslot(blk_crypto_fallback_profile,
+ bc->bc_key, &slot);
+ if (status != BLK_STS_OK) {
+ src_bio->bi_status = status;
+ bio_endio(src_bio);
+ return;
+ }
+ __blk_crypto_fallback_encrypt_bio(src_bio,
+ blk_crypto_fallback_tfm(slot));
+ blk_crypto_put_keyslot(slot);
+}
+
+static blk_status_t __blk_crypto_fallback_decrypt_bio(struct bio *bio,
+ struct bio_crypt_ctx *bc, struct bvec_iter iter,
+ struct crypto_sync_skcipher *tfm)
+{
+ SYNC_SKCIPHER_REQUEST_ON_STACK(ciph_req, tfm);
u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
union blk_crypto_iv iv;
struct scatterlist sg;
struct bio_vec bv;
- struct bvec_iter iter;
const int data_unit_size = bc->bc_key->crypto_cfg.data_unit_size;
unsigned int i;
- blk_status_t blk_st;
-
- /*
- * Get a blk-crypto-fallback keyslot that contains a crypto_skcipher for
- * this bio's algorithm and key.
- */
- blk_st = blk_crypto_get_keyslot(blk_crypto_fallback_profile,
- bc->bc_key, &slot);
- if (blk_st != BLK_STS_OK) {
- bio->bi_status = blk_st;
- goto out_no_keyslot;
- }
- /* and then allocate an skcipher_request for it */
- if (!blk_crypto_fallback_alloc_cipher_req(slot, &ciph_req, &wait)) {
- bio->bi_status = BLK_STS_RESOURCE;
- goto out;
- }
+ skcipher_request_set_callback(ciph_req,
+ CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
+ NULL, NULL);
memcpy(curr_dun, bc->bc_dun, sizeof(curr_dun));
sg_init_table(&sg, 1);
@@ -375,7 +334,7 @@ static void blk_crypto_fallback_decrypt_bio(struct work_struct *work)
iv.bytes);
/* Decrypt each segment in the bio */
- __bio_for_each_segment(bv, bio, iter, f_ctx->crypt_iter) {
+ __bio_for_each_segment(bv, bio, iter, iter) {
struct page *page = bv.bv_page;
sg_set_page(&sg, page, data_unit_size, bv.bv_offset);
@@ -383,21 +342,41 @@ static void blk_crypto_fallback_decrypt_bio(struct work_struct *work)
/* Decrypt each data unit in the segment */
for (i = 0; i < bv.bv_len; i += data_unit_size) {
blk_crypto_dun_to_iv(curr_dun, &iv);
- if (crypto_wait_req(crypto_skcipher_decrypt(ciph_req),
- &wait)) {
- bio->bi_status = BLK_STS_IOERR;
- goto out;
- }
+ if (crypto_skcipher_decrypt(ciph_req))
+ return BLK_STS_IOERR;
bio_crypt_dun_increment(curr_dun, 1);
sg.offset += data_unit_size;
}
}
-out:
- skcipher_request_free(ciph_req);
- blk_crypto_put_keyslot(slot);
-out_no_keyslot:
+ return BLK_STS_OK;
+}
+
+/*
+ * The crypto API fallback's main decryption routine.
+ *
+ * Decrypts input bio in place, and calls bio_endio on the bio.
+ */
+static void blk_crypto_fallback_decrypt_bio(struct work_struct *work)
+{
+ struct bio_fallback_crypt_ctx *f_ctx =
+ container_of(work, struct bio_fallback_crypt_ctx, work);
+ struct bio *bio = f_ctx->bio;
+ struct bio_crypt_ctx *bc = &f_ctx->crypt_ctx;
+ struct blk_crypto_keyslot *slot;
+ blk_status_t status;
+
+ status = blk_crypto_get_keyslot(blk_crypto_fallback_profile,
+ bc->bc_key, &slot);
+ if (status == BLK_STS_OK) {
+ status = __blk_crypto_fallback_decrypt_bio(bio, bc,
+ f_ctx->crypt_iter,
+ blk_crypto_fallback_tfm(slot));
+ blk_crypto_put_keyslot(slot);
+ }
mempool_free(f_ctx, bio_fallback_crypt_ctx_pool);
+
+ bio->bi_status = status;
bio_endio(bio);
}
@@ -605,7 +584,8 @@ int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num)
for (i = 0; i < blk_crypto_num_keyslots; i++) {
slotp = &blk_crypto_keyslots[i];
- slotp->tfms[mode_num] = crypto_alloc_skcipher(cipher_str, 0, 0);
+ slotp->tfms[mode_num] = crypto_alloc_sync_skcipher(cipher_str,
+ 0, 0);
if (IS_ERR(slotp->tfms[mode_num])) {
err = PTR_ERR(slotp->tfms[mode_num]);
if (err == -ENOENT) {
@@ -617,7 +597,7 @@ int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num)
goto out_free_tfms;
}
- crypto_skcipher_set_flags(slotp->tfms[mode_num],
+ crypto_sync_skcipher_set_flags(slotp->tfms[mode_num],
CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
}
@@ -631,7 +611,7 @@ int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num)
out_free_tfms:
for (i = 0; i < blk_crypto_num_keyslots; i++) {
slotp = &blk_crypto_keyslots[i];
- crypto_free_skcipher(slotp->tfms[mode_num]);
+ crypto_free_sync_skcipher(slotp->tfms[mode_num]);
slotp->tfms[mode_num] = NULL;
}
out:
--
2.47.3
* [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation
2025-12-17 6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
` (5 preceding siblings ...)
2025-12-17 6:06 ` [PATCH 6/9] blk-crypto: use on-stack skcipher requests for fallback en/decryption Christoph Hellwig
@ 2025-12-17 6:06 ` Christoph Hellwig
2025-12-19 20:02 ` Eric Biggers
2025-12-17 6:06 ` [PATCH 8/9] blk-crypto: optimize data unit alignment checking Christoph Hellwig
2025-12-17 6:06 ` [PATCH 9/9] blk-crypto: handle the fallback above the block layer Christoph Hellwig
8 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-17 6:06 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
Calling mempool_alloc in a loop is not safe unless the maximum allocation
size times the maximum number of threads using it is less than the
minimum pool size. Use the new mempool_alloc_bulk helper to allocate
all missing elements in one pass to remove this deadlock risk. This
also means that non-pool allocations now use alloc_pages_bulk which can
be significantly faster than a loop over individual page allocations.
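As an illustration only (not part of the patch), the difference between the
two schemes, assuming the mempool_alloc_bulk() calling convention used
further down in the diff (pool, elements, count, nr_already_allocated):

	static void alloc_bounce_pages_sketch(struct page **pages, unsigned int nr)
	{
		unsigned int nr_allocated;

		/*
		 * Old scheme: each iteration can block on the pool while this
		 * and other threads already hold pages from it, which can
		 * deadlock once the combined demand exceeds the pool size.
		 *
		 *	for (i = 0; i < nr; i++)
		 *		pages[i] = mempool_alloc(blk_crypto_bounce_page_pool,
		 *					 GFP_NOIO);
		 */

		/* New scheme: bulk page allocation first; the bulk API skips
		 * non-NULL slots, so the array must start out zeroed. */
		memset(pages, 0, sizeof(*pages) * nr);
		nr_allocated = alloc_pages_bulk(GFP_NOIO, nr, pages);

		/* ... then a single pool call for whatever is still missing. */
		if (nr_allocated < nr)
			mempool_alloc_bulk(blk_crypto_bounce_page_pool,
					   (void **)pages, nr, nr_allocated);
	}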
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-crypto-fallback.c | 70 ++++++++++++++++++++++++++++---------
1 file changed, 53 insertions(+), 17 deletions(-)
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 58b35c5d6949..1db4aa4d812a 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -22,7 +22,7 @@
#include "blk-cgroup.h"
#include "blk-crypto-internal.h"
-static unsigned int num_prealloc_bounce_pg = 32;
+static unsigned int num_prealloc_bounce_pg = BIO_MAX_VECS;
module_param(num_prealloc_bounce_pg, uint, 0);
MODULE_PARM_DESC(num_prealloc_bounce_pg,
"Number of preallocated bounce pages for the blk-crypto crypto API fallback");
@@ -144,11 +144,21 @@ static const struct blk_crypto_ll_ops blk_crypto_fallback_ll_ops = {
static void blk_crypto_fallback_encrypt_endio(struct bio *enc_bio)
{
struct bio *src_bio = enc_bio->bi_private;
- int i;
+ struct page **pages = (struct page **)enc_bio->bi_io_vec;
+ struct bio_vec *bv;
+ unsigned int i;
- for (i = 0; i < enc_bio->bi_vcnt; i++)
- mempool_free(enc_bio->bi_io_vec[i].bv_page,
- blk_crypto_bounce_page_pool);
+ /*
+ * Use the same trick as the alloc side to avoid the need for an extra
+ * pages array.
+ */
+ bio_for_each_bvec_all(bv, enc_bio, i)
+ pages[i] = bv->bv_page;
+
+ i = mempool_free_bulk(blk_crypto_bounce_page_pool, (void **)pages,
+ enc_bio->bi_vcnt);
+ if (i < enc_bio->bi_vcnt)
+ release_pages(pages + i, enc_bio->bi_vcnt - i);
if (enc_bio->bi_status)
cmpxchg(&src_bio->bi_status, 0, enc_bio->bi_status);
@@ -157,9 +167,14 @@ static void blk_crypto_fallback_encrypt_endio(struct bio *enc_bio)
bio_endio(src_bio);
}
+#define PAGE_PTRS_PER_BVEC (sizeof(struct bio_vec) / sizeof(struct page *))
+
static struct bio *blk_crypto_alloc_enc_bio(struct bio *bio_src,
- unsigned int nr_segs)
+ unsigned int nr_segs, struct page ***pages_ret)
{
+ unsigned int memflags = memalloc_noio_save();
+ unsigned int nr_allocated;
+ struct page **pages;
struct bio *bio;
nr_segs = min(nr_segs, BIO_MAX_VECS);
@@ -174,6 +189,30 @@ static struct bio *blk_crypto_alloc_enc_bio(struct bio *bio_src,
bio->bi_write_stream = bio_src->bi_write_stream;
bio->bi_iter.bi_sector = bio_src->bi_iter.bi_sector;
bio_clone_blkg_association(bio, bio_src);
+
+ /*
+ * Move page array up in the allocated memory for the bio vecs as far as
+ * possible so that we can start filling biovecs from the beginning
+ * without overwriting the temporary page array.
+ */
+ static_assert(PAGE_PTRS_PER_BVEC > 1);
+ pages = (struct page **)bio->bi_io_vec;
+ pages += nr_segs * (PAGE_PTRS_PER_BVEC - 1);
+
+ /*
+ * Try a bulk allocation first. This could leave random pages in the
+ * array unallocated, but we'll fix that up later in mempool_alloc_bulk.
+ *
+ * Note: alloc_pages_bulk needs the array to be zeroed, as it assumes
+ * any non-zero slot already contains a valid allocation.
+ */
+ memset(pages, 0, sizeof(struct page *) * nr_segs);
+ nr_allocated = alloc_pages_bulk(GFP_KERNEL, nr_segs, pages);
+ if (nr_allocated < nr_segs)
+ mempool_alloc_bulk(blk_crypto_bounce_page_pool, (void **)pages,
+ nr_segs, nr_allocated);
+ memalloc_noio_restore(memflags);
+ *pages_ret = pages;
return bio;
}
@@ -210,6 +249,7 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
u64 curr_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];
struct scatterlist src, dst;
union blk_crypto_iv iv;
+ struct page **enc_pages;
unsigned int enc_idx;
struct bio *enc_bio;
unsigned int j;
@@ -227,15 +267,13 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
/* Encrypt each page in the source bio */
new_bio:
- enc_bio = blk_crypto_alloc_enc_bio(src_bio, nr_segs);
+ enc_bio = blk_crypto_alloc_enc_bio(src_bio, nr_segs, &enc_pages);
enc_idx = 0;
for (;;) {
struct bio_vec src_bv =
bio_iter_iovec(src_bio, src_bio->bi_iter);
- struct page *enc_page;
+ struct page *enc_page = enc_pages[enc_idx];
- enc_page = mempool_alloc(blk_crypto_bounce_page_pool,
- GFP_NOIO);
__bio_add_page(enc_bio, enc_page, src_bv.bv_len,
src_bv.bv_offset);
@@ -246,10 +284,8 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
/* Encrypt each data unit in this page */
for (j = 0; j < src_bv.bv_len; j += data_unit_size) {
blk_crypto_dun_to_iv(curr_dun, &iv);
- if (crypto_skcipher_encrypt(ciph_req)) {
- enc_idx++;
- goto out_free_bounce_pages;
- }
+ if (crypto_skcipher_encrypt(ciph_req))
+ goto out_free_enc_bio;
bio_crypt_dun_increment(curr_dun, 1);
src.offset += data_unit_size;
dst.offset += data_unit_size;
@@ -278,9 +314,9 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
submit_bio(enc_bio);
return;
-out_free_bounce_pages:
- while (enc_idx > 0)
- mempool_free(enc_bio->bi_io_vec[--enc_idx].bv_page,
+out_free_enc_bio:
+ for (enc_idx = 0; enc_idx < enc_bio->bi_max_vecs; enc_idx++)
+ mempool_free(enc_bio->bi_io_vec[enc_idx].bv_page,
blk_crypto_bounce_page_pool);
bio_put(enc_bio);
cmpxchg(&src_bio->bi_status, 0, BLK_STS_IOERR);
--
2.47.3
* [PATCH 8/9] blk-crypto: optimize data unit alignment checking
2025-12-17 6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
` (6 preceding siblings ...)
2025-12-17 6:06 ` [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation Christoph Hellwig
@ 2025-12-17 6:06 ` Christoph Hellwig
2025-12-19 20:14 ` Eric Biggers
2025-12-17 6:06 ` [PATCH 9/9] blk-crypto: handle the fallback above the block layer Christoph Hellwig
8 siblings, 1 reply; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-17 6:06 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
Avoid the relatively high overhead of constructing and walking per-page
segment bio_vecs for data unit alignment checking by merging the checks
into existing loops.
For hardware-supported crypto, perform the check in bio_split_io_at, which
already contains a similar alignment check applied to all I/O. This
means bio-based drivers that do not call bio_split_to_limits, should they
ever grow blk-crypto support, need to implement the check themselves,
just like for all other queue limits checks.
For blk-crypto-fallback, do it in the encryption/decryption loops. This
means alignment errors for decryption will only be detected after the I/O
has completed, but that seems like a worthwhile trade-off.
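As a minimal illustration (not from the patch) of why a single test per
bvec suffices: data_unit_size is a power of two, so ORing length and
offset leaves a low bit set iff either field is misaligned:

	static bool bvec_dun_aligned(const struct bio_vec *bv,
				     unsigned int data_unit_size)
	{
		/* covers both bv_len and bv_offset in one IS_ALIGNED() test */
		return IS_ALIGNED(bv->bv_len | bv->bv_offset, data_unit_size);
	}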
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
block/blk-crypto-fallback.c | 14 ++++++++++++--
block/blk-crypto.c | 22 ----------------------
block/blk-merge.c | 9 ++++++++-
3 files changed, 20 insertions(+), 25 deletions(-)
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 1db4aa4d812a..23e097197450 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -274,6 +274,12 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
bio_iter_iovec(src_bio, src_bio->bi_iter);
struct page *enc_page = enc_pages[enc_idx];
+ if (!IS_ALIGNED(src_bv.bv_len | src_bv.bv_offset,
+ data_unit_size)) {
+ cmpxchg(&src_bio->bi_status, 0, BLK_STS_INVAL);
+ goto out_free_enc_bio;
+ }
+
__bio_add_page(enc_bio, enc_page, src_bv.bv_len,
src_bv.bv_offset);
@@ -284,8 +290,10 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
/* Encrypt each data unit in this page */
for (j = 0; j < src_bv.bv_len; j += data_unit_size) {
blk_crypto_dun_to_iv(curr_dun, &iv);
- if (crypto_skcipher_encrypt(ciph_req))
+ if (crypto_skcipher_encrypt(ciph_req)) {
+ cmpxchg(&src_bio->bi_status, 0, BLK_STS_IOERR);
goto out_free_enc_bio;
+ }
bio_crypt_dun_increment(curr_dun, 1);
src.offset += data_unit_size;
dst.offset += data_unit_size;
@@ -319,7 +327,6 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
mempool_free(enc_bio->bi_io_vec[enc_idx].bv_page,
blk_crypto_bounce_page_pool);
bio_put(enc_bio);
- cmpxchg(&src_bio->bi_status, 0, BLK_STS_IOERR);
bio_endio(src_bio);
}
@@ -373,6 +380,9 @@ static blk_status_t __blk_crypto_fallback_decrypt_bio(struct bio *bio,
__bio_for_each_segment(bv, bio, iter, iter) {
struct page *page = bv.bv_page;
+ if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size))
+ return BLK_STS_INVAL;
+
sg_set_page(&sg, page, data_unit_size, bv.bv_offset);
/* Decrypt each data unit in the segment */
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 69e869d1c9bd..0b2535d8dbcc 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -219,22 +219,6 @@ bool bio_crypt_ctx_mergeable(struct bio_crypt_ctx *bc1, unsigned int bc1_bytes,
return !bc1 || bio_crypt_dun_is_contiguous(bc1, bc1_bytes, bc2->bc_dun);
}
-/* Check that all I/O segments are data unit aligned. */
-static bool bio_crypt_check_alignment(struct bio *bio)
-{
- const unsigned int data_unit_size =
- bio->bi_crypt_context->bc_key->crypto_cfg.data_unit_size;
- struct bvec_iter iter;
- struct bio_vec bv;
-
- bio_for_each_segment(bv, bio, iter) {
- if (!IS_ALIGNED(bv.bv_len | bv.bv_offset, data_unit_size))
- return false;
- }
-
- return true;
-}
-
blk_status_t __blk_crypto_rq_get_keyslot(struct request *rq)
{
return blk_crypto_get_keyslot(rq->q->crypto_profile,
@@ -287,12 +271,6 @@ bool __blk_crypto_bio_prep(struct bio *bio)
return false;
}
- if (!bio_crypt_check_alignment(bio)) {
- bio->bi_status = BLK_STS_INVAL;
- bio_endio(bio);
- return false;
- }
-
/*
* If the device does not natively support the encryption context, try to use
* the fallback if available.
diff --git a/block/blk-merge.c b/block/blk-merge.c
index d3115d7469df..b82c6d304658 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -324,12 +324,19 @@ static inline unsigned int bvec_seg_gap(struct bio_vec *bvprv,
int bio_split_io_at(struct bio *bio, const struct queue_limits *lim,
unsigned *segs, unsigned max_bytes, unsigned len_align_mask)
{
+ struct bio_crypt_ctx *bc = bio_crypt_ctx(bio);
struct bio_vec bv, bvprv, *bvprvp = NULL;
unsigned nsegs = 0, bytes = 0, gaps = 0;
struct bvec_iter iter;
+ unsigned start_align_mask = lim->dma_alignment;
+
+ if (bc) {
+ start_align_mask |= (bc->bc_key->crypto_cfg.data_unit_size - 1);
+ len_align_mask |= (bc->bc_key->crypto_cfg.data_unit_size - 1);
+ }
bio_for_each_bvec(bv, bio, iter) {
- if (bv.bv_offset & lim->dma_alignment ||
+ if (bv.bv_offset & start_align_mask ||
bv.bv_len & len_align_mask)
return -EINVAL;
--
2.47.3
* [PATCH 9/9] blk-crypto: handle the fallback above the block layer
2025-12-17 6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
` (7 preceding siblings ...)
2025-12-17 6:06 ` [PATCH 8/9] blk-crypto: optimize data unit alignment checking Christoph Hellwig
@ 2025-12-17 6:06 ` Christoph Hellwig
8 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-17 6:06 UTC (permalink / raw)
To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt
Add a blk_crypto_submit_bio helper that submits the bio directly when it
is not encrypted or when inline encryption is provided by the underlying
device, and otherwise handles the encryption before going down into the
low-level driver.
This reduces the risk from bio reordering and keeps memory allocation
as high up in the stack as possible.
Note that if the submitter knows that inline encryption is supported by
the underlying driver, it can still use plain submit_bio.
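As an illustration (hypothetical caller, not part of this patch), a
submitter that may or may not attach a crypto context ends up with:

	/* Hypothetical helper: blk_crypto_submit_bio() degrades to plain
	 * submit_bio() when no context is attached or the device handles
	 * the encryption inline. */
	static void submit_maybe_encrypted_bio(struct bio *bio,
			const struct blk_crypto_key *key,
			const u64 dun[BLK_CRYPTO_DUN_ARRAY_SIZE])
	{
		if (key)
			bio_crypt_set_ctx(bio, key, dun, GFP_NOIO);
		blk_crypto_submit_bio(bio);
	}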
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
---
Documentation/block/inline-encryption.rst | 6 ++++++
block/blk-core.c | 10 +++++++---
block/blk-crypto-internal.h | 19 +++++++++++--------
block/blk-crypto.c | 23 ++++++-----------------
fs/buffer.c | 3 ++-
fs/crypto/bio.c | 2 +-
fs/ext4/page-io.c | 3 ++-
fs/ext4/readpage.c | 9 +++++----
fs/f2fs/data.c | 4 ++--
fs/f2fs/file.c | 3 ++-
fs/iomap/direct-io.c | 3 ++-
include/linux/blk-crypto.h | 22 ++++++++++++++++++++++
12 files changed, 68 insertions(+), 39 deletions(-)
diff --git a/Documentation/block/inline-encryption.rst b/Documentation/block/inline-encryption.rst
index 6380e6ab492b..7e0703a12dfb 100644
--- a/Documentation/block/inline-encryption.rst
+++ b/Documentation/block/inline-encryption.rst
@@ -206,6 +206,12 @@ it to a bio, given the blk_crypto_key and the data unit number that will be used
for en/decryption. Users don't need to worry about freeing the bio_crypt_ctx
later, as that happens automatically when the bio is freed or reset.
+To submit a bio that uses inline encryption, users must call
+``blk_crypto_submit_bio()`` instead of the usual ``submit_bio()``. This will
+submit the bio to the underlying driver if it supports inline crypto, or else
+call the blk-crypto fallback routines before submitting normal bios to the
+underlying drivers.
+
Finally, when done using inline encryption with a blk_crypto_key on a
block_device, users must call ``blk_crypto_evict_key()``. This ensures that
the key is evicted from all keyslots it may be programmed into and unlinked from
diff --git a/block/blk-core.c b/block/blk-core.c
index f87e5f1a101f..a0bf5174e9e9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -628,9 +628,6 @@ static void __submit_bio(struct bio *bio)
/* If plug is not used, add new plug here to cache nsecs time. */
struct blk_plug plug;
- if (unlikely(!blk_crypto_bio_prep(bio)))
- return;
-
blk_start_plug(&plug);
if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
@@ -794,6 +791,13 @@ void submit_bio_noacct(struct bio *bio)
if ((bio->bi_opf & REQ_NOWAIT) && !bdev_nowait(bdev))
goto not_supported;
+ if (bio_has_crypt_ctx(bio)) {
+ if (WARN_ON_ONCE(!bio_has_data(bio)))
+ goto end_io;
+ if (!blk_crypto_supported(bio))
+ goto not_supported;
+ }
+
if (should_fail_bio(bio))
goto end_io;
bio_check_ro(bio);
diff --git a/block/blk-crypto-internal.h b/block/blk-crypto-internal.h
index d65023120341..742694213529 100644
--- a/block/blk-crypto-internal.h
+++ b/block/blk-crypto-internal.h
@@ -86,6 +86,12 @@ bool __blk_crypto_cfg_supported(struct blk_crypto_profile *profile,
int blk_crypto_ioctl(struct block_device *bdev, unsigned int cmd,
void __user *argp);
+static inline bool blk_crypto_supported(struct bio *bio)
+{
+ return blk_crypto_config_supported_natively(bio->bi_bdev,
+ &bio->bi_crypt_context->bc_key->crypto_cfg);
+}
+
#else /* CONFIG_BLK_INLINE_ENCRYPTION */
static inline int blk_crypto_sysfs_register(struct gendisk *disk)
@@ -139,6 +145,11 @@ static inline int blk_crypto_ioctl(struct block_device *bdev, unsigned int cmd,
return -ENOTTY;
}
+static inline bool blk_crypto_supported(struct bio *bio)
+{
+ return false;
+}
+
#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
void __bio_crypt_advance(struct bio *bio, unsigned int bytes);
@@ -165,14 +176,6 @@ static inline void bio_crypt_do_front_merge(struct request *rq,
#endif
}
-bool __blk_crypto_bio_prep(struct bio *bio);
-static inline bool blk_crypto_bio_prep(struct bio *bio)
-{
- if (bio_has_crypt_ctx(bio))
- return __blk_crypto_bio_prep(bio);
- return true;
-}
-
blk_status_t __blk_crypto_rq_get_keyslot(struct request *rq);
static inline blk_status_t blk_crypto_rq_get_keyslot(struct request *rq)
{
diff --git a/block/blk-crypto.c b/block/blk-crypto.c
index 0b2535d8dbcc..856d3c5b1fa0 100644
--- a/block/blk-crypto.c
+++ b/block/blk-crypto.c
@@ -242,25 +242,13 @@ void __blk_crypto_free_request(struct request *rq)
rq->crypt_ctx = NULL;
}
-/**
- * __blk_crypto_bio_prep - Prepare bio for inline encryption
- * @bio: bio to prepare
- *
- * If the bio crypt context provided for the bio is supported by the underlying
- * device's inline encryption hardware, do nothing.
- *
- * Otherwise, try to perform en/decryption for this bio by falling back to the
- * kernel crypto API. For encryption this means submitting newly allocated
- * bios for the encrypted payload while keeping back the source bio until they
- * complete, while for reads the decryption happens in-place by a hooked in
- * completion handler.
- *
- * Caller must ensure bio has bio_crypt_ctx.
+/*
+ * Process a bio with a crypto context. Returns true if the caller should
+ * submit the passed in bio, false if the bio is consumed.
*
- * Return: true if @bio should be submitted to the driver by the caller, else
- * false. Sets bio->bi_status, calls bio_endio and returns false on error.
+ * See the kerneldoc comment for blk_crypto_submit_bio for further details.
*/
-bool __blk_crypto_bio_prep(struct bio *bio)
+bool __blk_crypto_submit_bio(struct bio *bio)
{
const struct blk_crypto_key *bc_key = bio->bi_crypt_context->bc_key;
struct block_device *bdev = bio->bi_bdev;
@@ -288,6 +276,7 @@ bool __blk_crypto_bio_prep(struct bio *bio)
return true;
}
+EXPORT_SYMBOL_GPL(__blk_crypto_submit_bio);
int __blk_crypto_rq_bio_prep(struct request *rq, struct bio *bio,
gfp_t gfp_mask)
diff --git a/fs/buffer.c b/fs/buffer.c
index 838c0c571022..da18053f66e8 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -29,6 +29,7 @@
#include <linux/slab.h>
#include <linux/capability.h>
#include <linux/blkdev.h>
+#include <linux/blk-crypto.h>
#include <linux/file.h>
#include <linux/quotaops.h>
#include <linux/highmem.h>
@@ -2821,7 +2822,7 @@ static void submit_bh_wbc(blk_opf_t opf, struct buffer_head *bh,
wbc_account_cgroup_owner(wbc, bh->b_folio, bh->b_size);
}
- submit_bio(bio);
+ blk_crypto_submit_bio(bio);
}
void submit_bh(blk_opf_t opf, struct buffer_head *bh)
diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
index c2b3ca100f8d..6da683ea69dc 100644
--- a/fs/crypto/bio.c
+++ b/fs/crypto/bio.c
@@ -105,7 +105,7 @@ static int fscrypt_zeroout_range_inline_crypt(const struct inode *inode,
}
atomic_inc(&done.pending);
- submit_bio(bio);
+ blk_crypto_submit_bio(bio);
}
fscrypt_zeroout_range_done(&done);
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 39abfeec5f36..a8c95eee91b7 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -7,6 +7,7 @@
* Written by Theodore Ts'o, 2010.
*/
+#include <linux/blk-crypto.h>
#include <linux/fs.h>
#include <linux/time.h>
#include <linux/highuid.h>
@@ -401,7 +402,7 @@ void ext4_io_submit(struct ext4_io_submit *io)
if (bio) {
if (io->io_wbc->sync_mode == WB_SYNC_ALL)
io->io_bio->bi_opf |= REQ_SYNC;
- submit_bio(io->io_bio);
+ blk_crypto_submit_bio(io->io_bio);
}
io->io_bio = NULL;
}
diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index e7f2350c725b..49a6d36a8dba 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -36,6 +36,7 @@
#include <linux/bio.h>
#include <linux/fs.h>
#include <linux/buffer_head.h>
+#include <linux/blk-crypto.h>
#include <linux/blkdev.h>
#include <linux/highmem.h>
#include <linux/prefetch.h>
@@ -345,7 +346,7 @@ int ext4_mpage_readpages(struct inode *inode,
if (bio && (last_block_in_bio != first_block - 1 ||
!fscrypt_mergeable_bio(bio, inode, next_block))) {
submit_and_realloc:
- submit_bio(bio);
+ blk_crypto_submit_bio(bio);
bio = NULL;
}
if (bio == NULL) {
@@ -371,14 +372,14 @@ int ext4_mpage_readpages(struct inode *inode,
if (((map.m_flags & EXT4_MAP_BOUNDARY) &&
(relative_block == map.m_len)) ||
(first_hole != blocks_per_folio)) {
- submit_bio(bio);
+ blk_crypto_submit_bio(bio);
bio = NULL;
} else
last_block_in_bio = first_block + blocks_per_folio - 1;
continue;
confused:
if (bio) {
- submit_bio(bio);
+ blk_crypto_submit_bio(bio);
bio = NULL;
}
if (!folio_test_uptodate(folio))
@@ -389,7 +390,7 @@ int ext4_mpage_readpages(struct inode *inode,
; /* A label shall be followed by a statement until C23 */
}
if (bio)
- submit_bio(bio);
+ blk_crypto_submit_bio(bio);
return 0;
}
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index c30e69392a62..c3dd8a5c8589 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -513,7 +513,7 @@ void f2fs_submit_read_bio(struct f2fs_sb_info *sbi, struct bio *bio,
trace_f2fs_submit_read_bio(sbi->sb, type, bio);
iostat_update_submit_ctx(bio, type);
- submit_bio(bio);
+ blk_crypto_submit_bio(bio);
}
static void f2fs_submit_write_bio(struct f2fs_sb_info *sbi, struct bio *bio,
@@ -522,7 +522,7 @@ static void f2fs_submit_write_bio(struct f2fs_sb_info *sbi, struct bio *bio,
WARN_ON_ONCE(is_read_io(bio_op(bio)));
trace_f2fs_submit_write_bio(sbi->sb, type, bio);
iostat_update_submit_ctx(bio, type);
- submit_bio(bio);
+ blk_crypto_submit_bio(bio);
}
static void __submit_merged_bio(struct f2fs_bio_info *io)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index d7047ca6b98d..914790f37915 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -5,6 +5,7 @@
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*/
+#include <linux/blk-crypto.h>
#include <linux/fs.h>
#include <linux/f2fs_fs.h>
#include <linux/stat.h>
@@ -5046,7 +5047,7 @@ static void f2fs_dio_write_submit_io(const struct iomap_iter *iter,
enum temp_type temp = f2fs_get_segment_temp(sbi, type);
bio->bi_write_hint = f2fs_io_type_to_rw_hint(sbi, DATA, temp);
- submit_bio(bio);
+ blk_crypto_submit_bio(bio);
}
static const struct iomap_dio_ops f2fs_iomap_dio_write_ops = {
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 8e273408453a..4000c8596d9b 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -3,6 +3,7 @@
* Copyright (C) 2010 Red Hat, Inc.
* Copyright (c) 2016-2025 Christoph Hellwig.
*/
+#include <linux/blk-crypto.h>
#include <linux/fscrypt.h>
#include <linux/pagemap.h>
#include <linux/iomap.h>
@@ -74,7 +75,7 @@ static void iomap_dio_submit_bio(const struct iomap_iter *iter,
dio->dops->submit_io(iter, bio, pos);
} else {
WARN_ON_ONCE(iter->iomap.flags & IOMAP_F_ANON_WRITE);
- submit_bio(bio);
+ blk_crypto_submit_bio(bio);
}
}
diff --git a/include/linux/blk-crypto.h b/include/linux/blk-crypto.h
index eb80df19be68..f7c3cb4a342f 100644
--- a/include/linux/blk-crypto.h
+++ b/include/linux/blk-crypto.h
@@ -181,6 +181,28 @@ static inline struct bio_crypt_ctx *bio_crypt_ctx(struct bio *bio)
#endif /* CONFIG_BLK_INLINE_ENCRYPTION */
+bool __blk_crypto_submit_bio(struct bio *bio);
+
+/**
+ * blk_crypto_submit_bio - Submit a bio that may have a crypto context
+ * @bio: bio to submit
+ *
+ * If @bio has no crypto context, or the crypt context attached to @bio is
+ * supported by the underlying device's inline encryption hardware, just submit
+ * @bio.
+ *
+ * Otherwise, try to perform en/decryption for this bio by falling back to the
+ * kernel crypto API. For encryption this means submitting newly allocated
+ * bios for the encrypted payload while keeping back the source bio until they
+ * complete, while for reads the decryption happens in-place by a hooked in
+ * completion handler.
+ */
+static inline void blk_crypto_submit_bio(struct bio *bio)
+{
+ if (!bio_has_crypt_ctx(bio) || __blk_crypto_submit_bio(bio))
+ submit_bio(bio);
+}
+
int __bio_crypt_clone(struct bio *dst, struct bio *src, gfp_t gfp_mask);
/**
* bio_crypt_clone - clone bio encryption context
--
2.47.3
* Re: [PATCH 3/9] blk-crypto: add a bio_crypt_ctx() helper
2025-12-17 6:06 ` [PATCH 3/9] blk-crypto: add a bio_crypt_ctx() helper Christoph Hellwig
@ 2025-12-19 19:50 ` Eric Biggers
0 siblings, 0 replies; 21+ messages in thread
From: Eric Biggers @ 2025-12-19 19:50 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, linux-fsdevel, linux-fscrypt
On Wed, Dec 17, 2025 at 07:06:46AM +0100, Christoph Hellwig wrote:
> This returns the bio_crypt_ctx if CONFIG_BLK_INLINE_ENCRYPTION is enabled
> and a crypto context is attached to the bio, else NULL.
>
> The use case is to allow safely dereferencing the context in common code
> without needed #ifdef CONFIG_BLK_INLINE_ENCRYPTION.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> include/linux/blk-crypto.h | 10 ++++++++++
> 1 file changed, 10 insertions(+)
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
- Eric
* Re: [PATCH 4/9] blk-crypto: submit the encrypted bio in blk_crypto_fallback_bio_prep
2025-12-17 6:06 ` [PATCH 4/9] blk-crypto: submit the encrypted bio in blk_crypto_fallback_bio_prep Christoph Hellwig
@ 2025-12-19 19:50 ` Eric Biggers
0 siblings, 0 replies; 21+ messages in thread
From: Eric Biggers @ 2025-12-19 19:50 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, linux-fsdevel, linux-fscrypt
On Wed, Dec 17, 2025 at 07:06:47AM +0100, Christoph Hellwig wrote:
> Restructure blk_crypto_fallback_bio_prep so that it always submits the
> encrypted bio instead of passing it back to the caller, which allows
> to simplify the calling conventions for blk_crypto_fallback_bio_prep and
> blk_crypto_bio_prep so that they never have to return a bio, and can
> use a true return value to indicate that the caller should submit the
> bio, and false that the blk-crypto code consumed it.
>
> The submission is handled by the on-stack bio list in the current
> task_struct by the block layer and does not cause additional stack
> usage or major overhead. It also prepares for the following optimization
> and fixes for the blk-crypto fallback write path.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> block/blk-core.c | 2 +-
> block/blk-crypto-fallback.c | 70 +++++++++++++++++--------------------
> block/blk-crypto-internal.h | 19 ++++------
> block/blk-crypto.c | 53 ++++++++++++++--------------
> 4 files changed, 67 insertions(+), 77 deletions(-)
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
- Eric
* Re: [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation
2025-12-17 6:06 ` [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation Christoph Hellwig
@ 2025-12-19 20:02 ` Eric Biggers
2025-12-19 20:25 ` Eric Biggers
2025-12-22 22:18 ` Christoph Hellwig
0 siblings, 2 replies; 21+ messages in thread
From: Eric Biggers @ 2025-12-19 20:02 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, linux-fsdevel, linux-fscrypt
On Wed, Dec 17, 2025 at 07:06:50AM +0100, Christoph Hellwig wrote:
> new_bio:
> - enc_bio = blk_crypto_alloc_enc_bio(src_bio, nr_segs);
> + enc_bio = blk_crypto_alloc_enc_bio(src_bio, nr_segs, &enc_pages);
> enc_idx = 0;
> for (;;) {
> struct bio_vec src_bv =
> bio_iter_iovec(src_bio, src_bio->bi_iter);
> - struct page *enc_page;
> + struct page *enc_page = enc_pages[enc_idx];
>
> - enc_page = mempool_alloc(blk_crypto_bounce_page_pool,
> - GFP_NOIO);
> __bio_add_page(enc_bio, enc_page, src_bv.bv_len,
> src_bv.bv_offset);
>
> @@ -246,10 +284,8 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
> /* Encrypt each data unit in this page */
> for (j = 0; j < src_bv.bv_len; j += data_unit_size) {
> blk_crypto_dun_to_iv(curr_dun, &iv);
> - if (crypto_skcipher_encrypt(ciph_req)) {
> - enc_idx++;
> - goto out_free_bounce_pages;
> - }
> + if (crypto_skcipher_encrypt(ciph_req))
> + goto out_free_enc_bio;
> bio_crypt_dun_increment(curr_dun, 1);
> src.offset += data_unit_size;
> dst.offset += data_unit_size;
> @@ -278,9 +314,9 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
> submit_bio(enc_bio);
> return;
>
> -out_free_bounce_pages:
> - while (enc_idx > 0)
> - mempool_free(enc_bio->bi_io_vec[--enc_idx].bv_page,
> +out_free_enc_bio:
> + for (enc_idx = 0; enc_idx < enc_bio->bi_max_vecs; enc_idx++)
> + mempool_free(enc_bio->bi_io_vec[enc_idx].bv_page,
> blk_crypto_bounce_page_pool);
> bio_put(enc_bio);
> cmpxchg(&src_bio->bi_status, 0, BLK_STS_IOERR);
The error handling at out_free_enc_bio is still broken, I'm afraid.
It's not taking into account that some of the pages may have been moved
into bvecs and some have not.
I think it needs something like the following:
diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 23e097197450..d6760404b76c 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -272,7 +272,7 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
for (;;) {
struct bio_vec src_bv =
bio_iter_iovec(src_bio, src_bio->bi_iter);
- struct page *enc_page = enc_pages[enc_idx];
+ struct page *enc_page;
if (!IS_ALIGNED(src_bv.bv_len | src_bv.bv_offset,
data_unit_size)) {
@@ -280,6 +280,7 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
goto out_free_enc_bio;
}
+ enc_page = enc_pages[enc_idx++];
__bio_add_page(enc_bio, enc_page, src_bv.bv_len,
src_bv.bv_offset);
@@ -305,7 +306,7 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
break;
nr_segs--;
- if (++enc_idx == enc_bio->bi_max_vecs) {
+ if (enc_idx == enc_bio->bi_max_vecs) {
/*
* For each additional encrypted bio submitted,
* increment the source bio's remaining count. Each
@@ -323,9 +324,11 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
return;
out_free_enc_bio:
- for (enc_idx = 0; enc_idx < enc_bio->bi_max_vecs; enc_idx++)
+ for (j = 0; j < enc_idx; j++)
mempool_free(enc_bio->bi_io_vec[j].bv_page,
blk_crypto_bounce_page_pool);
+ for (; j < enc_bio->bi_max_vecs; j++)
+ mempool_free(enc_pages[j], blk_crypto_bounce_page_pool);
bio_put(enc_bio);
bio_endio(src_bio);
}
* Re: [PATCH 5/9] blk-crypto: optimize bio splitting in blk_crypto_fallback_encrypt_bio
2025-12-17 6:06 ` [PATCH 5/9] blk-crypto: optimize bio splitting in blk_crypto_fallback_encrypt_bio Christoph Hellwig
@ 2025-12-19 20:08 ` Eric Biggers
2025-12-22 22:12 ` Christoph Hellwig
0 siblings, 1 reply; 21+ messages in thread
From: Eric Biggers @ 2025-12-19 20:08 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, linux-fsdevel, linux-fscrypt
On Wed, Dec 17, 2025 at 07:06:48AM +0100, Christoph Hellwig wrote:
> + if (++enc_idx == enc_bio->bi_max_vecs) {
> + /*
> + * For each additional encrypted bio submitted,
> + * increment the source bio's remaining count. Each
> + * encrypted bio's completion handler calls bio_endio on
> + * the source bio, so this keeps the source bio from
> + * completing until the last encrypted bio does.
> + */
> + bio_inc_remaining(src_bio);
> + submit_bio(enc_bio);
> + goto new_bio;
> + }
Actually I think using bi_max_vecs is broken.
This code assumes that bi_max_vecs matches the nr_segs that was passed
to bio_alloc_bioset().
That assumption is incorrect, though. If nr_segs > 0 && nr_segs <
BIO_INLINE_VECS, bio_alloc_bioset() sets bi_max_vecs to BIO_INLINE_VECS.
BIO_INLINE_VECS is 4.
I think blk_crypto_alloc_enc_bio() will need to return a nr_enc_pages
value. That value will need to be used above as well as at
out_free_enc_bio, instead of bi_max_vecs.
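Roughly, the relevant part of bio_alloc_bioset() is (paraphrased, not
verbatim kernel source):

	if (nr_vecs > BIO_INLINE_VECS) {
		/* allocate a separate bvec array with nr_vecs entries */
	} else if (nr_vecs) {
		/* small requests reuse the inline vecs, so bi_max_vecs
		 * ends up as BIO_INLINE_VECS (4), not nr_vecs */
		bio_init(bio, bdev, bio->bi_inline_vecs, BIO_INLINE_VECS, opf);
	}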
- Eric
* Re: [PATCH 8/9] blk-crypto: optimize data unit alignment checking
2025-12-17 6:06 ` [PATCH 8/9] blk-crypto: optimize data unit alignment checking Christoph Hellwig
@ 2025-12-19 20:14 ` Eric Biggers
0 siblings, 0 replies; 21+ messages in thread
From: Eric Biggers @ 2025-12-19 20:14 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, linux-fsdevel, linux-fscrypt
On Wed, Dec 17, 2025 at 07:06:51AM +0100, Christoph Hellwig wrote:
> Avoid the relatively high overhead of constructing and walking per-page
> segment bio_vecs for data unit alignment checking by merging the checks
> into existing loops.
>
> For hardware-supported crypto, perform the check in bio_split_io_at, which
> already contains a similar alignment check applied to all I/O. This
> means bio-based drivers that do not call bio_split_to_limits, should they
> ever grow blk-crypto support, need to implement the check themselves,
> just like for all other queue limits checks.
>
> For blk-crypto-fallback, do it in the encryption/decryption loops. This
> means alignment errors for decryption will only be detected after the I/O
> has completed, but that seems like a worthwhile trade-off.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> block/blk-crypto-fallback.c | 14 ++++++++++++--
> block/blk-crypto.c | 22 ----------------------
> block/blk-merge.c | 9 ++++++++-
> 3 files changed, 20 insertions(+), 25 deletions(-)
Reviewed-by: Eric Biggers <ebiggers@kernel.org>
- Eric
* Re: [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation
2025-12-19 20:02 ` Eric Biggers
@ 2025-12-19 20:25 ` Eric Biggers
2025-12-22 22:16 ` Christoph Hellwig
2025-12-22 22:18 ` Christoph Hellwig
1 sibling, 1 reply; 21+ messages in thread
From: Eric Biggers @ 2025-12-19 20:25 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, linux-fsdevel, linux-fscrypt
On Fri, Dec 19, 2025 at 12:02:44PM -0800, Eric Biggers wrote:
> On Wed, Dec 17, 2025 at 07:06:50AM +0100, Christoph Hellwig wrote:
> > new_bio:
> > - enc_bio = blk_crypto_alloc_enc_bio(src_bio, nr_segs);
> > + enc_bio = blk_crypto_alloc_enc_bio(src_bio, nr_segs, &enc_pages);
> > enc_idx = 0;
> > for (;;) {
> > struct bio_vec src_bv =
> > bio_iter_iovec(src_bio, src_bio->bi_iter);
> > - struct page *enc_page;
> > + struct page *enc_page = enc_pages[enc_idx];
> >
> > - enc_page = mempool_alloc(blk_crypto_bounce_page_pool,
> > - GFP_NOIO);
> > __bio_add_page(enc_bio, enc_page, src_bv.bv_len,
> > src_bv.bv_offset);
> >
> > @@ -246,10 +284,8 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
> > /* Encrypt each data unit in this page */
> > for (j = 0; j < src_bv.bv_len; j += data_unit_size) {
> > blk_crypto_dun_to_iv(curr_dun, &iv);
> > - if (crypto_skcipher_encrypt(ciph_req)) {
> > - enc_idx++;
> > - goto out_free_bounce_pages;
> > - }
> > + if (crypto_skcipher_encrypt(ciph_req))
> > + goto out_free_enc_bio;
> > bio_crypt_dun_increment(curr_dun, 1);
> > src.offset += data_unit_size;
> > dst.offset += data_unit_size;
> > @@ -278,9 +314,9 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
> > submit_bio(enc_bio);
> > return;
> >
> > -out_free_bounce_pages:
> > - while (enc_idx > 0)
> > - mempool_free(enc_bio->bi_io_vec[--enc_idx].bv_page,
> > +out_free_enc_bio:
> > + for (enc_idx = 0; enc_idx < enc_bio->bi_max_vecs; enc_idx++)
> > + mempool_free(enc_bio->bi_io_vec[enc_idx].bv_page,
> > blk_crypto_bounce_page_pool);
> > bio_put(enc_bio);
> > cmpxchg(&src_bio->bi_status, 0, BLK_STS_IOERR);
>
> The error handling at out_free_enc_bio is still broken, I'm afraid.
> It's not taking into account that some of the pages may have been moved
> into bvecs and some have not.
>
> I think it needs something like the following:
>
> diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
> index 23e097197450..d6760404b76c 100644
> --- a/block/blk-crypto-fallback.c
> +++ b/block/blk-crypto-fallback.c
> @@ -272,7 +272,7 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
> for (;;) {
> struct bio_vec src_bv =
> bio_iter_iovec(src_bio, src_bio->bi_iter);
> - struct page *enc_page = enc_pages[enc_idx];
> + struct page *enc_page;
>
> if (!IS_ALIGNED(src_bv.bv_len | src_bv.bv_offset,
> data_unit_size)) {
> @@ -280,6 +280,7 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
> goto out_free_enc_bio;
> }
>
> + enc_page = enc_pages[enc_idx++];
> __bio_add_page(enc_bio, enc_page, src_bv.bv_len,
> src_bv.bv_offset);
>
> @@ -305,7 +306,7 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
> break;
>
> nr_segs--;
> - if (++enc_idx == enc_bio->bi_max_vecs) {
> + if (enc_idx == enc_bio->bi_max_vecs) {
> /*
> * For each additional encrypted bio submitted,
> * increment the source bio's remaining count. Each
> @@ -323,9 +324,11 @@ static void __blk_crypto_fallback_encrypt_bio(struct bio *src_bio,
> return;
>
> out_free_enc_bio:
> - for (enc_idx = 0; enc_idx < enc_bio->bi_max_vecs; enc_idx++)
> + for (j = 0; j < enc_idx; j++)
> mempool_free(enc_bio->bi_io_vec[j].bv_page,
> blk_crypto_bounce_page_pool);
> + for (; j < enc_bio->bi_max_vecs; j++)
> + mempool_free(enc_pages[j], blk_crypto_bounce_page_pool);
> bio_put(enc_bio);
> bio_endio(src_bio);
> }
Also, this shows that the decrement of 'nr_segs' is a bit out-of-place
(as was 'enc_idx'). nr_segs is used only when allocating a bio, so it
could be decremented only when starting a new one:
submit_bio(enc_bio);
nr_segs -= nr_enc_pages;
goto new_bio;
- Eric
* Re: [PATCH 5/9] blk-crypto: optimize bio splitting in blk_crypto_fallback_encrypt_bio
2025-12-19 20:08 ` Eric Biggers
@ 2025-12-22 22:12 ` Christoph Hellwig
0 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-22 22:12 UTC (permalink / raw)
To: Eric Biggers
Cc: Christoph Hellwig, Jens Axboe, linux-block, linux-fsdevel,
linux-fscrypt
On Fri, Dec 19, 2025 at 12:08:37PM -0800, Eric Biggers wrote:
> Actually I think using bi_max_vecs is broken.
>
> This code assumes that bi_max_vecs matches the nr_segs that was passed
> to bio_alloc_bioset().
>
> That assumption is incorrect, though. If nr_segs > 0 && nr_segs <
> BIO_INLINE_VECS, bio_alloc_bioset() sets bi_max_vecs to BIO_INLINE_VECS.
> BIO_INLINE_VECS is 4.
>
> I think blk_crypto_alloc_enc_bio() will need to return a nr_enc_pages
> value. That value will need to be used above as well as at
> out_free_enc_bio, instead of bi_max_vecs.
A bigger bi_max_vecs should not be a problem as we still have a terminating
condition based on the source bio iterator. That being said I agree
it is not very nice, and I've reworked the code to keep a variable
counting the segments in the encrypted bio.
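Roughly (sketch only, variable name made up):

	/* remember how many bounce pages this encrypted bio was sized for
	 * instead of trusting the possibly rounded-up bi_max_vecs */
	if (enc_idx == enc_bio_nr_segs) {
		bio_inc_remaining(src_bio);
		submit_bio(enc_bio);
		goto new_bio;
	}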
* Re: [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation
2025-12-19 20:25 ` Eric Biggers
@ 2025-12-22 22:16 ` Christoph Hellwig
0 siblings, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-22 22:16 UTC (permalink / raw)
To: Eric Biggers
Cc: Christoph Hellwig, Jens Axboe, linux-block, linux-fsdevel,
linux-fscrypt
On Fri, Dec 19, 2025 at 12:25:33PM -0800, Eric Biggers wrote:
>
> Also, this shows that the decrement of 'nr_segs' is a bit out-of-place
> (as was 'enc_idx'). nr_segs is used only when allocating a bio, so it
> could be decremented only when starting a new one:
>
> submit_bio(enc_bio);
> nr_segs -= nr_enc_pages;
> goto new_bio;
I've just killed nr_segs entirely. While bio_segments() is a bit
expensive, its cost is completely dwarfed by encrypting the entire bio.
This simplifies things quite a bit.
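I.e., roughly (sketch, not the final code):

	/* size each encrypted bio from whatever is left in the source
	 * iterator instead of carrying nr_segs around */
	enc_bio = blk_crypto_alloc_enc_bio(src_bio, bio_segments(src_bio),
					   &enc_pages);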
* Re: [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation
2025-12-19 20:02 ` Eric Biggers
2025-12-19 20:25 ` Eric Biggers
@ 2025-12-22 22:18 ` Christoph Hellwig
1 sibling, 0 replies; 21+ messages in thread
From: Christoph Hellwig @ 2025-12-22 22:18 UTC (permalink / raw)
To: Eric Biggers
Cc: Christoph Hellwig, Jens Axboe, linux-block, linux-fsdevel,
linux-fscrypt
On Fri, Dec 19, 2025 at 12:02:44PM -0800, Eric Biggers wrote:
> The error handling at out_free_enc_bio is still broken, I'm afraid.
> It's not taking into account that some of the pages may have been moved
> into bvecs and some have not.
>
> I think it needs something like the following:
That will now leak the pages that were successfully added to the bio.
I ended up with a version that just adds the pages to the bio even
on failure. I've pushed the branch here:
https://git.infradead.org/?p=users/hch/misc.git;a=shortlog;h=refs/heads/blk-crypto-fallback
but I plan to come up with error injection to actually test this
patch given the amount of trouble it caused.
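The idea is roughly this (sketch, not the pushed code):

	out_free_enc_bio:
		/* with every bounce page added to enc_bio even on failure, the
		 * error path can free them uniformly through the bio's bvecs */
		for (i = 0; i < enc_bio->bi_vcnt; i++)
			mempool_free(enc_bio->bi_io_vec[i].bv_page,
				     blk_crypto_bounce_page_pool);
		bio_put(enc_bio);
		bio_endio(src_bio);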
Thread overview: 21+ messages
2025-12-17 6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
2025-12-17 6:06 ` [PATCH 1/9] fscrypt: pass a real sector_t to fscrypt_zeroout_range_inline_crypt Christoph Hellwig
2025-12-17 6:06 ` [PATCH 2/9] fscrypt: keep multiple bios in flight in fscrypt_zeroout_range_inline_crypt Christoph Hellwig
2025-12-17 6:06 ` [PATCH 3/9] blk-crypto: add a bio_crypt_ctx() helper Christoph Hellwig
2025-12-19 19:50 ` Eric Biggers
2025-12-17 6:06 ` [PATCH 4/9] blk-crypto: submit the encrypted bio in blk_crypto_fallback_bio_prep Christoph Hellwig
2025-12-19 19:50 ` Eric Biggers
2025-12-17 6:06 ` [PATCH 5/9] blk-crypto: optimize bio splitting in blk_crypto_fallback_encrypt_bio Christoph Hellwig
2025-12-19 20:08 ` Eric Biggers
2025-12-22 22:12 ` Christoph Hellwig
2025-12-17 6:06 ` [PATCH 6/9] blk-crypto: use on-stack skcipher requests for fallback en/decryption Christoph Hellwig
2025-12-17 6:06 ` [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation Christoph Hellwig
2025-12-19 20:02 ` Eric Biggers
2025-12-19 20:25 ` Eric Biggers
2025-12-22 22:16 ` Christoph Hellwig
2025-12-22 22:18 ` Christoph Hellwig
2025-12-17 6:06 ` [PATCH 8/9] blk-crypto: optimize data unit alignment checking Christoph Hellwig
2025-12-19 20:14 ` Eric Biggers
2025-12-17 6:06 ` [PATCH 9/9] blk-crypto: handle the fallback above the block layer Christoph Hellwig