linux-block.vger.kernel.org archive mirror
* move blk-crypto-fallback to sit above the block layer v3
@ 2025-12-17  6:06 Christoph Hellwig
  2025-12-17  6:06 ` [PATCH 1/9] fscrypt: pass a real sector_t to fscrypt_zeroout_range_inline_crypt Christoph Hellwig
                   ` (8 more replies)
  0 siblings, 9 replies; 24+ messages in thread
From: Christoph Hellwig @ 2025-12-17  6:06 UTC (permalink / raw)
  To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt

Hi all,

in the past we had various discussions about how running the blk-crypto
fallback below the block layer causes all kinds of problems, due to very
late bio splitting and the need to communicate features back up the stack.

This series turns that call chain upside down by requiring callers to
enter blk-crypto through a new submit_bio wrapper instead, so that only
hardware encryption bios are passed through the block layer as such.

While doing this I also noticed that the existing blk-crypto-fallback
code does various unprotected memory allocations; this series converts
them to mempools, or from loops of mempool allocations to the new safe
batch mempool allocator.

There might be future avenues for optimization by using high-order
folio allocations that match the file system's preferred folio size,
but for that we'd probably want a batch folio allocator first, so it
is deferred to avoid scope creep.

TODO:

 - what to pass to mempool_alloc bulk (patch)
 - bio_has_crypt_ctx wrapper??

Changes since v2:
 - drop the block split refactoring that was broken
 - add a bio_crypt_ctx() helper
 - add a missing bio_endio in blk_crypto_fallback_bio_prep
 - fix page freeing in the error path of
   __blk_crypto_fallback_encrypt_bio
 - improve a few comments and commit messages

Changes since v1:
 - drop the mempool bulk allocator that was merged upstream
 - keep calling bio_crypt_check_alignment for the hardware crypto case
 - rework the way bios are submitted earlier and reorder the series
   a bit to suit this
 - use struct initializers for struct fscrypt_zero_done in
   fscrypt_zeroout_range_inline_crypt
 - use cmpxchg to make the bi_status update in
   blk_crypto_fallback_encrypt_endio safe
 - rename the bio_set to match its new purpose
 - remove usage of DECLARE_CRYPTO_WAIT()
 - use consistent GFP flags / scope
 - optimize data unit alignment checking
 - update Documentation/block/inline-encryption.rst for the new
   blk_crypto_submit_bio API
 - optimize alignment checking and ensure it still happens for
   hardware encryption
 - reorder the series a bit
 - improve various comments

Diffstat:
 Documentation/block/inline-encryption.rst |    6 
 block/blk-core.c                          |   10 
 block/blk-crypto-fallback.c               |  428 ++++++++++++++----------------
 block/blk-crypto-internal.h               |   30 --
 block/blk-crypto.c                        |   78 +----
 block/blk-merge.c                         |    9 
 fs/buffer.c                               |    3 
 fs/crypto/bio.c                           |   91 +++---
 fs/ext4/page-io.c                         |    3 
 fs/ext4/readpage.c                        |    9 
 fs/f2fs/data.c                            |    4 
 fs/f2fs/file.c                            |    3 
 fs/iomap/direct-io.c                      |    3 
 include/linux/blk-crypto.h                |   32 ++
 14 files changed, 369 insertions(+), 340 deletions(-)

^ permalink raw reply	[flat|nested] 24+ messages in thread
* move blk-crypto-fallback to sit above the block layer v4
@ 2026-01-06  7:36 Christoph Hellwig
  2026-01-06  7:36 ` [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation Christoph Hellwig
  0 siblings, 1 reply; 24+ messages in thread
From: Christoph Hellwig @ 2026-01-06  7:36 UTC (permalink / raw)
  To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt

Hi all,

in the past we had various discussions about how running the blk-crypto
fallback below the block layer causes all kinds of problems, due to very
late bio splitting and the need to communicate features back up the stack.

This series turns that call chain upside down by requiring callers to
enter blk-crypto through a new submit_bio wrapper instead, so that only
hardware encryption bios are passed through the block layer as such.

While doing this I also noticed that the existing blk-crypto-fallback
code does various unprotected memory allocations; this series converts
them to mempools, or from loops of mempool allocations to the new safe
batch mempool allocator.

There might be future avenues for optimization by using high-order
folio allocations that match the file system's preferred folio size,
but for that we'd probably want a batch folio allocator first, so it
is deferred to avoid scope creep.

Changes since v3:
 - track the number of pages in each encrypted bio explicitly and drop
   nr_segs, instead of the slightly more expensive but simpler
   recalculation for each encrypted bio

Changes since v2:
 - drop the block split refactoring that was broken
 - add a bio_crypt_ctx() helper
 - add a missing bio_endio in blk_crypto_fallback_bio_prep
 - fix page freeing in the error path of
   __blk_crypto_fallback_encrypt_bio
 - improve a few comments and commit messages

Changes since v1:
 - drop the mempool bulk allocator that was merged upstream
 - keep calling bio_crypt_check_alignment for the hardware crypto case
 - rework the way bios are submitted earlier and reorder the series
   a bit to suit this
 - use struct initializers for struct fscrypt_zero_done in
   fscrypt_zeroout_range_inline_crypt
 - use cmpxchg to make the bi_status update in
   blk_crypto_fallback_encrypt_endio safe
 - rename the bio_set to match its new purpose
 - remove usage of DECLARE_CRYPTO_WAIT()
 - use consistent GFP flags / scope
 - optimize data unit alignment checking
 - update Documentation/block/inline-encryption.rst for the new
   blk_crypto_submit_bio API
 - optimize alignment checking and ensure it still happens for
   hardware encryption
 - reorder the series a bit
 - improve various comments

Diffstat:
 Documentation/block/inline-encryption.rst |    6 
 block/blk-core.c                          |   10 
 block/blk-crypto-fallback.c               |  447 +++++++++++++++---------------
 block/blk-crypto-internal.h               |   30 --
 block/blk-crypto.c                        |   78 +----
 block/blk-merge.c                         |    9 
 fs/buffer.c                               |    3 
 fs/crypto/bio.c                           |   91 +++---
 fs/ext4/page-io.c                         |    3 
 fs/ext4/readpage.c                        |    9 
 fs/f2fs/data.c                            |    4 
 fs/f2fs/file.c                            |    3 
 fs/iomap/direct-io.c                      |    3 
 include/linux/blk-crypto.h                |   32 ++
 14 files changed, 386 insertions(+), 342 deletions(-)

^ permalink raw reply	[flat|nested] 24+ messages in thread
* move blk-crypto-fallback to sit above the block layer v2
@ 2025-12-10 15:23 Christoph Hellwig
  2025-12-10 15:23 ` [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation Christoph Hellwig
  0 siblings, 1 reply; 24+ messages in thread
From: Christoph Hellwig @ 2025-12-10 15:23 UTC (permalink / raw)
  To: Jens Axboe, Eric Biggers; +Cc: linux-block, linux-fsdevel, linux-fscrypt

Hi all,

in the past we had various discussions about how running the blk-crypto
fallback below the block layer causes all kinds of problems, due to very
late bio splitting and the need to communicate features back up the stack.

This series turns that call chain upside down by requiring callers to
enter blk-crypto through a new submit_bio wrapper instead, so that only
hardware encryption bios are passed through the block layer as such.

While doing this I also noticed that the existing blk-crypto-fallback
code does various unprotected memory allocations; this series converts
them to mempools, or from loops of mempool allocations to the new safe
batch mempool allocator.

There might be future avenues for optimization by using high-order
folio allocations that match the file system's preferred folio size,
but for that we'd probably want a batch folio allocator first, so it
is deferred to avoid scope creep.

Changes since v1: 
 - drop the mempool bulk allocator that was merged upstream
 - keep calling bio_crypt_check_alignment for the hardware crypto case
 - rework the way bios are submitted earlier and reorder the series
   a bit to suit this
 - use struct initializers for struct fscrypt_zero_done in
   fscrypt_zeroout_range_inline_crypt
 - use cmpxchg to make the bi_status update in
   blk_crypto_fallback_encrypt_endio safe
 - rename the bio_set to match its new purpose
 - remove usage of DECLARE_CRYPTO_WAIT()
 - use consistent GFP flags / scope
 - optimize data unit alignment checking
 - update Documentation/block/inline-encryption.rst for the new
   blk_crypto_submit_bio API
 - optimize alignment checking and ensure it still happens for
   hardware encryption
 - reorder the series a bit
 - improve various comments

Diffstat:
 Documentation/block/inline-encryption.rst |    6 
 block/blk-core.c                          |   10 
 block/blk-crypto-fallback.c               |  424 ++++++++++++++----------------
 block/blk-crypto-internal.h               |   30 --
 block/blk-crypto.c                        |   78 +----
 block/blk-map.c                           |    2 
 block/blk-merge.c                         |   19 -
 fs/btrfs/bio.c                            |    2 
 fs/buffer.c                               |    3 
 fs/crypto/bio.c                           |   91 +++---
 fs/ext4/page-io.c                         |    3 
 fs/ext4/readpage.c                        |    9 
 fs/f2fs/data.c                            |    4 
 fs/f2fs/file.c                            |    3 
 fs/iomap/direct-io.c                      |    3 
 fs/iomap/ioend.c                          |    2 
 fs/xfs/xfs_zone_gc.c                      |    2 
 include/linux/bio.h                       |    2 
 include/linux/blk-crypto.h                |   22 +
 include/linux/blkdev.h                    |    7 
 20 files changed, 367 insertions(+), 355 deletions(-)

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2026-01-06  7:39 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-17  6:06 move blk-crypto-fallback to sit above the block layer v3 Christoph Hellwig
2025-12-17  6:06 ` [PATCH 1/9] fscrypt: pass a real sector_t to fscrypt_zeroout_range_inline_crypt Christoph Hellwig
2025-12-17  6:06 ` [PATCH 2/9] fscrypt: keep multiple bios in flight in fscrypt_zeroout_range_inline_crypt Christoph Hellwig
2025-12-17  6:06 ` [PATCH 3/9] blk-crypto: add a bio_crypt_ctx() helper Christoph Hellwig
2025-12-19 19:50   ` Eric Biggers
2025-12-17  6:06 ` [PATCH 4/9] blk-crypto: submit the encrypted bio in blk_crypto_fallback_bio_prep Christoph Hellwig
2025-12-19 19:50   ` Eric Biggers
2025-12-17  6:06 ` [PATCH 5/9] blk-crypto: optimize bio splitting in blk_crypto_fallback_encrypt_bio Christoph Hellwig
2025-12-19 20:08   ` Eric Biggers
2025-12-22 22:12     ` Christoph Hellwig
2025-12-17  6:06 ` [PATCH 6/9] blk-crypto: use on-stack skcipher requests for fallback en/decryption Christoph Hellwig
2025-12-17  6:06 ` [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation Christoph Hellwig
2025-12-19 20:02   ` Eric Biggers
2025-12-19 20:25     ` Eric Biggers
2025-12-22 22:16       ` Christoph Hellwig
2025-12-22 22:18     ` Christoph Hellwig
2026-01-06  7:39       ` Christoph Hellwig
2025-12-17  6:06 ` [PATCH 8/9] blk-crypto: optimize data unit alignment checking Christoph Hellwig
2025-12-19 20:14   ` Eric Biggers
2025-12-17  6:06 ` [PATCH 9/9] blk-crypto: handle the fallback above the block layer Christoph Hellwig
  -- strict thread matches above, loose matches on Subject: below --
2026-01-06  7:36 move blk-crypto-fallback to sit above the block layer v4 Christoph Hellwig
2026-01-06  7:36 ` [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation Christoph Hellwig
2025-12-10 15:23 move blk-crypto-fallback to sit above the block layer v2 Christoph Hellwig
2025-12-10 15:23 ` [PATCH 7/9] blk-crypto: use mempool_alloc_bulk for encrypted bio page allocation Christoph Hellwig
2025-12-13  1:21   ` Eric Biggers
2025-12-15  6:01     ` Christoph Hellwig

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).