From: Keith Busch <kbusch@kernel.org>
To: Mike Snitzer <snitzer@kernel.org>
Cc: Keith Busch <kbusch@meta.com>,
dm-devel@lists.linux.dev, linux-block@vger.kernel.org,
mpatocka@redhat.com, ebiggers@google.com
Subject: Re: [RFC PATCH] dm-crypt: allow unaligned bio_vecs for direct io
Date: Thu, 18 Sep 2025 14:52:21 -0600
Message-ID: <aMxxBWQAL8ws8pGa@kbusch-mbp>
In-Reply-To: <aMxrIjcFqaT2WztN@kernel.org>

On Thu, Sep 18, 2025 at 04:27:14PM -0400, Mike Snitzer wrote:
> On Thu, Sep 18, 2025 at 09:16:42AM -0700, Keith Busch wrote:
> > From: Keith Busch <kbusch@kernel.org>
> >
> > Most storage devices can handle DMA for data that is not aligned to the
> > sector block size. The block and filesystem layers have introduced
> > updates to allow that kind of memory alignment flexibility when
> > possible.
>
> I'd love to understand what changes in filesystems you're referring
> to. Because I know for certain that DIO with memory that isn't
> 'dma_alignment' aligned fails with certainty ontop of XFS.

I only mentioned the "sector block size" alignment, not the hardware dma
alignment. The dma alignment remains the minimum address alignment you
have to use. But xfs has been able to handle dword-aligned addresses for
a while now, assuming your block_device can handle dword-aligned DMA.
The old requirement that a buffer be aligned to a 4k address offset for
a 4k block device isn't necessary anymore.
> Pretty certain it balks at DIO that isn't logical_block_size aligned
> ondisk too.

We might be talking about different things. The total length of a
vector must be a multiple of the logical block size, yes. But I'm
talking about the address offset. Right now dm-crypt can't handle a
request if the address offset is not aligned to the logical block size.
That's a purely software-created limitation; there's no hardware reason
it needs to be the case.
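The distinction above can be sketched as two independent checks (the
function and parameter names here are invented for illustration, not
taken from the kernel):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* The buffer ADDRESS only needs the device's dma_alignment, while the
 * total LENGTH must still be a multiple of the logical block size. */
static bool dio_ok(uintptr_t addr, size_t len, size_t dma_align, size_t lbs)
{
    return addr % dma_align == 0 && /* address: hardware dma_alignment */
           len % lbs == 0;          /* length: logical block multiple  */
}
```

For example, dio_ok(0x1004, 4096, 4, 512) holds even though 0x1004 is
not aligned to the 512-byte logical block size.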
> > it sends a single scatterlist element for the input to the encrypt and
> > decrypt algorithms. This forces applications that have unaligned data to
> > copy through a bounce buffer, increasing CPU and memory utilization.
>
> Even this notion that an application is somehow able to (unwittingly)
> lean on "unaligned data to copy through a bounce buffer" -- has me
> asking: where does Keith get these wonderful toys?

I'm just trying to write data to disk from buffers filled by NICs using
io_uring's zero-copy receive capabilities. I guess they need fancy
features to do that, but it's not that uncommon, is it? Anyway, the data
that needs to be persisted often has offsets that are still DMA
friendly, but unlikely to be perfectly page aligned.
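As a purely illustrative sketch (not the patch's actual code) of the
bookkeeping such buffers imply: once a buffer no longer starts on a
block-size boundary, a single crypto sector can straddle a page
boundary and needs two memory segments instead of one. The helper below
is hypothetical; `offset` is the sector's starting byte offset in
memory.

```c
#include <stddef.h>

/* How many memory segments does one crypto sector need? With the old
 * page/block-aligned rule this is always 1; an unaligned buffer may
 * place a sector across a page boundary, requiring 2. */
static unsigned segs_per_sector(size_t offset, size_t sector, size_t page)
{
    size_t start = offset % page;

    return (start != 0 && start + sector > page) ? 2 : 1;
}
```

With 4k pages and 512-byte sectors, a page-aligned buffer always yields
one segment per sector, while a sector starting at offset 3840 spills
into the next page and yields two.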
Thread overview: 6+ messages
2025-09-18 16:16 [RFC PATCH] dm-crypt: allow unaligned bio_vecs for direct io Keith Busch
2025-09-18 20:13 ` Keith Busch
2025-09-26 14:19 ` Mikulas Patocka
2025-09-26 16:17 ` Keith Busch
2025-09-18 20:27 ` Mike Snitzer
2025-09-18 20:52 ` Keith Busch [this message]