From: "Darrick J. Wong" <djwong@kernel.org>
To: Namjae Jeon <linkinjeon@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>,
hyc.lee@gmail.com, linux-fsdevel@vger.kernel.org,
Christian Brauner <brauner@kernel.org>,
linux-xfs@vger.kernel.org, Gao Xiang <xiang@kernel.org>
Subject: Re: [PATCH] ntfs: use page allocation for resident attribute inline data
Date: Wed, 22 Apr 2026 08:28:32 -0700 [thread overview]
Message-ID: <20260422152832.GR7727@frogsfrogsfrogs> (raw)
In-Reply-To: <CAKYAXd-RQrrpXH0R_Y87r53h0-KCDGVOtm5RaqTGETMEq-LYpg@mail.gmail.com>
On Wed, Apr 22, 2026 at 11:35:32PM +0900, Namjae Jeon wrote:
> On Wed, Apr 22, 2026 at 9:55 PM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Wed, Apr 22, 2026 at 07:46:27PM +0900, Namjae Jeon wrote:
> > > The current kmemdup() based allocation for IOMAP_INLINE can result in
> > > inline_data pointer having a non-zero page offset. This causes
> > > iomap_inline_data_valid() to fail the check:
> > >
> > > iomap->length <= PAGE_SIZE - offset_in_page(iomap->inline_data)
> > >
> > > and triggers the kernel BUG at fs/iomap/buffered-io.c:1061.
> >
> > Hang on, hang on, hang on.
> >
> > First, maybe this check is too strict. I'm sure it's true for EROFS,
> > but I don't see why it should be true for everybody. Perhaps we should
> > delete this check or relax it?
> I agree that the current check might be unnecessarily strict for
> general cases. I will prepare another patch to remove this check and
> discuss it further with the iomap maintainers.
> >
> > Second, why are you calling kmemdup() to begin with? This seems
> > entirely pointless; the iomap code is going to call memcpy() on it.
> > You're supposed to just be pointing into your data structures.
> In the initial implementation of NTFS with iomap, I pointed directly
> to the internal data structures. However, I encountered this BUG_ON
> trap during testing, so I switched to page allocation to avoid it.
> Then, during the review process for the NTFS series, I changed it to
> kmemdup() without much thought. If this BUG_ON trap can be removed, I
> can simply point to the internal data structures as you said.
I think the check is wrong. We rely on the filesystem to point
iomap::inline_data to kernel memory that is at least iomap::length bytes
in size. If that crosses a PAGE_SIZE boundary that's fine, so long as
the caller actually mapped that much memory. IOWs, if you have an
iomap:
{pos = 0, inline_data = 0xB0000, length = 32768, ...}
then we trust that you really did map all of the MDA text mode memory
and that memcpy'ing 100 bytes to pos 4090 is ok.
(Perhaps this is a relic of the bs<=ps days?)
--D
Thread overview: 4+ messages in thread
[not found] <20260422104627.8193-1-linkinjeon@kernel.org>
2026-04-22 12:55 ` [PATCH] ntfs: use page allocation for resident attribute inline data Matthew Wilcox
2026-04-22 14:35 ` Namjae Jeon
2026-04-22 15:28 ` Darrick J. Wong [this message]
2026-04-22 15:36 ` Gao Xiang