public inbox for linux-fsdevel@vger.kernel.org
* [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations
@ 2026-02-20 12:59 Nanzhe Zhao
  2026-02-20 15:48 ` Christoph Hellwig
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Nanzhe Zhao @ 2026-02-20 12:59 UTC (permalink / raw)
  To: lsf-pc
  Cc: linux-fsdevel, Christoph Hellwig, willy, yi.zhang, jaegeuk,
	Chao Yu, Barry Song, wqu

Large folios can reduce per-page overhead and improve throughput for large buffered I/O, but enabling them in filesystems is not a mechanical “page → folio” conversion. The core difficulty is preserving correctness and performance when a folio must maintain subrange state, while existing filesystem code paths and the iomap buffered I/O framework make different assumptions about state tracking, locking lifetime, block mapping, and writeback semantics.

This session proposes a cross-filesystem discussion around two directions that are actively being explored:

Iomap approach: adopt iomap buffered I/O paths and benefit from iomap-style subrange folio state machinery. However, much of this machinery lives as static helpers inside iomap’s implementation (e.g., in buffered-io.c) and is not available as a reusable API, which pushes filesystems toward re-implementing similar logic. Moreover, iomap’s per-folio state relies on folio-private metadata storage, which can clash with filesystem-specific folio-private usage.
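
To make the state-tracking question concrete, here is a minimal userspace sketch of the per-block bitmap idea (all names here are hypothetical; the real machinery is the private struct iomap_folio_state in fs/iomap/buffered-io.c, which also tracks uptodate state and keeps the bitmap behind the folio's private pointer):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative model only: one state bit per fs block, e.g. 512 bits
 * for a 2MB folio with 4K blocks. This is what lets a sub-folio write
 * avoid dirtying (and later writing back) the whole folio. */
#define MODEL_MAX_BLOCKS 512

struct folio_state_model {
	unsigned int nr_blocks;			/* folio_size / block_size */
	uint64_t dirty[MODEL_MAX_BLOCKS / 64];	/* one dirty bit per block */
};

static void model_init(struct folio_state_model *s, unsigned int nr_blocks)
{
	s->nr_blocks = nr_blocks;
	memset(s->dirty, 0, sizeof(s->dirty));
}

/* Mark blocks [first, last] dirty after a sub-folio buffered write. */
static void model_set_dirty(struct folio_state_model *s,
			    unsigned int first, unsigned int last)
{
	for (unsigned int i = first; i <= last && i < s->nr_blocks; i++)
		s->dirty[i / 64] |= 1ULL << (i % 64);
}

/* Writeback only needs to touch the dirty blocks: this is the
 * write-amplification win over whole-folio dirtying. */
static unsigned int model_count_dirty(const struct folio_state_model *s)
{
	unsigned int n = 0;

	for (unsigned int i = 0; i < s->nr_blocks; i++)
		if (s->dirty[i / 64] & (1ULL << (i % 64)))
			n++;
	return n;
}
```

A 16K write that dirties 4 of 512 blocks lets writeback skip the other 508; the clash mentioned above is that the real bitmap lives in folio-private storage, the same slot many filesystems already use for their own metadata.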


Native fs approach: keep native buffered I/O paths and implement filesystem-specific folio_state tracking and helpers to avoid whole-folio dirtying/write amplification and to match filesystem-private metadata (e.g., private flags). This avoids some iomap integration constraints and preserves filesystem-specific optimizations, but it increases filesystem-local complexity and long-term maintenance cost.


Using f2fs as a concrete instance (log-structured, indirect-pointer mapping, private folio flags), this session consolidates two recurring issues relevant across filesystems:

Per-folio state tracking: iomap subrange-state API exposure vs filesystem-local solution.
COW writeback support: minimal iomap extensions vs filesystem-local writeback for COW paths.

The goal is to converge on recommended design patterns and actionable next steps for f2fs/ext4/btrfs/others to enable large folios without correctness risks or performance regressions.

Best regards,
Nanzhe Zhao

Related Patches for Large Folios:

f2fs:
- https://lore.kernel.org/all/20250813092131.44762-1-nzzhao@126.com/
- https://lore.kernel.org/linux-f2fs-devel/20251120235446.1947532-1-jaegeuk@kernel.org/
- https://lore.kernel.org/linux-f2fs-devel/20260203091256.854842-1-nzzhao@126.com/

ext4:
- https://lore.kernel.org/all/20250512063319.3539411-1-yi.zhang@huaweicloud.com/

btrfs:
- https://lore.kernel.org/all/676154e5415d8d15499fb8c02b0eabbb1c6cef26.1745403878.git.wqu@suse.com/

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations
  2026-02-20 12:59 [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations Nanzhe Zhao
@ 2026-02-20 15:48 ` Christoph Hellwig
  2026-02-20 18:40   ` Matthew Wilcox
  2026-02-20 17:07 ` [Lsf-pc] " Jan Kara
  2026-02-23  8:34 ` Iomap and compression? (Was "Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations") Qu Wenruo
  2 siblings, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2026-02-20 15:48 UTC (permalink / raw)
  To: Nanzhe Zhao
  Cc: lsf-pc, linux-fsdevel, Christoph Hellwig, willy, yi.zhang,
	jaegeuk, Chao Yu, Barry Song, wqu

Maybe this catches you on the wrong foot, but this pisses me off.  I've
been telling you guys to please actually fricking try converting f2fs to
iomap, and it's been constantly ignored.

And arguments about "log structured filesystems" here are BS.  iomap
has supported out of place writes for XFS since 2016, which is
right after iomap.c was created, and long before the non-buffer_head
buffered I/O path existed.  With zoned XFS we have a user that writes
data in a purely log structured fashion, using zone append in a way where
we don't even know the location at submission time.  So no, that's not
an argument.  f2fs being really weird is one, but we've usually been
trying to accommodate it; for that, everyone actually needs to
understand what it's trying to do.



* Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations
  2026-02-20 12:59 [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations Nanzhe Zhao
  2026-02-20 15:48 ` Christoph Hellwig
@ 2026-02-20 17:07 ` Jan Kara
  2026-02-23  8:34 ` Iomap and compression? (Was "Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations") Qu Wenruo
  2 siblings, 0 replies; 13+ messages in thread
From: Jan Kara @ 2026-02-20 17:07 UTC (permalink / raw)
  To: Nanzhe Zhao
  Cc: lsf-pc, linux-fsdevel, Christoph Hellwig, willy, yi.zhang,
	jaegeuk, Chao Yu, Barry Song, wqu

On Fri 20-02-26 20:59:38, Nanzhe Zhao wrote:
> The goal is to converge on recommended design patterns and actionable
> next steps for f2fs/ext4/btrfs/others to enable large folios without
> correctness risks or performance regressions.

Just for the record: ext4 already supports large folios for buffered IO
(currently using buffer heads). We are in the process of converting the
buffered IO paths to iomap (Zhang Yi is working on that), and although they
do face some challenges with lock ordering (mostly due to how jbd2
journalling is done), they seem solvable. So at this point I don't think
ext4 needs any changes on the iomap side to be able to use it.

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations
  2026-02-20 15:48 ` Christoph Hellwig
@ 2026-02-20 18:40   ` Matthew Wilcox
  2026-02-23 21:36     ` Jaegeuk Kim
  0 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox @ 2026-02-20 18:40 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Nanzhe Zhao, lsf-pc, linux-fsdevel, yi.zhang, jaegeuk, Chao Yu,
	Barry Song, wqu

On Fri, Feb 20, 2026 at 07:48:39AM -0800, Christoph Hellwig wrote:
> Maybe you catch on the wrong foot, but this pisses me off.  I've been
> telling you guys to please actually fricking try converting f2fs to
> iomap, and it's been constantly ignored.

Christoph isn't alone here.  There's a consistent pattern of f2fs going
off and doing weird shit without talking to anyone else.  A good start
would be f2fs maintainers actually coming to LSFMM, but a lot more design
decisions need to be cc'd to linux-fsdevel.


* Iomap and compression? (Was "Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations")
  2026-02-20 12:59 [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations Nanzhe Zhao
  2026-02-20 15:48 ` Christoph Hellwig
  2026-02-20 17:07 ` [Lsf-pc] " Jan Kara
@ 2026-02-23  8:34 ` Qu Wenruo
  2026-02-23 13:06   ` Christoph Hellwig
  2 siblings, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2026-02-23  8:34 UTC (permalink / raw)
  To: Nanzhe Zhao, lsf-pc
  Cc: linux-fsdevel, Christoph Hellwig, willy, yi.zhang, jaegeuk,
	Chao Yu, Barry Song



On 2026/2/20 23:29, Nanzhe Zhao wrote:
> Large folios can reduce per-page overhead and improve throughput for large buffered I/O, but enabling them in filesystems is not a mechanical “page → folio” conversion. The core difficulty is preserving correctness and performance when a folio must maintain subrange state, while existing filesystem code paths and the iomap buffered I/O framework make different assumptions about state tracking, locking lifetime, block mapping, and writeback semantics.
> 
> This session proposes a cross-filesystem discussion around two directions that are actively being explored:
> 
> Iomap approach: adopt iomap buffered I/O paths and benefit from iomap-style subrange folio state machinery. However, much of this machinery lives as static helpers inside iomap’s implementation (e.g., in buffered-io.c) and is not available as a reusable API, which pushes filesystems toward re-implementing similar logic. Moreover, iomap’s per-folio state relies on folio-private metadata storage, which can clash with filesystem-specific folio-private usage.
> 
> 
> Native fs approach: keep native buffered I/O paths and implement filesystem-specific folio_state tracking and helpers to avoid whole-folio dirtying/write amplification and to match filesystem-private metadata (e.g., private flags). This avoids some iomap integration constraints and preserves filesystem-specific optimizations, but it increases filesystem-local complexity and long-term maintenance cost.

Please note that btrfs chose this "native fs" way only because there 
are a lot of blockers preventing us from going full iomap directly.

Our long term objective is to go full iomap, and Goldwyn is already 
working on the non-compressed buffered write path.
And I'm working on the compressed write path, firstly to get rid of the 
complex async submission thing, which makes btrfs per-folio tracking way 
more complex than iomap's.


So there is no real "native fs" approach; it's just a middle ground 
before we fully figure out how to do our buffered write path correctly.

[BTRFS COMPRESSION DILEMMA]

I just want to take the chance to get the feedback from iomap guys on 
how to support compression.

Btrfs supports compression and implements it in a very flexible but also 
very complex way.

As an example of the flexibility, any dirty range >= 2 fs blocks can go 
through compression, and there is no alignment requirement other than the 
fs block size at all.

And as an example of the complexity, btrfs has a complex async extent 
submission at delalloc time (similar to iomap_begin time), where we keep 
the whole contiguous dirty range (which can go beyond the current folio) 
locked, do the compression in a workqueue, and submit from that workqueue.

This introduces a lot of extra sub-folio tracking for locked/writeback 
fs blocks, and kills concurrency (the page cache cannot be read while 
it's locked during compression).


Furthermore there are a lot of different corner cases we need to 
address, e.g.:

- Compressed size >= input size

   We need to fall back to the non-compressed path.

- Compression is done, but the extent allocator failed

   E.g. we have 128K of data and compress it to 64K, but our on-disk free
   space is fragmented and can only provide two 32K extents.

   We still need to fall back to the non-compressed path, as we do not
   support splitting a compressed extent into two.
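
Condensed into a single predicate, the two fallback cases look like this (a userspace sketch with hypothetical names, not btrfs code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the fallback decision: the compressed result
 * is only usable when it is strictly smaller than the input AND the
 * allocator can offer one contiguous extent to hold it, because a
 * compressed extent cannot be split in two. */
struct alloc_result {
	unsigned int largest_extent;	/* largest contiguous extent, bytes */
};

static bool use_compressed(unsigned int input_len,
			   unsigned int compressed_len,
			   const struct alloc_result *alloc)
{
	if (compressed_len >= input_len)
		return false;	/* corner case 1: no size win */
	if (alloc->largest_extent < compressed_len)
		return false;	/* corner case 2: would require a split */
	return true;
}
```

E.g. 128K compressed to 64K still falls back to the non-compressed path when the largest free extent is only 32K.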


[DELAYED UNTIL SUBMISSION]

Although we're far from migrating to iomap, my current compression 
rework will use a delayed method like the following, mostly to get rid of 
the async submission code:

- Allocate a placeholder ordered extent at iomap_begin/delalloc time
   Unlike regular iomap_begin() or run_delalloc_range(), we do not
   reserve any on-disk space.

   And the ordered extent will have a special flag to note that the bio
   should not be written to disk directly.

- Queue the folio into a bio and submit
   The involved folios will get their dirty flags cleared and their
   writeback flags set just before submission.

   And at submission time, we find the bio has the special delayed flag,
   and queue the workload into a workqueue to handle the special
   bio.

- Do the real work in the workqueue, including:

   * Do the compression

   * Allocate real on-disk extent(s)

   * Assemble the real bio(s)
     If the compression and allocation succeeded, we assemble
     the bio with the compressed data.

     Otherwise fall back to the non-compressed path, using the page cache
     to assemble the bio.

   * Submit all involved bio(s) and wait for them to finish

   * Do the endio of the original bio
     This will clear the writeback flags of all involved page cache
     folios, and end the ordered extents for them.
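
The three steps above can be sketched as a toy state machine (a userspace model with hypothetical names, not real btrfs or iomap API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the delayed-submission flow: it only illustrates the
 * ordering dirty -> writeback -> clean around the placeholder ordered
 * extent; the fallback path ends at the same endio. */
enum folio_wb_state { FOLIO_DIRTY, FOLIO_WRITEBACK, FOLIO_CLEAN };

struct delayed_write {
	enum folio_wb_state state;
	bool placeholder_oe;	/* ordered extent with no disk space reserved */
	bool compressed;	/* did compression + extent allocation succeed? */
};

/* Step 1: iomap_begin/delalloc time: placeholder OE, nothing reserved. */
static void dw_begin(struct delayed_write *w)
{
	w->state = FOLIO_DIRTY;
	w->placeholder_oe = true;
	w->compressed = false;
}

/* Step 2: submission time: clear dirty, set writeback, and punt the
 * specially flagged bio to a workqueue instead of hitting the disk. */
static void dw_submit(struct delayed_write *w)
{
	w->state = FOLIO_WRITEBACK;
}

/* Step 3: worker: compress, allocate, submit the real bio(s); endio then
 * clears writeback and ends the ordered extent, compressed or not. */
static void dw_worker_endio(struct delayed_write *w, bool compress_ok)
{
	w->compressed = compress_ok;	/* false => page-cache fallback */
	w->state = FOLIO_CLEAN;
	w->placeholder_oe = false;
}
```

The point of the model: the page cache stays unlocked between step 2 and step 3, unlike the current async submission which holds the whole contiguous range locked during compression.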

[IOMAP WITH COMPRESSION?]
If we want to apply this method to iomap, it means we will have a new 
iomap type (DELAYED) and let the fs handle most of the work during 
submission, largely defeating the generic nature of iomap.

Any ideas on this? Or is there a better solution?

Thanks,
Qu


* Re: Iomap and compression? (Was "Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations")
  2026-02-23  8:34 ` Iomap and compression? (Was "Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations") Qu Wenruo
@ 2026-02-23 13:06   ` Christoph Hellwig
  2026-02-23 21:53     ` Qu Wenruo
  0 siblings, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2026-02-23 13:06 UTC (permalink / raw)
  To: Qu Wenruo
  Cc: Nanzhe Zhao, lsf-pc, linux-fsdevel, Christoph Hellwig, willy,
	yi.zhang, jaegeuk, Chao Yu, Barry Song

On Mon, Feb 23, 2026 at 07:04:55PM +1030, Qu Wenruo wrote:
> And for the example of complexity, btrfs has a complex async extent
> submission at delalloc time (similar to iomap_begin time), where we keep the
> whole contig dirty range (can go beyond the current folio) locked, and do
> the compression in a workqueue, and submit them in that workqueue.

I still think btrfs would benefit greatly from killing the async
submission workqueue, and I hope that the multiple writeback thread
work going on currently will help with this.  I think you really should
talk to Kundan.



* Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations
  2026-02-20 18:40   ` Matthew Wilcox
@ 2026-02-23 21:36     ` Jaegeuk Kim
  2026-02-26 10:13       ` Barry Song
  0 siblings, 1 reply; 13+ messages in thread
From: Jaegeuk Kim @ 2026-02-23 21:36 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Christoph Hellwig, Nanzhe Zhao, lsf-pc, linux-fsdevel, yi.zhang,
	Chao Yu, Barry Song, wqu

On 02/20, Matthew Wilcox wrote:
> On Fri, Feb 20, 2026 at 07:48:39AM -0800, Christoph Hellwig wrote:
> > Maybe you catch on the wrong foot, but this pisses me off.  I've been
> > telling you guys to please actually fricking try converting f2fs to
> > iomap, and it's been constantly ignored.
> 
> Christoph isn't alone here.  There's a consistent pattern of f2fs going
> off and doing weird shit without talking to anyone else.  A good start
> would be f2fs maintainers actually coming to LSFMM, but a lot more design
> decisions need to be cc'd to linux-fsdevel.

What's the benefit of supporting large folios on the write path? And
which other designs are you talking about?

I'm also seeing a consistent pattern: 1) posting patches in f2fs for
production, 2) being asked to post patches modifying the generic layer, 3)
posting the converted patches after heavy testing, 4) having them sit there
for months without progress.

E.g.,
https://lore.kernel.org/lkml/20251202013212.964298-1-jaegeuk@kernel.org/


* Re: Iomap and compression? (Was "Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations")
  2026-02-23 13:06   ` Christoph Hellwig
@ 2026-02-23 21:53     ` Qu Wenruo
  2026-02-24 14:58       ` Christoph Hellwig
  0 siblings, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2026-02-23 21:53 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Nanzhe Zhao, lsf-pc, linux-fsdevel, willy, yi.zhang, jaegeuk,
	Chao Yu, Barry Song



On 2026/2/23 23:36, Christoph Hellwig wrote:
> On Mon, Feb 23, 2026 at 07:04:55PM +1030, Qu Wenruo wrote:
>> And for the example of complexity, btrfs has a complex async extent
>> submission at delalloc time (similar to iomap_begin time), where we keep the
>> whole contig dirty range (can go beyond the current folio) locked, and do
>> the compression in a workqueue, and submit them in that workqueue.
> 
> I still think btrfs would benefit greatly from killing the async
> submission workqueue,

That's for sure. Although without that workqueue, where should the 
compression workload happen?

Even if we kill the async submission workqueue, we still want to take 
advantage of multi-threaded compression, just at a different time (without 
locking every involved folio).

> and I hope that the multiple writeback thread
> work going on currently will help with this.  I think you really should
> talk to Kundan.

Mind CCing him/her?

Thanks,
Qu


* Re: Iomap and compression? (Was "Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations")
  2026-02-23 21:53     ` Qu Wenruo
@ 2026-02-24 14:58       ` Christoph Hellwig
  0 siblings, 0 replies; 13+ messages in thread
From: Christoph Hellwig @ 2026-02-24 14:58 UTC (permalink / raw)
  To: Qu Wenruo
  Cc: Christoph Hellwig, Nanzhe Zhao, lsf-pc, linux-fsdevel, willy,
	yi.zhang, jaegeuk, Chao Yu, Barry Song, Kundan Kumar

On Tue, Feb 24, 2026 at 08:23:35AM +1030, Qu Wenruo wrote:
> > I still think btrfs would benefit greatly from killing the async
> > submission workqueue,
> 
> That's for sure. Although without that workqueue, where should the
> compression workload happen?

In the additional writeback threads mentioned below.

> > and I hope that the multiple writeback thread
> > work going on currently will help with this.  I think you really should
> > talk to Kundan.
> 
> Mind to CC him/her?

Added.



* Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations
  2026-02-23 21:36     ` Jaegeuk Kim
@ 2026-02-26 10:13       ` Barry Song
  2026-02-27  2:02         ` Jaegeuk Kim
  0 siblings, 1 reply; 13+ messages in thread
From: Barry Song @ 2026-02-26 10:13 UTC (permalink / raw)
  To: Jaegeuk Kim
  Cc: Matthew Wilcox, Christoph Hellwig, Nanzhe Zhao, lsf-pc,
	linux-fsdevel, yi.zhang, Chao Yu, wqu

On Tue, Feb 24, 2026 at 5:36 AM Jaegeuk Kim <jaegeuk@kernel.org> wrote:
>
> On 02/20, Matthew Wilcox wrote:
> > On Fri, Feb 20, 2026 at 07:48:39AM -0800, Christoph Hellwig wrote:
> > > Maybe you catch on the wrong foot, but this pisses me off.  I've been
> > > telling you guys to please actually fricking try converting f2fs to
> > > iomap, and it's been constantly ignored.
> >
> > Christoph isn't alone here.  There's a consistent pattern of f2fs going
> > off and doing weird shit without talking to anyone else.  A good start
> > would be f2fs maintainers actually coming to LSFMM, but a lot more design
> > decisions need to be cc'd to linux-fsdevel.
>
> What's the benefit of supporting the large folio on the write path? And,
> which other designs are you talking about?
>
> I'm also getting the consistent pattern: 1) posting patches in f2fs for
> production, 2) requested to post patches modifying the generic layer, 3)
> posting the converted patches after heavy tests, 4) sitting there for
> months without progress.

It can sometimes be a bit tricky for the common layer and
filesystem-specific layers to coordinate smoothly. At times,
it can be somewhat frustrating.

Privately, I know how tough it was for Nanzhe to decide whether
to make changes in the iomap layer or in filesystem-specific code.
Nevertheless, he has the dedication and care to implement F2FS
large folio support in the best possible way, as he has discussed
with me many times in private.

I strongly suggest that LSF/MM/BPF invite Kim (and Chao, if possible)
along with the iomap team to discuss this together—at least
remotely if not everyone can attend in person.

>
> E.g.,
> https://lore.kernel.org/lkml/20251202013212.964298-1-jaegeuk@kernel.org/

Thanks
Barry


* Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations
  2026-02-26 10:13       ` Barry Song
@ 2026-02-27  2:02         ` Jaegeuk Kim
  2026-02-27  2:43           ` Barry Song
  0 siblings, 1 reply; 13+ messages in thread
From: Jaegeuk Kim @ 2026-02-27  2:02 UTC (permalink / raw)
  To: Barry Song
  Cc: Matthew Wilcox, Christoph Hellwig, Nanzhe Zhao, lsf-pc,
	linux-fsdevel, yi.zhang, Chao Yu, wqu

On 02/26, Barry Song wrote:
> On Tue, Feb 24, 2026 at 5:36 AM Jaegeuk Kim <jaegeuk@kernel.org> wrote:
> >
> > On 02/20, Matthew Wilcox wrote:
> > > On Fri, Feb 20, 2026 at 07:48:39AM -0800, Christoph Hellwig wrote:
> > > > Maybe you catch on the wrong foot, but this pisses me off.  I've been
> > > > telling you guys to please actually fricking try converting f2fs to
> > > > iomap, and it's been constantly ignored.
> > >
> > > Christoph isn't alone here.  There's a consistent pattern of f2fs going
> > > off and doing weird shit without talking to anyone else.  A good start
> > > would be f2fs maintainers actually coming to LSFMM, but a lot more design
> > > decisions need to be cc'd to linux-fsdevel.
> >
> > What's the benefit of supporting the large folio on the write path? And,
> > which other designs are you talking about?
> >
> > I'm also getting the consistent pattern: 1) posting patches in f2fs for
> > production, 2) requested to post patches modifying the generic layer, 3)
> > posting the converted patches after heavy tests, 4) sitting there for
> > months without progress.
> 
> It can sometimes be a bit tricky for the common layer and
> filesystem-specific layers to coordinate smoothly. At times,
> it can be somewhat frustrating.
> 
> Privately, I know how tough it was for Nanzhe to decide whether
> to make changes in the iomap layer or in filesystem-specific code.
> Nevertheless, he has the dedication and care to implement F2FS
> large folio support in the best possible way, as he has discussed
> with me many times in private.
> 
> I strongly suggest that LSF/MM/BPF invite Kim (and Chao, if possible)
> along with the iomap team to discuss this together—at least
> remotely if not everyone can attend in person.

We don't have a plan to attend this year's summit. But I'm open to having an
offline call to discuss what we can do in f2fs, if you guys are interested.
Let me know.

> 
> >
> > E.g.,
> > https://lore.kernel.org/lkml/20251202013212.964298-1-jaegeuk@kernel.org/
> 
> Thanks
> Barry


* Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations
  2026-02-27  2:02         ` Jaegeuk Kim
@ 2026-02-27  2:43           ` Barry Song
  2026-02-27 19:25             ` Jaegeuk Kim
  0 siblings, 1 reply; 13+ messages in thread
From: Barry Song @ 2026-02-27  2:43 UTC (permalink / raw)
  To: Jaegeuk Kim
  Cc: Matthew Wilcox, Christoph Hellwig, Nanzhe Zhao, lsf-pc,
	linux-fsdevel, yi.zhang, Chao Yu, wqu

On Fri, Feb 27, 2026 at 10:02 AM Jaegeuk Kim <jaegeuk@kernel.org> wrote:
>
> On 02/26, Barry Song wrote:
> > On Tue, Feb 24, 2026 at 5:36 AM Jaegeuk Kim <jaegeuk@kernel.org> wrote:
> > >
> > > On 02/20, Matthew Wilcox wrote:
> > > > On Fri, Feb 20, 2026 at 07:48:39AM -0800, Christoph Hellwig wrote:
> > > > > Maybe you catch on the wrong foot, but this pisses me off.  I've been
> > > > > telling you guys to please actually fricking try converting f2fs to
> > > > > iomap, and it's been constantly ignored.
> > > >
> > > > Christoph isn't alone here.  There's a consistent pattern of f2fs going
> > > > off and doing weird shit without talking to anyone else.  A good start
> > > > would be f2fs maintainers actually coming to LSFMM, but a lot more design
> > > > decisions need to be cc'd to linux-fsdevel.
> > >
> > > What's the benefit of supporting the large folio on the write path? And,
> > > which other designs are you talking about?
> > >
> > > I'm also getting the consistent pattern: 1) posting patches in f2fs for
> > > production, 2) requested to post patches modifying the generic layer, 3)
> > > posting the converted patches after heavy tests, 4) sitting there for
> > > months without progress.
> >
> > It can sometimes be a bit tricky for the common layer and
> > filesystem-specific layers to coordinate smoothly. At times,
> > it can be somewhat frustrating.
> >
> > Privately, I know how tough it was for Nanzhe to decide whether
> > to make changes in the iomap layer or in filesystem-specific code.
> > Nevertheless, he has the dedication and care to implement F2FS
> > large folio support in the best possible way, as he has discussed
> > with me many times in private.
> >
> > I strongly suggest that LSF/MM/BPF invite Kim (and Chao, if possible)
> > along with the iomap team to discuss this together—at least
> > remotely if not everyone can attend in person.
>
> We don't have a plan to attend this year summit. But I'm open to have an offline

It’s truly a shame, but I understand that you have prior commitments.

> call to discuss about what we can do in f2fs, if you guys are interested in.
> Let me know.

Many thanks for your willingness to have an offline call.

Absolutely, I’m very interested. I spoke with Nanzhe today, and he’ll
prepare documents and code to review with you, gather your feedback,
and incorporate all your guidance.

Nanzhe can then bring all the points to LSF afterward
if the topic is scheduled.

> > >
> > > E.g.,
> > > https://lore.kernel.org/lkml/20251202013212.964298-1-jaegeuk@kernel.org/
> >

Thanks
Barry


* Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations
  2026-02-27  2:43           ` Barry Song
@ 2026-02-27 19:25             ` Jaegeuk Kim
  0 siblings, 0 replies; 13+ messages in thread
From: Jaegeuk Kim @ 2026-02-27 19:25 UTC (permalink / raw)
  To: Barry Song
  Cc: Matthew Wilcox, Christoph Hellwig, Nanzhe Zhao, lsf-pc,
	linux-fsdevel, yi.zhang, Chao Yu, wqu

On 02/27, Barry Song wrote:
> On Fri, Feb 27, 2026 at 10:02 AM Jaegeuk Kim <jaegeuk@kernel.org> wrote:
> >
> > On 02/26, Barry Song wrote:
> > > On Tue, Feb 24, 2026 at 5:36 AM Jaegeuk Kim <jaegeuk@kernel.org> wrote:
> > > >
> > > > On 02/20, Matthew Wilcox wrote:
> > > > > On Fri, Feb 20, 2026 at 07:48:39AM -0800, Christoph Hellwig wrote:
> > > > > > Maybe you catch on the wrong foot, but this pisses me off.  I've been
> > > > > > telling you guys to please actually fricking try converting f2fs to
> > > > > > iomap, and it's been constantly ignored.
> > > > >
> > > > > Christoph isn't alone here.  There's a consistent pattern of f2fs going
> > > > > off and doing weird shit without talking to anyone else.  A good start
> > > > > would be f2fs maintainers actually coming to LSFMM, but a lot more design
> > > > > decisions need to be cc'd to linux-fsdevel.
> > > >
> > > > What's the benefit of supporting the large folio on the write path? And,
> > > > which other designs are you talking about?
> > > >
> > > > I'm also getting the consistent pattern: 1) posting patches in f2fs for
> > > > production, 2) requested to post patches modifying the generic layer, 3)
> > > > posting the converted patches after heavy tests, 4) sitting there for
> > > > months without progress.
> > >
> > > It can sometimes be a bit tricky for the common layer and
> > > filesystem-specific layers to coordinate smoothly. At times,
> > > it can be somewhat frustrating.
> > >
> > > Privately, I know how tough it was for Nanzhe to decide whether
> > > to make changes in the iomap layer or in filesystem-specific code.
> > > Nevertheless, he has the dedication and care to implement F2FS
> > > large folio support in the best possible way, as he has discussed
> > > with me many times in private.
> > >
> > > I strongly suggest that LSF/MM/BPF invite Kim (and Chao, if possible)
> > > along with the iomap team to discuss this together—at least
> > > remotely if not everyone can attend in person.
> >
> > We don't have a plan to attend this year summit. But I'm open to have an offline
> 
> It’s truly a shame, but I understand that you have prior commitments.
> 
> > call to discuss about what we can do in f2fs, if you guys are interested in.
> > Let me know.
> 
> Many thanks for your willingness to have an offline call.
> 
> Absolutely, I’m very interested. I spoke with Nanzhe today, and he’ll
> prepare documents and code to review with you, gather your feedback,
> and incorporate all your guidance.

Thanks. Let's talk in a separate thread.

> 
> Nanzhe can then bring all the points to LSF afterward
> if the topic is scheduled.
> 
> > > >
> > > > E.g.,
> > > > https://lore.kernel.org/lkml/20251202013212.964298-1-jaegeuk@kernel.org/
> > >
> 
> Thanks
> Barry


end of thread, other threads:[~2026-02-27 19:25 UTC | newest]

Thread overview: 13+ messages
2026-02-20 12:59 [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations Nanzhe Zhao
2026-02-20 15:48 ` Christoph Hellwig
2026-02-20 18:40   ` Matthew Wilcox
2026-02-23 21:36     ` Jaegeuk Kim
2026-02-26 10:13       ` Barry Song
2026-02-27  2:02         ` Jaegeuk Kim
2026-02-27  2:43           ` Barry Song
2026-02-27 19:25             ` Jaegeuk Kim
2026-02-20 17:07 ` [Lsf-pc] " Jan Kara
2026-02-23  8:34 ` Iomap and compression? (Was "Re: [LSF/MM/BPF TOPIC] Large folio support: iomap framework changes versus filesystem-specific implementations") Qu Wenruo
2026-02-23 13:06   ` Christoph Hellwig
2026-02-23 21:53     ` Qu Wenruo
2026-02-24 14:58       ` Christoph Hellwig
