* [PATCH] mm/page-writeback: drop usage of folio_index
@ 2025-08-15 12:12 Kairui Song
From: Kairui Song @ 2025-08-15 12:12 UTC
To: linux-mm; +Cc: Andrew Morton, Matthew Wilcox, linux-kernel, Kairui Song
From: Kairui Song <kasong@tencent.com>
folio_index() is only needed for code that can see mixed usage of the
page cache and the swap cache. The remaining three callers in
page-writeback are all for page cache tag marking. The swap cache
address space doesn't use writeback tags (it explicitly calls
mapping_set_no_writeback_tags()), so use folio->index here directly.
Signed-off-by: Kairui Song <kasong@tencent.com>
---
mm/page-writeback.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 3e248d1c3969..30c06889425f 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2739,8 +2739,8 @@ void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
if (folio->mapping) { /* Race with truncate? */
WARN_ON_ONCE(warn && !folio_test_uptodate(folio));
folio_account_dirtied(folio, mapping);
- __xa_set_mark(&mapping->i_pages, folio_index(folio),
- PAGECACHE_TAG_DIRTY);
+ __xa_set_mark(&mapping->i_pages, folio->index,
+ PAGECACHE_TAG_DIRTY);
}
xa_unlock_irqrestore(&mapping->i_pages, flags);
}
@@ -3019,7 +3019,7 @@ bool __folio_end_writeback(struct folio *folio)
xa_lock_irqsave(&mapping->i_pages, flags);
ret = folio_xor_flags_has_waiters(folio, 1 << PG_writeback);
- __xa_clear_mark(&mapping->i_pages, folio_index(folio),
+ __xa_clear_mark(&mapping->i_pages, folio->index,
PAGECACHE_TAG_WRITEBACK);
if (bdi->capabilities & BDI_CAP_WRITEBACK_ACCT) {
struct bdi_writeback *wb = inode_to_wb(inode);
@@ -3056,7 +3056,7 @@ void __folio_start_writeback(struct folio *folio, bool keep_write)
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
if (mapping && mapping_use_writeback_tags(mapping)) {
- XA_STATE(xas, &mapping->i_pages, folio_index(folio));
+ XA_STATE(xas, &mapping->i_pages, folio->index);
struct inode *inode = mapping->host;
struct backing_dev_info *bdi = inode_to_bdi(inode);
unsigned long flags;
--
2.50.1
* Re: [PATCH] mm/page-writeback: drop usage of folio_index
From: Matthew Wilcox @ 2025-08-15 12:24 UTC
To: Kairui Song; +Cc: linux-mm, Andrew Morton, linux-kernel
On Fri, Aug 15, 2025 at 08:12:52PM +0800, Kairui Song wrote:
> +++ b/mm/page-writeback.c
> @@ -2739,8 +2739,8 @@ void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
> if (folio->mapping) { /* Race with truncate? */
> WARN_ON_ONCE(warn && !folio_test_uptodate(folio));
> folio_account_dirtied(folio, mapping);
> - __xa_set_mark(&mapping->i_pages, folio_index(folio),
> - PAGECACHE_TAG_DIRTY);
> + __xa_set_mark(&mapping->i_pages, folio->index,
> + PAGECACHE_TAG_DIRTY);
> }
> xa_unlock_irqrestore(&mapping->i_pages, flags);
> }
What about a shmem folio that's been moved to the swap cache? I used
folio_index() here because I couldn't prove to my satisfaction that this
couldn't happen.
* Re: [PATCH] mm/page-writeback: drop usage of folio_index
From: Kairui Song @ 2025-08-15 15:03 UTC
To: Matthew Wilcox; +Cc: linux-mm, Andrew Morton, linux-kernel
On Fri, Aug 15, 2025 at 9:48 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Fri, Aug 15, 2025 at 08:12:52PM +0800, Kairui Song wrote:
> > +++ b/mm/page-writeback.c
> > @@ -2739,8 +2739,8 @@ void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
> > if (folio->mapping) { /* Race with truncate? */
> > WARN_ON_ONCE(warn && !folio_test_uptodate(folio));
> > folio_account_dirtied(folio, mapping);
> > - __xa_set_mark(&mapping->i_pages, folio_index(folio),
> > - PAGECACHE_TAG_DIRTY);
> > + __xa_set_mark(&mapping->i_pages, folio->index,
> > + PAGECACHE_TAG_DIRTY);
> > }
> > xa_unlock_irqrestore(&mapping->i_pages, flags);
> > }
>
> What about a shmem folio that's been moved to the swap cache? I used
> folio_index() here because I couldn't prove to my satisfaction that this
> couldn't happen.
I just checked all callers of __folio_mark_dirty:
- block_dirty_folio
__folio_mark_dirty
- filemap_dirty_folio
__folio_mark_dirty
For these two, all their callers are from other filesystems, not
related to shmem or the swap cache.
- mark_buffer_dirty
__folio_mark_dirty (mapping is folio->mapping)
- folio_redirty_for_writepage
filemap_dirty_folio
__folio_mark_dirty (mapping is folio->mapping)
For these two, __folio_mark_dirty is called with folio->mapping, and
the swap cache address space is never assigned to folio->mapping. If
the folio were in the swap cache here, folio_index would return its
swap cache index, which is not equal to its index in shmem or any
other mapping, so things would already have gone very wrong.
And currently both shmem and the swap cache use noop_dirty_folio, so
they should never call into this helper.
I think I can add the sanity check below here, just to clarify things
and for debugging:
/*
* Shmem writeback relies on swap, and swap writeback
* is LRU based, not using the dirty mark.
*/
VM_WARN_ON(shmem_mapping(mapping) || folio_test_swapcache(folio));
And maybe we can also have a VM_WARN_ON for `folio->mapping != mapping` here?
* Re: [PATCH] mm/page-writeback: drop usage of folio_index
From: Matthew Wilcox @ 2025-08-15 15:21 UTC
To: Kairui Song; +Cc: linux-mm, Andrew Morton, linux-kernel
On Fri, Aug 15, 2025 at 11:03:56PM +0800, Kairui Song wrote:
> On Fri, Aug 15, 2025 at 9:48 PM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Fri, Aug 15, 2025 at 08:12:52PM +0800, Kairui Song wrote:
> > > +++ b/mm/page-writeback.c
> > > @@ -2739,8 +2739,8 @@ void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
> > > if (folio->mapping) { /* Race with truncate? */
> > > WARN_ON_ONCE(warn && !folio_test_uptodate(folio));
> > > folio_account_dirtied(folio, mapping);
> > > - __xa_set_mark(&mapping->i_pages, folio_index(folio),
> > > - PAGECACHE_TAG_DIRTY);
> > > + __xa_set_mark(&mapping->i_pages, folio->index,
> > > + PAGECACHE_TAG_DIRTY);
> > > }
> > > xa_unlock_irqrestore(&mapping->i_pages, flags);
> > > }
> >
> > What about a shmem folio that's been moved to the swap cache? I used
> > folio_index() here because I couldn't prove to my satisfaction that this
> > couldn't happen.
>
> I just checked all callers of __folio_mark_dirty:
>
> - block_dirty_folio
> __folio_mark_dirty
>
> - filemap_dirty_folio
> __folio_mark_dirty
>
> For these two, all their callers are from other filesystems, not
> related to shmem or the swap cache.
>
> - mark_buffer_dirty
> __folio_mark_dirty (mapping is folio->mapping)
>
> - folio_redirty_for_writepage
> filemap_dirty_folio
> __folio_mark_dirty (mapping is folio->mapping)
>
> For these two, __folio_mark_dirty is called with folio->mapping, and
> the swap cache address space is never assigned to folio->mapping. If
> the folio were in the swap cache here, folio_index would return its
> swap cache index, which is not equal to its index in shmem or any
> other mapping, so things would already have gone very wrong.
>
> And currently both shmem and the swap cache use noop_dirty_folio, so
> they should never call into this helper.
Yes, we've made quite a few changes around here and maybe it can't
happen now.
> I think I can add the sanity check below here, just to clarify things
> and for debugging:
>
> /*
> * Shmem writeback relies on swap, and swap writeback
> * is LRU based, not using the dirty mark.
> */
> VM_WARN_ON(shmem_mapping(mapping) || folio_test_swapcache(folio));
That might be a good idea.
> And maybe we can also have a VM_WARN_ON for `folio->mapping != mapping` here?
I don't think that will work. We can definitely see folio->mapping ==
NULL as the zap_page_range() path blocks folio freeing with the page
table lock rather than by taking the folio lock. So truncation can
start but not complete (as it will wait on the PTL for mapped folios).
I think it's always true that folio->mapping will be either NULL or
equal to mapping, but maybe there's another case I've forgotten about.