From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
linux-arch@vger.kernel.org, torvalds@linux-foundation.org,
npiggin@gmail.com
Subject: [PATCH v2 16/17] mm: Make __end_folio_writeback() return void
Date: Wed, 4 Oct 2023 17:53:16 +0100
Message-ID: <20231004165317.1061855-17-willy@infradead.org>
In-Reply-To: <20231004165317.1061855-1-willy@infradead.org>

Rather than checking the result of the test-and-clear, just check that
the writeback bit is set at the start.  This won't catch every case,
but it's good enough (and it enables the next patch).
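
To illustrate the pattern change, here is a minimal userspace sketch
(not kernel code): C11 atomics stand in for the folio flag helpers,
assert() stands in for VM_BUG_ON_FOLIO(), and the end_writeback_*()
names are made up for this example.

	/* Minimal sketch of the before/after pattern, not kernel code. */
	#include <assert.h>
	#include <stdatomic.h>
	#include <stdlib.h>

	#define PG_writeback	(1UL << 0)	/* stand-in for the folio flag */

	static atomic_ulong folio_flags = PG_writeback;

	/* Before: rely on the test-and-clear result, BUG() if it failed. */
	static void end_writeback_old(void)
	{
		unsigned long old = atomic_fetch_and(&folio_flags, ~PG_writeback);

		if (!(old & PG_writeback))
			abort();	/* plays the role of BUG() */
	}

	/* After: assert the flag up front, then clear it unconditionally. */
	static void end_writeback_new(void)
	{
		/* plays the role of VM_BUG_ON_FOLIO(!folio_test_writeback()) */
		assert(atomic_load(&folio_flags) & PG_writeback);
		atomic_fetch_and(&folio_flags, ~PG_writeback);
	}

	int main(void)
	{
		end_writeback_new();
		assert(!(atomic_load(&folio_flags) & PG_writeback));

		atomic_store(&folio_flags, PG_writeback);
		end_writeback_old();
		assert(!(atomic_load(&folio_flags) & PG_writeback));
		return 0;
	}

Checking up front is what lets __folio_end_writeback() stop returning
a value, which the next patch depends on.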
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c        |  9 +++++++--
 mm/internal.h       |  2 +-
 mm/page-writeback.c | 38 ++++++++++++++++----------------------
 3 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 3dad2615af41..ddcced4638b5 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1595,9 +1595,15 @@ EXPORT_SYMBOL(folio_wait_private_2_killable);
 /**
  * folio_end_writeback - End writeback against a folio.
  * @folio: The folio.
+ *
+ * The folio must actually be under writeback.
+ *
+ * Context: May be called from process or interrupt context.
  */
 void folio_end_writeback(struct folio *folio)
 {
+	VM_BUG_ON_FOLIO(!folio_test_writeback(folio), folio);
+
 	/*
 	 * folio_test_clear_reclaim() could be used here but it is an
 	 * atomic operation and overkill in this particular case. Failing
@@ -1617,8 +1623,7 @@ void folio_end_writeback(struct folio *folio)
 	 * reused before the folio_wake().
 	 */
 	folio_get(folio);
-	if (!__folio_end_writeback(folio))
-		BUG();
+	__folio_end_writeback(folio);
 
 	smp_mb__after_atomic();
 	folio_wake(folio, PG_writeback);

diff --git a/mm/internal.h b/mm/internal.h
index 30cf724ddbce..ccb08dd9b5ec 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -105,7 +105,7 @@ static inline void wake_throttle_isolated(pg_data_t *pgdat)
 
 vm_fault_t do_swap_page(struct vm_fault *vmf);
 void folio_rotate_reclaimable(struct folio *folio);
-bool __folio_end_writeback(struct folio *folio);
+void __folio_end_writeback(struct folio *folio);
 void deactivate_file_folio(struct folio *folio);
 void folio_activate(struct folio *folio);
 

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index b8d3d7040a50..410b53e888e3 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2940,11 +2940,10 @@ static void wb_inode_writeback_end(struct bdi_writeback *wb)
 	spin_unlock_irqrestore(&wb->work_lock, flags);
 }
 
-bool __folio_end_writeback(struct folio *folio)
+void __folio_end_writeback(struct folio *folio)
 {
 	long nr = folio_nr_pages(folio);
 	struct address_space *mapping = folio_mapping(folio);
-	bool ret;
 
 	folio_memcg_lock(folio);
 	if (mapping && mapping_use_writeback_tags(mapping)) {
@@ -2953,19 +2952,16 @@ bool __folio_end_writeback(struct folio *folio)
 		unsigned long flags;
 
 		xa_lock_irqsave(&mapping->i_pages, flags);
-		ret = folio_test_clear_writeback(folio);
-		if (ret) {
-			__xa_clear_mark(&mapping->i_pages, folio_index(folio),
-						PAGECACHE_TAG_WRITEBACK);
-			if (bdi->capabilities & BDI_CAP_WRITEBACK_ACCT) {
-				struct bdi_writeback *wb = inode_to_wb(inode);
-
-				wb_stat_mod(wb, WB_WRITEBACK, -nr);
-				__wb_writeout_add(wb, nr);
-				if (!mapping_tagged(mapping,
-						    PAGECACHE_TAG_WRITEBACK))
-					wb_inode_writeback_end(wb);
-			}
+		folio_test_clear_writeback(folio);
+		__xa_clear_mark(&mapping->i_pages, folio_index(folio),
+					PAGECACHE_TAG_WRITEBACK);
+		if (bdi->capabilities & BDI_CAP_WRITEBACK_ACCT) {
+			struct bdi_writeback *wb = inode_to_wb(inode);
+
+			wb_stat_mod(wb, WB_WRITEBACK, -nr);
+			__wb_writeout_add(wb, nr);
+			if (!mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK))
+				wb_inode_writeback_end(wb);
 		}
 
 		if (mapping->host && !mapping_tagged(mapping,
@@ -2974,15 +2970,13 @@ bool __folio_end_writeback(struct folio *folio)
 
 		xa_unlock_irqrestore(&mapping->i_pages, flags);
 	} else {
-		ret = folio_test_clear_writeback(folio);
-	}
-	if (ret) {
-		lruvec_stat_mod_folio(folio, NR_WRITEBACK, -nr);
-		zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, -nr);
-		node_stat_mod_folio(folio, NR_WRITTEN, nr);
+		folio_test_clear_writeback(folio);
 	}
+
+	lruvec_stat_mod_folio(folio, NR_WRITEBACK, -nr);
+	zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, -nr);
+	node_stat_mod_folio(folio, NR_WRITTEN, nr);
 	folio_memcg_unlock(folio);
-	return ret;
 }
 
 bool __folio_start_writeback(struct folio *folio, bool keep_write)
--
2.40.1