From: Andreas Gruenbacher <agruenba@redhat.com>
To: cluster-devel@redhat.com
Subject: [Cluster-devel] [PATCH 2/4] gfs2: low-memory forced flush fixes
Date: Thu, 24 Aug 2023 23:10:59 +0200
Message-ID: <20230824211101.3242346-3-agruenba@redhat.com>
In-Reply-To: <20230824211101.3242346-1-agruenba@redhat.com>
Function gfs2_ail_flush_reqd checks the SDF_FORCE_AIL_FLUSH flag to
determine if an AIL flush should be forced in low-memory situations.
However, it also immediately clears the flag, so when it is called
repeatedly, as in function gfs2_logd, a single call can consume the
flag and subsequent calls will no longer see it: the forced flush is
silently skipped.  Fix that by pulling the SDF_FORCE_AIL_FLUSH flag
check out of gfs2_ail_flush_reqd.

In addition, in gfs2_writepages, logd needs to be woken up after
setting the SDF_FORCE_AIL_FLUSH flag; otherwise logd will not
re-evaluate its wait condition until its timeout expires.
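
To illustrate the first problem (a minimal sketch of the pattern, not
the actual gfs2 code; thresholds_met() and flush_ail() are hypothetical
stand-ins):

    static int flush_reqd(struct gfs2_sbd *sdp)
    {
            /* Bug: merely evaluating the condition consumes the flag. */
            if (test_and_clear_bit(SDF_FORCE_AIL_FLUSH, &sdp->sd_flags))
                    return 1;
            return thresholds_met(sdp);     /* hypothetical helper */
    }

    /*
     * In the logd loop, the wait condition below calls flush_reqd() and
     * thereby clears the flag; by the time the loop body re-checks
     * flush_reqd() after waking up, the flag is gone and the forced AIL
     * flush never happens.
     */
    t = wait_event_interruptible_timeout(sdp->sd_logd_waitq,
                    flush_reqd(sdp) || kthread_should_stop(), t);
    if (flush_reqd(sdp))            /* flag already lost here */
            flush_ail(sdp);         /* hypothetical */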
Fixes: b066a4eebd4f ("gfs2: forcibly flush ail to relieve memory pressure")
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
---
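A note on the resulting pattern (a sketch only, simplified from the
hunks below; flush_ail() is again a hypothetical stand-in):
wait_event_interruptible_timeout() only re-evaluates its condition when
the wait queue is woken or the timeout expires, so the setter must pair
set_bit() with a wake_up(), and the waiter tests the flag
non-destructively, clearing it only when it actually acts on it:

    /* setter (gfs2_writepages): make the flag visible, then wake logd */
    set_bit(SDF_FORCE_AIL_FLUSH, &sdp->sd_flags);
    wake_up(&sdp->sd_logd_waitq);

    /* waiter (gfs2_logd): non-destructive test in the wait condition */
    t = wait_event_interruptible_timeout(sdp->sd_logd_waitq,
                    test_bit(SDF_FORCE_AIL_FLUSH, &sdp->sd_flags) ||
                    kthread_should_stop(), t);

    /* clear the flag only once the flush is actually about to run */
    if (test_bit(SDF_FORCE_AIL_FLUSH, &sdp->sd_flags)) {
            clear_bit(SDF_FORCE_AIL_FLUSH, &sdp->sd_flags);
            flush_ail(sdp);         /* hypothetical */
    }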
 fs/gfs2/aops.c | 4 +++-
 fs/gfs2/log.c  | 8 ++++----
 2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 5f02542370c4..d15a10a18962 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -189,8 +189,10 @@ static int gfs2_writepages(struct address_space *mapping,
 	 * pages held in the ail that it can't find.
 	 */
 	ret = iomap_writepages(mapping, wbc, &wpc, &gfs2_writeback_ops);
-	if (ret == 0)
+	if (ret == 0) {
 		set_bit(SDF_FORCE_AIL_FLUSH, &sdp->sd_flags);
+		wake_up(&sdp->sd_logd_waitq);
+	}
 	return ret;
 }
 
diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
index d3da259820e3..aaca22f2aa2d 100644
--- a/fs/gfs2/log.c
+++ b/fs/gfs2/log.c
@@ -1282,9 +1282,6 @@ static inline int gfs2_ail_flush_reqd(struct gfs2_sbd *sdp)
{
unsigned int used_blocks = sdp->sd_jdesc->jd_blocks - atomic_read(&sdp->sd_log_blks_free);
- if (test_and_clear_bit(SDF_FORCE_AIL_FLUSH, &sdp->sd_flags))
- return 1;
-
return used_blocks + atomic_read(&sdp->sd_log_blks_needed) >=
atomic_read(&sdp->sd_log_thresh2);
}
@@ -1325,7 +1322,9 @@ int gfs2_logd(void *data)
 					  GFS2_LFC_LOGD_JFLUSH_REQD);
 		}
 
-		if (gfs2_ail_flush_reqd(sdp)) {
+		if (test_bit(SDF_FORCE_AIL_FLUSH, &sdp->sd_flags) ||
+		    gfs2_ail_flush_reqd(sdp)) {
+			clear_bit(SDF_FORCE_AIL_FLUSH, &sdp->sd_flags);
 			gfs2_ail1_start(sdp);
 			gfs2_ail1_wait(sdp);
 			gfs2_ail1_empty(sdp, 0);
@@ -1338,6 +1337,7 @@ int gfs2_logd(void *data)
 		try_to_freeze();
 
 		t = wait_event_interruptible_timeout(sdp->sd_logd_waitq,
+				test_bit(SDF_FORCE_AIL_FLUSH, &sdp->sd_flags) ||
 				gfs2_ail_flush_reqd(sdp) ||
 				gfs2_jrnl_flush_reqd(sdp) ||
 				kthread_should_stop(),
--
2.40.1