From: Wenjie Cheng <cwjhust@gmail.com>
To: jaegeuk@kernel.org, chao@kernel.org
Cc: qwjhust@gmail.com, linux-kernel@vger.kernel.org,
Wenjie Cheng <cwjhust@gmail.com>,
linux-f2fs-devel@lists.sourceforge.net
Subject: [f2fs-dev] [PATCH] Revert "f2fs: use flush command instead of FUA for zoned device"
Date: Fri, 14 Jun 2024 00:48:41 +0000
Message-ID: <20240614004841.103114-1-cwjhust@gmail.com>

This reverts commit c550e25bca660ed2554cbb48d32b82d0bb98e4b1.

Commit c550e25bca660ed2554cbb48d32b82d0bb98e4b1 ("f2fs: use flush
command instead of FUA for zoned device") issued an additional flush
command to preserve write ordering on zoned devices.

Since commit dd291d77cc90eb6a86e9860ba8e6e38eebd57d12 ("block:
Introduce zone write plugging") enables the block layer to handle
this ordering itself, the extra flush command is no longer needed.

Signed-off-by: Wenjie Cheng <cwjhust@gmail.com>
---
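A minimal sketch of the two durability strategies at issue, for
reviewers (illustrative only, not actual f2fs code; bio allocation
and completion handling are elided):

	/* FUA path restored by this revert: one write that flushes
	 * the device cache and forces the payload to stable media.
	 */
	bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA;
	submit_bio(bio);

	/* Zoned-device path removed by this revert: a plain write,
	 * with a separate flush command issued later at fsync time.
	 */
	bio->bi_opf = REQ_OP_WRITE;
	submit_bio(bio);
	blkdev_issue_flush(bdev);
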
fs/f2fs/file.c | 3 +--
fs/f2fs/node.c | 2 +-
2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index eae2e7908072..f08e6208e183 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -372,8 +372,7 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
 	f2fs_remove_ino_entry(sbi, ino, APPEND_INO);
 	clear_inode_flag(inode, FI_APPEND_WRITE);
 flush_out:
-	if ((!atomic && F2FS_OPTION(sbi).fsync_mode != FSYNC_MODE_NOBARRIER) ||
-	    (atomic && !test_opt(sbi, NOBARRIER) && f2fs_sb_has_blkzoned(sbi)))
+	if (!atomic && F2FS_OPTION(sbi).fsync_mode != FSYNC_MODE_NOBARRIER)
 		ret = f2fs_issue_flush(sbi, inode->i_ino);
 	if (!ret) {
 		f2fs_remove_ino_entry(sbi, ino, UPDATE_INO);
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 144f9f966690..c45d341dcf6e 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1631,7 +1631,7 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
 		goto redirty_out;
 	}
 
-	if (atomic && !test_opt(sbi, NOBARRIER) && !f2fs_sb_has_blkzoned(sbi))
+	if (atomic && !test_opt(sbi, NOBARRIER))
 		fio.op_flags |= REQ_PREFLUSH | REQ_FUA;
 
 	/* should add to global list before clearing PAGECACHE status */
--
2.34.1