From: Chao Yu <yuchao0@huawei.com>
To: <jaegeuk@kernel.org>
Cc: linux-kernel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net
Subject: [f2fs-dev] [PATCH v2 RFC] f2fs: fix to force keeping write barrier for strict fsync mode
Date: Tue, 1 Jun 2021 18:10:24 +0800
Message-ID: <20210601101024.119356-1-yuchao0@huawei.com>
[1] https://www.mail-archive.com/linux-f2fs-devel@lists.sourceforge.net/msg15126.html
As [1] reported, if the lower device doesn't support write barriers, consider
the case below:
- write page #0; persist
- overwrite page #0
- fsync
  - write data page #0 OPU into device's cache
  - write inode page into device's cache
  - issue flush
If SPO (sudden power-off) is triggered during the flush command, the inode page
can be persisted before data page #0, so that after recovery the inode page is
recovered with the new physical block address of data page #0; however, that
new physical block address may contain dummy data.
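For illustration only, a minimal userspace sequence that exercises this path
could look like the sketch below (the file path is a placeholder, error
handling is omitted, and whether the overwrite is actually written OPU depends
on f2fs' in-place-update policy); the corruption window is the second fsync(),
if power is cut while the device is still flushing its cache:

	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		int fd = open("/mnt/f2fs/testfile", O_RDWR | O_CREAT, 0644);

		memset(buf, 'A', sizeof(buf));
		write(fd, buf, sizeof(buf));		/* write page #0 */
		fsync(fd);				/* persist */

		memset(buf, 'B', sizeof(buf));
		pwrite(fd, buf, sizeof(buf), 0);	/* overwrite page #0; f2fs may write it OPU */
		fsync(fd);				/* data + node + flush; SPO window is here */

		close(fd);
		return 0;
	}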
Then what the user sees is: after overwrite & fsync + SPO, the old data in the
file is corrupted. If any user cares about such a case, we can suggest using
STRICT fsync mode; in this mode, we force a preflush command to persist data in
the device cache prior to node writeback, which avoids potential data
corruption during fsync().
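For reference, strict fsync mode is selected at mount time via the fsync_mode
option (the device and mount point below are only placeholders):

	# mount -t f2fs -o fsync_mode=strict /dev/sdX /mnt/f2fs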
Signed-off-by: Chao Yu <yuchao0@huawei.com>
---
v2:
- fix this by adding an additional preflush command rather than using the
atomic write flow.
fs/f2fs/file.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 7d5311d54f63..238ca2a733ac 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -301,6 +301,20 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
 				f2fs_exist_written_data(sbi, ino, UPDATE_INO))
 			goto flush_out;
 		goto out;
+	} else {
+		/*
+		 * For the OPU case, during fsync() the node can be persisted
+		 * before the data when the lower device doesn't support write
+		 * barriers, resulting in data corruption after SPO.
+		 * So for strict fsync mode, force a preflush to keep the
+		 * data/node write order and avoid potential data corruption.
+		 */
+		if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT &&
+							!atomic) {
+			ret = f2fs_issue_flush(sbi, inode->i_ino);
+			if (ret)
+				goto out;
+		}
 	}
 go_write:
 	/*
--
2.29.2