From: wangzijie <wangzijie1@honor.com>
To: <jaegeuk@kernel.org>, <chao@kernel.org>
Cc: wangzijie <wangzijie1@honor.com>,
	linux-kernel@vger.kernel.org, feng.han@honor.com,
	linux-f2fs-devel@lists.sourceforge.net
Subject: [f2fs-dev] [PATCH 1/2] f2fs: fix wrong extent_info data for precache extents
Date: Wed, 10 Sep 2025 21:58:34 +0800
Message-ID: <20250910135835.2751574-1-wangzijie1@honor.com>

When the data layout is like this:
dnode1:                     dnode2:
[0]      A                  [0]    NEW_ADDR
[1]      A+1                [1]    0x0
...                         ...
[1016]   A+1016
[1017]   B (B!=A+1017)      [1017] 0x0

We can build this kind of layout with the following steps (i_extra_isize: 36):
./f2fs_io write 1 0 1881 rand dsync testfile
./f2fs_io write 1 1881 1 rand buffered testfile
./f2fs_io fallocate 0 7708672 4096 testfile
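
(Here 7708672 = 1882 * 4096, so the fallocate reserves NEW_ADDR at block
index 1882. With this i_extra_isize, block 1881, the buffered-written block
B, lands in the last slot [1017] of dnode1, and block 1882 becomes slot [0]
of dnode2, which gives the layout above.)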

When we map the first data block in dnode2, we get wrong extent_info data:
map->m_len = 1
ofs = start_pgofs - map->m_lblk = 1882 - 1881 = 1

ei.fofs = start_pgofs = 1882
ei.len = map->m_len - ofs = 1 - 1 = 0
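
For context, the precache path at sync_out roughly does the following with
these values (paraphrased from f2fs_map_blocks(); the exact code may differ
between kernel versions):

	if (flag == F2FS_GET_BLOCK_PRECACHE) {
		if (map->m_flags & F2FS_MAP_MAPPED) {
			/* ofs = 1882 - 1881 = 1 in the case above */
			unsigned int ofs = start_pgofs - map->m_lblk;

			/* m_len - ofs = 1 - 1 = 0, so a zero-length
			 * extent gets inserted into the extent cache
			 */
			f2fs_update_read_extent_cache_range(&dn,
					start_pgofs, map->m_pblk + ofs,
					map->m_len - ofs);
		}
	}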

Fix it by skipping the extent info update in this case: when the mapped
range lies entirely before start_pgofs (i.e. start_pgofs - map->m_lblk ==
map->m_len), clear F2FS_MAP_MAPPED so that no zero-length extent is cached.

Signed-off-by: wangzijie <wangzijie1@honor.com>
---
 fs/f2fs/data.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 7961e0ddf..b8bb71852 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -1649,6 +1649,9 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag)
 
 		switch (flag) {
 		case F2FS_GET_BLOCK_PRECACHE:
+			if (__is_valid_data_blkaddr(map->m_pblk) &&
+				start_pgofs - map->m_lblk == map->m_len)
+				map->m_flags &= ~F2FS_MAP_MAPPED;
 			goto sync_out;
 		case F2FS_GET_BLOCK_BMAP:
 			map->m_pblk = 0;
-- 
2.25.1


