* [PATCH v3 0/5] Speed up f2fs truncate
@ 2024-11-04 3:45 Yi Sun
2024-11-04 3:45 ` [PATCH v3 1/5] f2fs: expand f2fs_invalidate_compress_page() to f2fs_invalidate_compress_pages_range() Yi Sun
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Yi Sun @ 2024-11-04 3:45 UTC (permalink / raw)
To: chao, jaegeuk
Cc: yi.sun, sunyibuaa, linux-f2fs-devel, linux-kernel, niuzhiguo84,
Hao_hao.Wang, ke.wang
Deleting large files is time-consuming, and a large part
of the time is spent in f2fs_invalidate_blocks()
->down_write(sit_info->sentry_lock) and up_write().
If some blocks are consecutive, we can process them
at the same time. This reduces the number of calls to
down_write() and up_write(), thereby improving the
overall speed of truncate.
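The locking pattern behind this series can be sketched in plain user-space C (an illustrative model, not f2fs code; the counter and helper names stand in for sit_info->sentry_lock and the SIT updates):

```c
#include <assert.h>

/* Illustrative stand-in: count how often the sentry lock is cycled. */
static unsigned long lock_round_trips;

static void down_write_sentry(void) { lock_round_trips++; }
static void up_write_sentry(void)   { }

/* Old path: one down_write()/up_write() round trip per block. */
static void invalidate_one(unsigned int blkaddr)
{
	(void)blkaddr;
	down_write_sentry();
	/* ... update the SIT entry for one block ... */
	up_write_sentry();
}

/* New path: one round trip for a whole consecutive range. */
static void invalidate_range(unsigned int blkaddr, unsigned int len)
{
	(void)blkaddr;
	down_write_sentry();
	for (unsigned int i = 0; i < len; i++)
		;	/* ... update SIT entries for all @len blocks ... */
	up_write_sentry();
}

static unsigned long round_trips_for(unsigned int len, int batched)
{
	lock_round_trips = 0;
	if (batched) {
		invalidate_range(0, len);
	} else {
		for (unsigned int b = 0; b < len; b++)
			invalidate_one(b);
	}
	return lock_round_trips;
}
```

Truncating a ~100 GB file touches tens of millions of blocks, so collapsing per-block lock cycles into one cycle per consecutive range is where the speedup below comes from.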
Test steps:
Set the CPU and DDR frequencies to the maximum.
dd if=/dev/random of=./test.txt bs=1M count=100000
sync
rm test.txt
Time Comparison of rm:
original optimization ratio
7.17s 3.27s 54.39%
Yi Sun (5):
f2fs: expand f2fs_invalidate_compress_page() to
f2fs_invalidate_compress_pages_range()
f2fs: add parameter @len to f2fs_invalidate_internal_cache()
f2fs: introduce update_sit_entry_for_release()
f2fs: add parameter @len to f2fs_invalidate_blocks()
f2fs: Optimize f2fs_truncate_data_blocks_range()
fs/f2fs/compress.c | 9 +--
fs/f2fs/data.c | 2 +-
fs/f2fs/f2fs.h | 16 +++---
fs/f2fs/file.c | 78 ++++++++++++++++++++++---
fs/f2fs/gc.c | 2 +-
fs/f2fs/node.c | 4 +-
fs/f2fs/segment.c | 139 +++++++++++++++++++++++++++++++--------------
7 files changed, 184 insertions(+), 66 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v3 1/5] f2fs: expand f2fs_invalidate_compress_page() to f2fs_invalidate_compress_pages_range()
2024-11-04 3:45 [PATCH v3 0/5] Speed up f2fs truncate Yi Sun
@ 2024-11-04 3:45 ` Yi Sun
2024-11-04 3:45 ` [PATCH v3 2/5] f2fs: add parameter @len to f2fs_invalidate_internal_cache() Yi Sun
` (3 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Yi Sun @ 2024-11-04 3:45 UTC (permalink / raw)
To: chao, jaegeuk
Cc: yi.sun, sunyibuaa, linux-f2fs-devel, linux-kernel, niuzhiguo84,
Hao_hao.Wang, ke.wang
The new function f2fs_invalidate_compress_pages_range() takes an
additional @len parameter, so it can process several consecutive
blocks at a time.
Signed-off-by: Yi Sun <yi.sun@unisoc.com>
---
fs/f2fs/compress.c | 5 +++--
fs/f2fs/f2fs.h | 9 +++++----
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 7f26440e8595..f6626f2feb0c 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1903,11 +1903,12 @@ struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi)
return sbi->compress_inode->i_mapping;
}
-void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi, block_t blkaddr)
+void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi,
+ block_t blkaddr, unsigned int len)
{
if (!sbi->compress_inode)
return;
- invalidate_mapping_pages(COMPRESS_MAPPING(sbi), blkaddr, blkaddr);
+ invalidate_mapping_pages(COMPRESS_MAPPING(sbi), blkaddr, blkaddr + len - 1);
}
void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 119706dbaefa..2b32443d06a3 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -4383,7 +4383,8 @@ void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi);
int __init f2fs_init_compress_cache(void);
void f2fs_destroy_compress_cache(void);
struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi);
-void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi, block_t blkaddr);
+void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi,
+ block_t blkaddr, unsigned int len);
void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
nid_t ino, block_t blkaddr);
bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
@@ -4438,8 +4439,8 @@ static inline int f2fs_init_page_array_cache(struct f2fs_sb_info *sbi) { return
static inline void f2fs_destroy_page_array_cache(struct f2fs_sb_info *sbi) { }
static inline int __init f2fs_init_compress_cache(void) { return 0; }
static inline void f2fs_destroy_compress_cache(void) { }
-static inline void f2fs_invalidate_compress_page(struct f2fs_sb_info *sbi,
- block_t blkaddr) { }
+static inline void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi,
+ block_t blkaddr, unsigned int len) { }
static inline void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi,
struct page *page, nid_t ino, block_t blkaddr) { }
static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi,
@@ -4758,7 +4759,7 @@ static inline void f2fs_invalidate_internal_cache(struct f2fs_sb_info *sbi,
block_t blkaddr)
{
f2fs_truncate_meta_inode_pages(sbi, blkaddr, 1);
- f2fs_invalidate_compress_page(sbi, blkaddr);
+ f2fs_invalidate_compress_pages_range(sbi, blkaddr, 1);
}
#define EFSBADCRC EBADMSG /* Bad CRC detected */
--
2.25.1
* [PATCH v3 2/5] f2fs: add parameter @len to f2fs_invalidate_internal_cache()
2024-11-04 3:45 [PATCH v3 0/5] Speed up f2fs truncate Yi Sun
2024-11-04 3:45 ` [PATCH v3 1/5] f2fs: expand f2fs_invalidate_compress_page() to f2fs_invalidate_compress_pages_range() Yi Sun
@ 2024-11-04 3:45 ` Yi Sun
2024-11-04 3:45 ` [PATCH v3 3/5] f2fs: introduce update_sit_entry_for_release() Yi Sun
` (2 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: Yi Sun @ 2024-11-04 3:45 UTC (permalink / raw)
To: chao, jaegeuk
Cc: yi.sun, sunyibuaa, linux-f2fs-devel, linux-kernel, niuzhiguo84,
Hao_hao.Wang, ke.wang
The new function can process several consecutive blocks at a time.
Signed-off-by: Yi Sun <yi.sun@unisoc.com>
Reviewed-by: Chao Yu <chao@kernel.org>
---
fs/f2fs/data.c | 2 +-
fs/f2fs/f2fs.h | 6 +++---
fs/f2fs/gc.c | 2 +-
fs/f2fs/segment.c | 6 +++---
4 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 306b86b0595d..4f295b6b3c3f 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -1420,7 +1420,7 @@ static int __allocate_data_block(struct dnode_of_data *dn, int seg_type)
return err;
if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO)
- f2fs_invalidate_internal_cache(sbi, old_blkaddr);
+ f2fs_invalidate_internal_cache(sbi, old_blkaddr, 1);
f2fs_update_data_blkaddr(dn, dn->data_blkaddr);
return 0;
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 2b32443d06a3..a1c9341789a1 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -4756,10 +4756,10 @@ static inline void f2fs_truncate_meta_inode_pages(struct f2fs_sb_info *sbi,
}
static inline void f2fs_invalidate_internal_cache(struct f2fs_sb_info *sbi,
- block_t blkaddr)
+ block_t blkaddr, unsigned int len)
{
- f2fs_truncate_meta_inode_pages(sbi, blkaddr, 1);
- f2fs_invalidate_compress_pages_range(sbi, blkaddr, 1);
+ f2fs_truncate_meta_inode_pages(sbi, blkaddr, len);
+ f2fs_invalidate_compress_pages_range(sbi, blkaddr, len);
}
#define EFSBADCRC EBADMSG /* Bad CRC detected */
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index 3e1b6d2ff3a7..7cc7a77d13f6 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -1412,7 +1412,7 @@ static int move_data_block(struct inode *inode, block_t bidx,
page_address(mpage), PAGE_SIZE);
f2fs_put_page(mpage, 1);
- f2fs_invalidate_internal_cache(fio.sbi, fio.old_blkaddr);
+ f2fs_invalidate_internal_cache(fio.sbi, fio.old_blkaddr, 1);
set_page_dirty(fio.encrypted_page);
if (clear_page_dirty_for_io(fio.encrypted_page))
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index edf2a74207b3..5386ae18d808 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -2535,7 +2535,7 @@ void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
if (addr == NEW_ADDR || addr == COMPRESS_ADDR)
return;
- f2fs_invalidate_internal_cache(sbi, addr);
+ f2fs_invalidate_internal_cache(sbi, addr, 1);
/* add it into sit main buffer */
down_write(&sit_i->sentry_lock);
@@ -3855,7 +3855,7 @@ static void do_write_page(struct f2fs_summary *sum, struct f2fs_io_info *fio)
goto out;
}
if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO)
- f2fs_invalidate_internal_cache(fio->sbi, fio->old_blkaddr);
+ f2fs_invalidate_internal_cache(fio->sbi, fio->old_blkaddr, 1);
/* writeout dirty page into bdev */
f2fs_submit_page_write(fio);
@@ -4047,7 +4047,7 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
update_sit_entry(sbi, new_blkaddr, 1);
}
if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO) {
- f2fs_invalidate_internal_cache(sbi, old_blkaddr);
+ f2fs_invalidate_internal_cache(sbi, old_blkaddr, 1);
if (!from_gc)
update_segment_mtime(sbi, old_blkaddr, 0);
update_sit_entry(sbi, old_blkaddr, -1);
--
2.25.1
* [PATCH v3 3/5] f2fs: introduce update_sit_entry_for_release()
2024-11-04 3:45 [PATCH v3 0/5] Speed up f2fs truncate Yi Sun
2024-11-04 3:45 ` [PATCH v3 1/5] f2fs: expand f2fs_invalidate_compress_page() to f2fs_invalidate_compress_pages_range() Yi Sun
2024-11-04 3:45 ` [PATCH v3 2/5] f2fs: add parameter @len to f2fs_invalidate_internal_cache() Yi Sun
@ 2024-11-04 3:45 ` Yi Sun
2024-12-20 21:22 ` Jaegeuk Kim
2024-11-04 3:45 ` [PATCH v3 4/5] f2fs: add parameter @len to f2fs_invalidate_blocks() Yi Sun
2024-11-04 3:45 ` [PATCH v3 5/5] f2fs: Optimize f2fs_truncate_data_blocks_range() Yi Sun
4 siblings, 1 reply; 10+ messages in thread
From: Yi Sun @ 2024-11-04 3:45 UTC (permalink / raw)
To: chao, jaegeuk
Cc: yi.sun, sunyibuaa, linux-f2fs-devel, linux-kernel, niuzhiguo84,
Hao_hao.Wang, ke.wang
This function can process several consecutive blocks at a time.
When using update_sit_entry() to release consecutive blocks,
ensure that the consecutive blocks belong to the same segment,
because after update_sit_entry_for_release() returns, @segno is
still used in update_sit_entry().
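The @del accounting this patch introduces can be modeled in user-space C (a sketch only; SEG_BLOCKS and release_range() are illustrative names, and the real function also handles the mirror bitmap, discard map and checkpoint map):

```c
#include <assert.h>
#include <stdbool.h>

#define SEG_BLOCKS 512	/* illustrative segment size */

/* User-space model of the per-segment validity bitmap. */
static bool cur_valid_map[SEG_BLOCKS];

/*
 * Clear @del_count (= -@del) bits starting at @offset and return the
 * adjusted @del: a bit that was already clear means one fewer block
 * is actually being freed, so @del moves one step toward zero, just
 * as update_sit_entry_for_release() does with "del += 1".
 */
static int release_range(unsigned int offset, int del)
{
	int del_count = -del;

	for (int i = 0; i < del_count; i++) {
		bool exist = cur_valid_map[offset + i];

		cur_valid_map[offset + i] = false;
		if (!exist)
			del += 1;	/* bitmap was already clear */
	}
	return del;
}
```

Returning the adjusted @del lets the caller, update_sit_entry(), keep its existing valid-block bookkeeping unchanged.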
Signed-off-by: Yi Sun <yi.sun@unisoc.com>
---
fs/f2fs/segment.c | 103 ++++++++++++++++++++++++++++++----------------
1 file changed, 68 insertions(+), 35 deletions(-)
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 5386ae18d808..843171ce414b 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -2424,6 +2424,70 @@ static void update_segment_mtime(struct f2fs_sb_info *sbi, block_t blkaddr,
SIT_I(sbi)->max_mtime = ctime;
}
+/*
+ * NOTE: when updating multiple blocks at the same time, please ensure
+ * that the consecutive input blocks belong to the same segment.
+ */
+
+static int update_sit_entry_for_release(struct f2fs_sb_info *sbi, struct seg_entry *se,
+ block_t blkaddr, unsigned int offset, int del)
+{
+ bool exist;
+#ifdef CONFIG_F2FS_CHECK_FS
+ bool mir_exist;
+#endif
+ int i;
+ int del_count = -del;
+
+ f2fs_bug_on(sbi, GET_SEGNO(sbi, blkaddr) != GET_SEGNO(sbi, blkaddr + del_count - 1));
+
+ for (i = 0; i < del_count; i++) {
+ exist = f2fs_test_and_clear_bit(offset + i, se->cur_valid_map);
+#ifdef CONFIG_F2FS_CHECK_FS
+ mir_exist = f2fs_test_and_clear_bit(offset + i,
+ se->cur_valid_map_mir);
+ if (unlikely(exist != mir_exist)) {
+ f2fs_err(sbi, "Inconsistent error when clearing bitmap, blk:%u, old bit:%d",
+ blkaddr + i, exist);
+ f2fs_bug_on(sbi, 1);
+ }
+#endif
+ if (unlikely(!exist)) {
+ f2fs_err(sbi, "Bitmap was wrongly cleared, blk:%u",
+ blkaddr + i);
+ f2fs_bug_on(sbi, 1);
+ se->valid_blocks++;
+ del += 1;
+ } else if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
+ /*
+ * If checkpoints are off, we must not reuse data that
+ * was used in the previous checkpoint. If it was used
+ * before, we must track that to know how much space we
+ * really have.
+ */
+ if (f2fs_test_bit(offset + i, se->ckpt_valid_map)) {
+ spin_lock(&sbi->stat_lock);
+ sbi->unusable_block_count++;
+ spin_unlock(&sbi->stat_lock);
+ }
+ }
+
+ if (f2fs_block_unit_discard(sbi) &&
+ f2fs_test_and_clear_bit(offset + i, se->discard_map))
+ sbi->discard_blks++;
+
+ if (!f2fs_test_bit(offset + i, se->ckpt_valid_map))
+ se->ckpt_valid_blocks -= 1;
+ }
+
+ return del;
+}
+
+/*
+ * If releasing blocks, this function supports updating multiple consecutive blocks
+ * at one time, but please note that these consecutive blocks need to belong to the
+ * same segment.
+ */
static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
{
struct seg_entry *se;
@@ -2479,43 +2543,12 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
if (!f2fs_test_and_set_bit(offset, se->ckpt_valid_map))
se->ckpt_valid_blocks++;
}
- } else {
- exist = f2fs_test_and_clear_bit(offset, se->cur_valid_map);
-#ifdef CONFIG_F2FS_CHECK_FS
- mir_exist = f2fs_test_and_clear_bit(offset,
- se->cur_valid_map_mir);
- if (unlikely(exist != mir_exist)) {
- f2fs_err(sbi, "Inconsistent error when clearing bitmap, blk:%u, old bit:%d",
- blkaddr, exist);
- f2fs_bug_on(sbi, 1);
- }
-#endif
- if (unlikely(!exist)) {
- f2fs_err(sbi, "Bitmap was wrongly cleared, blk:%u",
- blkaddr);
- f2fs_bug_on(sbi, 1);
- se->valid_blocks++;
- del = 0;
- } else if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
- /*
- * If checkpoints are off, we must not reuse data that
- * was used in the previous checkpoint. If it was used
- * before, we must track that to know how much space we
- * really have.
- */
- if (f2fs_test_bit(offset, se->ckpt_valid_map)) {
- spin_lock(&sbi->stat_lock);
- sbi->unusable_block_count++;
- spin_unlock(&sbi->stat_lock);
- }
- }
- if (f2fs_block_unit_discard(sbi) &&
- f2fs_test_and_clear_bit(offset, se->discard_map))
- sbi->discard_blks++;
+ if (!f2fs_test_bit(offset, se->ckpt_valid_map))
+ se->ckpt_valid_blocks += del;
+ } else {
+ del = update_sit_entry_for_release(sbi, se, blkaddr, offset, del);
}
- if (!f2fs_test_bit(offset, se->ckpt_valid_map))
- se->ckpt_valid_blocks += del;
__mark_sit_entry_dirty(sbi, segno);
--
2.25.1
* [PATCH v3 4/5] f2fs: add parameter @len to f2fs_invalidate_blocks()
2024-11-04 3:45 [PATCH v3 0/5] Speed up f2fs truncate Yi Sun
` (2 preceding siblings ...)
2024-11-04 3:45 ` [PATCH v3 3/5] f2fs: introduce update_sit_entry_for_release() Yi Sun
@ 2024-11-04 3:45 ` Yi Sun
2024-11-04 3:45 ` [PATCH v3 5/5] f2fs: Optimize f2fs_truncate_data_blocks_range() Yi Sun
4 siblings, 0 replies; 10+ messages in thread
From: Yi Sun @ 2024-11-04 3:45 UTC (permalink / raw)
To: chao, jaegeuk
Cc: yi.sun, sunyibuaa, linux-f2fs-devel, linux-kernel, niuzhiguo84,
Hao_hao.Wang, ke.wang
The new function can process several consecutive blocks at a time.
The down_write() and up_write() calls in f2fs_invalidate_blocks()
are very time-consuming, so letting f2fs_invalidate_blocks()
process consecutive blocks in one call saves a lot of time.
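The segment-boundary splitting this patch adds to f2fs_invalidate_blocks() can be modeled in user-space C (a sketch under the assumption of a fixed 512-block segment; BLKS_PER_SEG and split_by_segment() are illustrative stand-ins for sbi->blocks_per_seg and the in-kernel loop):

```c
#include <assert.h>

#define BLKS_PER_SEG 512	/* illustrative; f2fs uses sbi->blocks_per_seg */

/*
 * Split the range [addr, addr + len) at segment boundaries: the first
 * chunk runs to the end of addr's segment, middle chunks are whole
 * segments, and the last chunk ends at addr + len - 1. Writes chunk
 * lengths into @chunks and returns the number of chunks.
 */
static unsigned int split_by_segment(unsigned int addr, unsigned int len,
				     unsigned int *chunks, unsigned int max)
{
	unsigned int end = addr + len - 1;
	unsigned int seg = addr / BLKS_PER_SEG;
	unsigned int seg_num = end / BLKS_PER_SEG - seg + 1;
	unsigned int n = 0;

	for (unsigned int i = 0; i < seg_num && n < max; i++, seg++) {
		unsigned int seg_start = seg * BLKS_PER_SEG;
		unsigned int seg_end = seg_start + BLKS_PER_SEG - 1;
		unsigned int lo = addr > seg_start ? addr : seg_start;
		unsigned int hi = end < seg_end ? end : seg_end;

		chunks[n++] = hi - lo + 1;
	}
	return n;
}
```

Each chunk then maps to one update_segment_mtime()/update_sit_entry()/locate_dirty_segment() step inside a single sentry_lock critical section.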
Signed-off-by: Yi Sun <yi.sun@unisoc.com>
---
fs/f2fs/compress.c | 4 ++--
fs/f2fs/f2fs.h | 3 ++-
fs/f2fs/file.c | 8 ++++----
fs/f2fs/node.c | 4 ++--
fs/f2fs/segment.c | 32 +++++++++++++++++++++++++-------
5 files changed, 35 insertions(+), 16 deletions(-)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index f6626f2feb0c..666912c1293e 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1374,7 +1374,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
if (blkaddr == COMPRESS_ADDR)
fio.compr_blocks++;
if (__is_valid_data_blkaddr(blkaddr))
- f2fs_invalidate_blocks(sbi, blkaddr);
+ f2fs_invalidate_blocks(sbi, blkaddr, 1);
f2fs_update_data_blkaddr(&dn, COMPRESS_ADDR);
goto unlock_continue;
}
@@ -1384,7 +1384,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
if (i > cc->valid_nr_cpages) {
if (__is_valid_data_blkaddr(blkaddr)) {
- f2fs_invalidate_blocks(sbi, blkaddr);
+ f2fs_invalidate_blocks(sbi, blkaddr, 1);
f2fs_update_data_blkaddr(&dn, NEW_ADDR);
}
goto unlock_continue;
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index a1c9341789a1..d8691b834aaf 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -3714,7 +3714,8 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi, nid_t ino);
int f2fs_create_flush_cmd_control(struct f2fs_sb_info *sbi);
int f2fs_flush_device_cache(struct f2fs_sb_info *sbi);
void f2fs_destroy_flush_cmd_control(struct f2fs_sb_info *sbi, bool free);
-void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr);
+void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr,
+ unsigned int len);
bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr);
int f2fs_start_discard_thread(struct f2fs_sb_info *sbi);
void f2fs_drop_discard_cmd(struct f2fs_sb_info *sbi);
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index b619f7e55640..9366e7fc7c39 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -652,7 +652,7 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
valid_blocks++;
}
- f2fs_invalidate_blocks(sbi, blkaddr);
+ f2fs_invalidate_blocks(sbi, blkaddr, 1);
if (!released || blkaddr != COMPRESS_ADDR)
nr_free++;
@@ -750,7 +750,7 @@ int f2fs_do_truncate_blocks(struct inode *inode, u64 from, bool lock)
unsigned int i;
for (i = 0; i < ei.len; i++)
- f2fs_invalidate_blocks(sbi, ei.blk + i);
+ f2fs_invalidate_blocks(sbi, ei.blk + i, 1);
dec_valid_block_count(sbi, inode, ei.len);
f2fs_update_time(sbi, REQ_TIME);
@@ -1319,7 +1319,7 @@ static int __roll_back_blkaddrs(struct inode *inode, block_t *blkaddr,
ret = f2fs_get_dnode_of_data(&dn, off + i, LOOKUP_NODE_RA);
if (ret) {
dec_valid_block_count(sbi, inode, 1);
- f2fs_invalidate_blocks(sbi, *blkaddr);
+ f2fs_invalidate_blocks(sbi, *blkaddr, 1);
} else {
f2fs_update_data_blkaddr(&dn, *blkaddr);
}
@@ -1571,7 +1571,7 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
break;
}
- f2fs_invalidate_blocks(sbi, dn->data_blkaddr);
+ f2fs_invalidate_blocks(sbi, dn->data_blkaddr, 1);
f2fs_set_data_blkaddr(dn, NEW_ADDR);
}
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 7d904f2bd5b6..bb0261db5fd3 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -916,7 +916,7 @@ static int truncate_node(struct dnode_of_data *dn)
}
/* Deallocate node address */
- f2fs_invalidate_blocks(sbi, ni.blk_addr);
+ f2fs_invalidate_blocks(sbi, ni.blk_addr, 1);
dec_valid_node_count(sbi, dn->inode, dn->nid == dn->inode->i_ino);
set_node_addr(sbi, &ni, NULL_ADDR, false);
@@ -2758,7 +2758,7 @@ int f2fs_recover_xattr_data(struct inode *inode, struct page *page)
if (err)
return err;
- f2fs_invalidate_blocks(sbi, ni.blk_addr);
+ f2fs_invalidate_blocks(sbi, ni.blk_addr, 1);
dec_valid_node_count(sbi, inode, false);
set_node_addr(sbi, &ni, NULL_ADDR, false);
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 843171ce414b..ad0007294a3f 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -245,7 +245,7 @@ static int __replace_atomic_write_block(struct inode *inode, pgoff_t index,
if (!__is_valid_data_blkaddr(new_addr)) {
if (new_addr == NULL_ADDR)
dec_valid_block_count(sbi, inode, 1);
- f2fs_invalidate_blocks(sbi, dn.data_blkaddr);
+ f2fs_invalidate_blocks(sbi, dn.data_blkaddr, 1);
f2fs_update_data_blkaddr(&dn, new_addr);
} else {
f2fs_replace_block(sbi, &dn, dn.data_blkaddr,
@@ -2559,25 +2559,43 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
get_sec_entry(sbi, segno)->valid_blocks += del;
}
-void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
+void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr,
+ unsigned int len)
{
unsigned int segno = GET_SEGNO(sbi, addr);
struct sit_info *sit_i = SIT_I(sbi);
+ block_t addr_start = addr, addr_end = addr + len - 1;
+ unsigned int seg_num = GET_SEGNO(sbi, addr_end) - segno + 1;
+ unsigned int i = 1, max_blocks = sbi->blocks_per_seg, cnt;
f2fs_bug_on(sbi, addr == NULL_ADDR);
if (addr == NEW_ADDR || addr == COMPRESS_ADDR)
return;
- f2fs_invalidate_internal_cache(sbi, addr, 1);
+ f2fs_invalidate_internal_cache(sbi, addr, len);
/* add it into sit main buffer */
down_write(&sit_i->sentry_lock);
- update_segment_mtime(sbi, addr, 0);
- update_sit_entry(sbi, addr, -1);
+ if (seg_num == 1)
+ cnt = len;
+ else
+ cnt = max_blocks - GET_BLKOFF_FROM_SEG0(sbi, addr);
- /* add it into dirty seglist */
- locate_dirty_segment(sbi, segno);
+ do {
+ update_segment_mtime(sbi, addr_start, 0);
+ update_sit_entry(sbi, addr_start, -cnt);
+
+ /* add it into dirty seglist */
+ locate_dirty_segment(sbi, segno);
+
+ /* update @addr_start and @cnt and @segno */
+ addr_start = START_BLOCK(sbi, ++segno);
+ if (++i == seg_num)
+ cnt = GET_BLKOFF_FROM_SEG0(sbi, addr_end) + 1;
+ else
+ cnt = max_blocks;
+ } while (i <= seg_num);
up_write(&sit_i->sentry_lock);
}
--
2.25.1
* [PATCH v3 5/5] f2fs: Optimize f2fs_truncate_data_blocks_range()
2024-11-04 3:45 [PATCH v3 0/5] Speed up f2fs truncate Yi Sun
` (3 preceding siblings ...)
2024-11-04 3:45 ` [PATCH v3 4/5] f2fs: add parameter @len to f2fs_invalidate_blocks() Yi Sun
@ 2024-11-04 3:45 ` Yi Sun
2024-12-11 3:08 ` yi sun
4 siblings, 1 reply; 10+ messages in thread
From: Yi Sun @ 2024-11-04 3:45 UTC (permalink / raw)
To: chao, jaegeuk
Cc: yi.sun, sunyibuaa, linux-f2fs-devel, linux-kernel, niuzhiguo84,
Hao_hao.Wang, ke.wang
Function f2fs_invalidate_blocks() can now process consecutive
blocks at a time, so f2fs_truncate_data_blocks_range() is
optimized to use the new functionality of
f2fs_invalidate_blocks().
Signed-off-by: Yi Sun <yi.sun@unisoc.com>
---
fs/f2fs/file.c | 72 +++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 68 insertions(+), 4 deletions(-)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 9366e7fc7c39..d20cc5f36d4c 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -612,6 +612,15 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
return finish_preallocate_blocks(inode);
}
+static bool check_curr_block_is_consecutive(struct f2fs_sb_info *sbi,
+ block_t curr, block_t end)
+{
+ if (curr - end == 1 || curr == end)
+ return true;
+ else
+ return false;
+}
+
void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
{
struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
@@ -621,8 +630,27 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
int cluster_index = 0, valid_blocks = 0;
int cluster_size = F2FS_I(dn->inode)->i_cluster_size;
bool released = !atomic_read(&F2FS_I(dn->inode)->i_compr_blocks);
+ /*
+ * Temporarily record block locations.
+ * When the current @blkaddr and @blkaddr_end can be processed
+ * together, update the value of @blkaddr_end.
+ * When the current @blkaddr is not consecutive with
+ * @blkaddr_end, process the consecutive block
+ * range [blkaddr_start, blkaddr_end].
+ */
+ block_t blkaddr_start, blkaddr_end;
+ /*
+ * Avoid processing invalid data blocks.
+ * Since @blkaddr_start and @blkaddr_end may be assigned
+ * NULL_ADDR or an invalid block address, @last_valid is
+ * used to record this situation.
+ */
+ bool last_valid = false;
+ /* Process the last @blkaddr separately? */
+ bool last_one = true;
addr = get_dnode_addr(dn->inode, dn->node_page) + ofs;
+ blkaddr_start = blkaddr_end = le32_to_cpu(*addr);
/* Assumption: truncation starts with cluster */
for (; count > 0; count--, addr++, dn->ofs_in_node++, cluster_index++) {
@@ -638,24 +666,60 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
}
if (blkaddr == NULL_ADDR)
- continue;
+ goto next;
f2fs_set_data_blkaddr(dn, NULL_ADDR);
if (__is_valid_data_blkaddr(blkaddr)) {
if (time_to_inject(sbi, FAULT_BLKADDR_CONSISTENCE))
- continue;
+ goto next;
if (!f2fs_is_valid_blkaddr_raw(sbi, blkaddr,
DATA_GENERIC_ENHANCE))
- continue;
+ goto next;
if (compressed_cluster)
valid_blocks++;
}
- f2fs_invalidate_blocks(sbi, blkaddr, 1);
+
+ if (check_curr_block_is_consecutive(sbi, blkaddr, blkaddr_end)) {
+ /*
+ * The current block @blkaddr is continuous with
+ * @blkaddr_end, so @blkaddr_end is updated.
+ * And the f2fs_invalidate_blocks() is skipped
+ * until @blkaddr that cannot be processed
+ * together is encountered.
+ */
+ blkaddr_end = blkaddr;
+ if (count == 1)
+ last_one = false;
+ else
+ goto skip_invalid;
+ }
+
+ f2fs_invalidate_blocks(sbi, blkaddr_start,
+ blkaddr_end - blkaddr_start + 1);
+ blkaddr_start = blkaddr_end = blkaddr;
+
+ if (count == 1 && last_one)
+ f2fs_invalidate_blocks(sbi, blkaddr, 1);
+
+skip_invalid:
+ last_valid = true;
if (!released || blkaddr != COMPRESS_ADDR)
nr_free++;
+
+ continue;
+
+next:
+ /* If consecutive blocks have been recorded, we need to process them. */
+ if (last_valid == true)
+ f2fs_invalidate_blocks(sbi, blkaddr_start,
+ blkaddr_end - blkaddr_start + 1);
+
+ blkaddr_start = blkaddr_end = le32_to_cpu(*(addr + 1));
+ last_valid = false;
+
}
if (compressed_cluster)
--
2.25.1
* Re: [PATCH v3 5/5] f2fs: Optimize f2fs_truncate_data_blocks_range()
2024-11-04 3:45 ` [PATCH v3 5/5] f2fs: Optimize f2fs_truncate_data_blocks_range() Yi Sun
@ 2024-12-11 3:08 ` yi sun
2024-12-12 16:22 ` Jaegeuk Kim
0 siblings, 1 reply; 10+ messages in thread
From: yi sun @ 2024-12-11 3:08 UTC (permalink / raw)
To: Yi Sun
Cc: chao, jaegeuk, linux-f2fs-devel, linux-kernel, niuzhiguo84,
Hao_hao.Wang, ke.wang
Kindly ping.
I think there are no problems with the first few patches, but the
current patch may still have room for improvement. Do you have any
good suggestions?
On Mon, Nov 4, 2024 at 11:46 AM Yi Sun <yi.sun@unisoc.com> wrote:
>
> Function f2fs_invalidate_blocks() can process continuous
> blocks at a time, so f2fs_truncate_data_blocks_range() is
> optimized to use the new functionality of
> f2fs_invalidate_blocks().
>
> Signed-off-by: Yi Sun <yi.sun@unisoc.com>
> ---
> fs/f2fs/file.c | 72 +++++++++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 68 insertions(+), 4 deletions(-)
>
> diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> index 9366e7fc7c39..d20cc5f36d4c 100644
> --- a/fs/f2fs/file.c
> +++ b/fs/f2fs/file.c
> @@ -612,6 +612,15 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
> return finish_preallocate_blocks(inode);
> }
>
> +static bool check_curr_block_is_consecutive(struct f2fs_sb_info *sbi,
> + block_t curr, block_t end)
> +{
> + if (curr - end == 1 || curr == end)
> + return true;
> + else
> + return false;
> +}
> +
> void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
> {
> struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
> @@ -621,8 +630,27 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
> int cluster_index = 0, valid_blocks = 0;
> int cluster_size = F2FS_I(dn->inode)->i_cluster_size;
> bool released = !atomic_read(&F2FS_I(dn->inode)->i_compr_blocks);
> + /*
> + * Temporary record location.
> + * When the current @blkaddr and @blkaddr_end can be processed
> + * together, update the value of @blkaddr_end.
> + * When it is detected that current @blkaddr is not continues with
> + * @blkaddr_end, it is necessary to process continues blocks
> + * range [blkaddr_start, blkaddr_end].
> + */
> + block_t blkaddr_start, blkaddr_end;
> + /*.
> + * To avoid processing various invalid data blocks.
> + * Because @blkaddr_start and @blkaddr_end may be assigned
> + * NULL_ADDR or invalid data blocks, @last_valid is used to
> + * record this situation.
> + */
> + bool last_valid = false;
> + /* Process the last @blkaddr separately? */
> + bool last_one = true;
>
> addr = get_dnode_addr(dn->inode, dn->node_page) + ofs;
> + blkaddr_start = blkaddr_end = le32_to_cpu(*addr);
>
> /* Assumption: truncation starts with cluster */
> for (; count > 0; count--, addr++, dn->ofs_in_node++, cluster_index++) {
> @@ -638,24 +666,60 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
> }
>
> if (blkaddr == NULL_ADDR)
> - continue;
> + goto next;
>
> f2fs_set_data_blkaddr(dn, NULL_ADDR);
>
> if (__is_valid_data_blkaddr(blkaddr)) {
> if (time_to_inject(sbi, FAULT_BLKADDR_CONSISTENCE))
> - continue;
> + goto next;
> if (!f2fs_is_valid_blkaddr_raw(sbi, blkaddr,
> DATA_GENERIC_ENHANCE))
> - continue;
> + goto next;
> if (compressed_cluster)
> valid_blocks++;
> }
>
> - f2fs_invalidate_blocks(sbi, blkaddr, 1);
> +
> + if (check_curr_block_is_consecutive(sbi, blkaddr, blkaddr_end)) {
> + /*
> + * The current block @blkaddr is continuous with
> + * @blkaddr_end, so @blkaddr_end is updated.
> + * And the f2fs_invalidate_blocks() is skipped
> + * until @blkaddr that cannot be processed
> + * together is encountered.
> + */
> + blkaddr_end = blkaddr;
> + if (count == 1)
> + last_one = false;
> + else
> + goto skip_invalid;
> + }
> +
> + f2fs_invalidate_blocks(sbi, blkaddr_start,
> + blkaddr_end - blkaddr_start + 1);
> + blkaddr_start = blkaddr_end = blkaddr;
> +
> + if (count == 1 && last_one)
> + f2fs_invalidate_blocks(sbi, blkaddr, 1);
> +
> +skip_invalid:
> + last_valid = true;
>
> if (!released || blkaddr != COMPRESS_ADDR)
> nr_free++;
> +
> + continue;
> +
> +next:
> + /* If consecutive blocks have been recorded, we need to process them. */
> + if (last_valid == true)
> + f2fs_invalidate_blocks(sbi, blkaddr_start,
> + blkaddr_end - blkaddr_start + 1);
> +
> + blkaddr_start = blkaddr_end = le32_to_cpu(*(addr + 1));
> + last_valid = false;
> +
> }
>
> if (compressed_cluster)
> --
> 2.25.1
>
* Re: [PATCH v3 5/5] f2fs: Optimize f2fs_truncate_data_blocks_range()
2024-12-11 3:08 ` yi sun
@ 2024-12-12 16:22 ` Jaegeuk Kim
2024-12-18 8:00 ` yi sun
0 siblings, 1 reply; 10+ messages in thread
From: Jaegeuk Kim @ 2024-12-12 16:22 UTC (permalink / raw)
To: yi sun
Cc: Yi Sun, chao, linux-f2fs-devel, linux-kernel, niuzhiguo84,
Hao_hao.Wang, ke.wang
On 12/11, yi sun wrote:
> Kindly ping.
> I think there are no problems with the first few patches, but the
> current patch may still have room for improvement. Do you have any
> good suggestions?
Hi, may I ask for some basic tests? Have you run xfstests?
>
> On Mon, Nov 4, 2024 at 11:46 AM Yi Sun <yi.sun@unisoc.com> wrote:
> >
> > Function f2fs_invalidate_blocks() can process continuous
> > blocks at a time, so f2fs_truncate_data_blocks_range() is
> > optimized to use the new functionality of
> > f2fs_invalidate_blocks().
> >
> > Signed-off-by: Yi Sun <yi.sun@unisoc.com>
> > ---
> > fs/f2fs/file.c | 72 +++++++++++++++++++++++++++++++++++++++++++++++---
> > 1 file changed, 68 insertions(+), 4 deletions(-)
> >
> > diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> > index 9366e7fc7c39..d20cc5f36d4c 100644
> > --- a/fs/f2fs/file.c
> > +++ b/fs/f2fs/file.c
> > @@ -612,6 +612,15 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
> > return finish_preallocate_blocks(inode);
> > }
> >
> > +static bool check_curr_block_is_consecutive(struct f2fs_sb_info *sbi,
> > + block_t curr, block_t end)
> > +{
> > + if (curr - end == 1 || curr == end)
> > + return true;
> > + else
> > + return false;
> > +}
> > +
> > void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
> > {
> > struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
> > @@ -621,8 +630,27 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
> > int cluster_index = 0, valid_blocks = 0;
> > int cluster_size = F2FS_I(dn->inode)->i_cluster_size;
> > bool released = !atomic_read(&F2FS_I(dn->inode)->i_compr_blocks);
> > + /*
> > + * Temporary record location.
> > + * When the current @blkaddr and @blkaddr_end can be processed
> > + * together, update the value of @blkaddr_end.
> > + * When it is detected that current @blkaddr is not continues with
> > + * @blkaddr_end, it is necessary to process continues blocks
> > + * range [blkaddr_start, blkaddr_end].
> > + */
> > + block_t blkaddr_start, blkaddr_end;
> > + /*.
> > + * To avoid processing various invalid data blocks.
> > + * Because @blkaddr_start and @blkaddr_end may be assigned
> > + * NULL_ADDR or invalid data blocks, @last_valid is used to
> > + * record this situation.
> > + */
> > + bool last_valid = false;
> > + /* Process the last @blkaddr separately? */
> > + bool last_one = true;
> >
> > addr = get_dnode_addr(dn->inode, dn->node_page) + ofs;
> > + blkaddr_start = blkaddr_end = le32_to_cpu(*addr);
> >
> > /* Assumption: truncation starts with cluster */
> > for (; count > 0; count--, addr++, dn->ofs_in_node++, cluster_index++) {
> > @@ -638,24 +666,60 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
> > }
> >
> > if (blkaddr == NULL_ADDR)
> > - continue;
> > + goto next;
> >
> > f2fs_set_data_blkaddr(dn, NULL_ADDR);
> >
> > if (__is_valid_data_blkaddr(blkaddr)) {
> > if (time_to_inject(sbi, FAULT_BLKADDR_CONSISTENCE))
> > - continue;
> > + goto next;
> > if (!f2fs_is_valid_blkaddr_raw(sbi, blkaddr,
> > DATA_GENERIC_ENHANCE))
> > - continue;
> > + goto next;
> > if (compressed_cluster)
> > valid_blocks++;
> > }
> >
> > - f2fs_invalidate_blocks(sbi, blkaddr, 1);
> > +
> > + if (check_curr_block_is_consecutive(sbi, blkaddr, blkaddr_end)) {
> > + /*
> > + * The current block @blkaddr is contiguous with
> > + * @blkaddr_end, so @blkaddr_end is updated and
> > + * f2fs_invalidate_blocks() is skipped until a
> > + * @blkaddr that cannot be merged into the range
> > + * is encountered.
> > + */
> > + blkaddr_end = blkaddr;
> > + if (count == 1)
> > + last_one = false;
> > + else
> > + goto skip_invalid;
> > + }
> > +
> > + f2fs_invalidate_blocks(sbi, blkaddr_start,
> > + blkaddr_end - blkaddr_start + 1);
> > + blkaddr_start = blkaddr_end = blkaddr;
> > +
> > + if (count == 1 && last_one)
> > + f2fs_invalidate_blocks(sbi, blkaddr, 1);
> > +
> > +skip_invalid:
> > + last_valid = true;
> >
> > if (!released || blkaddr != COMPRESS_ADDR)
> > nr_free++;
> > +
> > + continue;
> > +
> > +next:
> > + /* If consecutive blocks have been recorded, we need to process them. */
> > + if (last_valid == true)
> > + f2fs_invalidate_blocks(sbi, blkaddr_start,
> > + blkaddr_end - blkaddr_start + 1);
> > +
> > + blkaddr_start = blkaddr_end = le32_to_cpu(*(addr + 1));
> > + last_valid = false;
> > +
> > }
> >
> > if (compressed_cluster)
> > --
> > 2.25.1
> >
* Re: [PATCH v3 5/5] f2fs: Optimize f2fs_truncate_data_blocks_range()
2024-12-12 16:22 ` Jaegeuk Kim
@ 2024-12-18 8:00 ` yi sun
0 siblings, 0 replies; 10+ messages in thread
From: yi sun @ 2024-12-18 8:00 UTC (permalink / raw)
To: Jaegeuk Kim
Cc: Yi Sun, chao, linux-f2fs-devel, linux-kernel, niuzhiguo84,
Hao_hao.Wang, ke.wang
On Fri, Dec 13, 2024 at 12:22 AM Jaegeuk Kim <jaegeuk@kernel.org> wrote:
>
> On 12/11, yi sun wrote:
> > Kindly ping.
> > I think there are no problems with the first few patches, but the
> > current patch may still have room for improvement. Do you have any
> > good suggestions?
>
> Hi, may I ask for some basic tests? Have you run xfstests?
>
Yes, I used phones running Android 15 with kernel 6.6 for basic
testing, including 48 hours of "monkey + file read/write/delete"
testing and xfstests.
No errors were found.
> >
> > On Mon, Nov 4, 2024 at 11:46 AM Yi Sun <yi.sun@unisoc.com> wrote:
> > >
> > > Function f2fs_invalidate_blocks() can now process consecutive
> > > blocks in a single call, so f2fs_truncate_data_blocks_range()
> > > is optimized to use this new capability.
> > >
> > > Signed-off-by: Yi Sun <yi.sun@unisoc.com>
> > > ---
> > > fs/f2fs/file.c | 72 +++++++++++++++++++++++++++++++++++++++++++++++---
> > > 1 file changed, 68 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> > > index 9366e7fc7c39..d20cc5f36d4c 100644
> > > --- a/fs/f2fs/file.c
> > > +++ b/fs/f2fs/file.c
> > > @@ -612,6 +612,15 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
> > > return finish_preallocate_blocks(inode);
> > > }
> > >
> > > +static bool check_curr_block_is_consecutive(struct f2fs_sb_info *sbi,
> > > + block_t curr, block_t end)
> > > +{
> > > + if (curr - end == 1 || curr == end)
> > > + return true;
> > > + else
> > > + return false;
> > > +}
> > > +
> > > void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
> > > {
> > > struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
> > > @@ -621,8 +630,27 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
> > > int cluster_index = 0, valid_blocks = 0;
> > > int cluster_size = F2FS_I(dn->inode)->i_cluster_size;
> > > bool released = !atomic_read(&F2FS_I(dn->inode)->i_compr_blocks);
> > > + /*
> > > + * Temporarily record the merge range.
> > > + * While the current @blkaddr and @blkaddr_end can be processed
> > > + * together, keep updating @blkaddr_end.
> > > + * When the current @blkaddr is detected to be non-contiguous
> > > + * with @blkaddr_end, the contiguous block range
> > > + * [blkaddr_start, blkaddr_end] must be processed.
> > > + */
> > > + block_t blkaddr_start, blkaddr_end;
> > > + /*
> > > + * Avoid processing invalid data blocks.
> > > + * Since @blkaddr_start and @blkaddr_end may be assigned
> > > + * NULL_ADDR or an invalid data block, @last_valid records
> > > + * whether the currently recorded range is valid.
> > > + */
> > > + bool last_valid = false;
> > > + /* Process the last @blkaddr separately? */
> > > + bool last_one = true;
> > >
> > > addr = get_dnode_addr(dn->inode, dn->node_page) + ofs;
> > > + blkaddr_start = blkaddr_end = le32_to_cpu(*addr);
> > >
> > > /* Assumption: truncation starts with cluster */
> > > for (; count > 0; count--, addr++, dn->ofs_in_node++, cluster_index++) {
> > > @@ -638,24 +666,60 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
> > > }
> > >
> > > if (blkaddr == NULL_ADDR)
> > > - continue;
> > > + goto next;
> > >
> > > f2fs_set_data_blkaddr(dn, NULL_ADDR);
> > >
> > > if (__is_valid_data_blkaddr(blkaddr)) {
> > > if (time_to_inject(sbi, FAULT_BLKADDR_CONSISTENCE))
> > > - continue;
> > > + goto next;
> > > if (!f2fs_is_valid_blkaddr_raw(sbi, blkaddr,
> > > DATA_GENERIC_ENHANCE))
> > > - continue;
> > > + goto next;
> > > if (compressed_cluster)
> > > valid_blocks++;
> > > }
> > >
> > > - f2fs_invalidate_blocks(sbi, blkaddr, 1);
> > > +
> > > + if (check_curr_block_is_consecutive(sbi, blkaddr, blkaddr_end)) {
> > > + /*
> > > + * The current block @blkaddr is contiguous with
> > > + * @blkaddr_end, so @blkaddr_end is updated and
> > > + * f2fs_invalidate_blocks() is skipped until a
> > > + * @blkaddr that cannot be merged into the range
> > > + * is encountered.
> > > + */
> > > + blkaddr_end = blkaddr;
> > > + if (count == 1)
> > > + last_one = false;
> > > + else
> > > + goto skip_invalid;
> > > + }
> > > +
> > > + f2fs_invalidate_blocks(sbi, blkaddr_start,
> > > + blkaddr_end - blkaddr_start + 1);
> > > + blkaddr_start = blkaddr_end = blkaddr;
> > > +
> > > + if (count == 1 && last_one)
> > > + f2fs_invalidate_blocks(sbi, blkaddr, 1);
> > > +
> > > +skip_invalid:
> > > + last_valid = true;
> > >
> > > if (!released || blkaddr != COMPRESS_ADDR)
> > > nr_free++;
> > > +
> > > + continue;
> > > +
> > > +next:
> > > + /* If consecutive blocks have been recorded, we need to process them. */
> > > + if (last_valid == true)
> > > + f2fs_invalidate_blocks(sbi, blkaddr_start,
> > > + blkaddr_end - blkaddr_start + 1);
> > > +
> > > + blkaddr_start = blkaddr_end = le32_to_cpu(*(addr + 1));
> > > + last_valid = false;
> > > +
> > > }
> > >
> > > if (compressed_cluster)
> > > --
> > > 2.25.1
> > >
* Re: [PATCH v3 3/5] f2fs: introduce update_sit_entry_for_release()
2024-11-04 3:45 ` [PATCH v3 3/5] f2fs: introduce update_sit_entry_for_release() Yi Sun
@ 2024-12-20 21:22 ` Jaegeuk Kim
0 siblings, 0 replies; 10+ messages in thread
From: Jaegeuk Kim @ 2024-12-20 21:22 UTC (permalink / raw)
To: Yi Sun
Cc: chao, sunyibuaa, linux-f2fs-devel, linux-kernel, niuzhiguo84,
Hao_hao.Wang, ke.wang
This makes the code inconsistent. Can you refactor first, and then add
the loop separately?
For example,
1) add two functions, update_sit_entry_for_alloc() and update_sit_entry_for_release()
2) add a loop in update_sit_entry_for_release()
Thanks,
On 11/04, Yi Sun wrote:
> This function can process several consecutive blocks at a time.
>
> When using update_sit_entry() to release consecutive blocks,
> ensure that the consecutive blocks belong to the same segment.
> This is because, after update_sit_entry_for_release(), @segno is
> still in use in update_sit_entry().
>
> Signed-off-by: Yi Sun <yi.sun@unisoc.com>
> ---
> fs/f2fs/segment.c | 103 ++++++++++++++++++++++++++++++----------------
> 1 file changed, 68 insertions(+), 35 deletions(-)
>
> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> index 5386ae18d808..843171ce414b 100644
> --- a/fs/f2fs/segment.c
> +++ b/fs/f2fs/segment.c
> @@ -2424,6 +2424,70 @@ static void update_segment_mtime(struct f2fs_sb_info *sbi, block_t blkaddr,
> SIT_I(sbi)->max_mtime = ctime;
> }
>
> +/*
> + * NOTE: when updating multiple blocks at the same time, please ensure
> + * that the consecutive input blocks belong to the same segment.
> + */
> +
> +static int update_sit_entry_for_release(struct f2fs_sb_info *sbi, struct seg_entry *se,
> + block_t blkaddr, unsigned int offset, int del)
> +{
> + bool exist;
> +#ifdef CONFIG_F2FS_CHECK_FS
> + bool mir_exist;
> +#endif
> + int i;
> + int del_count = -del;
> +
> + f2fs_bug_on(sbi, GET_SEGNO(sbi, blkaddr) != GET_SEGNO(sbi, blkaddr + del_count - 1));
> +
> + for (i = 0; i < del_count; i++) {
> + exist = f2fs_test_and_clear_bit(offset + i, se->cur_valid_map);
> +#ifdef CONFIG_F2FS_CHECK_FS
> + mir_exist = f2fs_test_and_clear_bit(offset + i,
> + se->cur_valid_map_mir);
> + if (unlikely(exist != mir_exist)) {
> + f2fs_err(sbi, "Inconsistent error when clearing bitmap, blk:%u, old bit:%d",
> + blkaddr + i, exist);
> + f2fs_bug_on(sbi, 1);
> + }
> +#endif
> + if (unlikely(!exist)) {
> + f2fs_err(sbi, "Bitmap was wrongly cleared, blk:%u",
> + blkaddr + i);
> + f2fs_bug_on(sbi, 1);
> + se->valid_blocks++;
> + del += 1;
> + } else if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
> + /*
> + * If checkpoints are off, we must not reuse data that
> + * was used in the previous checkpoint. If it was used
> + * before, we must track that to know how much space we
> + * really have.
> + */
> + if (f2fs_test_bit(offset + i, se->ckpt_valid_map)) {
> + spin_lock(&sbi->stat_lock);
> + sbi->unusable_block_count++;
> + spin_unlock(&sbi->stat_lock);
> + }
> + }
> +
> + if (f2fs_block_unit_discard(sbi) &&
> + f2fs_test_and_clear_bit(offset + i, se->discard_map))
> + sbi->discard_blks++;
> +
> + if (!f2fs_test_bit(offset + i, se->ckpt_valid_map))
> + se->ckpt_valid_blocks -= 1;
> + }
> +
> + return del;
> +}
> +
> +/*
> + * When releasing blocks, this function can update multiple consecutive
> + * blocks at once, but note that these consecutive blocks must belong
> + * to the same segment.
> + */
> static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
> {
> struct seg_entry *se;
> @@ -2479,43 +2543,12 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
> if (!f2fs_test_and_set_bit(offset, se->ckpt_valid_map))
> se->ckpt_valid_blocks++;
> }
> - } else {
> - exist = f2fs_test_and_clear_bit(offset, se->cur_valid_map);
> -#ifdef CONFIG_F2FS_CHECK_FS
> - mir_exist = f2fs_test_and_clear_bit(offset,
> - se->cur_valid_map_mir);
> - if (unlikely(exist != mir_exist)) {
> - f2fs_err(sbi, "Inconsistent error when clearing bitmap, blk:%u, old bit:%d",
> - blkaddr, exist);
> - f2fs_bug_on(sbi, 1);
> - }
> -#endif
> - if (unlikely(!exist)) {
> - f2fs_err(sbi, "Bitmap was wrongly cleared, blk:%u",
> - blkaddr);
> - f2fs_bug_on(sbi, 1);
> - se->valid_blocks++;
> - del = 0;
> - } else if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
> - /*
> - * If checkpoints are off, we must not reuse data that
> - * was used in the previous checkpoint. If it was used
> - * before, we must track that to know how much space we
> - * really have.
> - */
> - if (f2fs_test_bit(offset, se->ckpt_valid_map)) {
> - spin_lock(&sbi->stat_lock);
> - sbi->unusable_block_count++;
> - spin_unlock(&sbi->stat_lock);
> - }
> - }
>
> - if (f2fs_block_unit_discard(sbi) &&
> - f2fs_test_and_clear_bit(offset, se->discard_map))
> - sbi->discard_blks++;
> + if (!f2fs_test_bit(offset, se->ckpt_valid_map))
> + se->ckpt_valid_blocks += del;
> + } else {
> + del = update_sit_entry_for_release(sbi, se, blkaddr, offset, del);
> }
> - if (!f2fs_test_bit(offset, se->ckpt_valid_map))
> - se->ckpt_valid_blocks += del;
>
> __mark_sit_entry_dirty(sbi, segno);
>
> --
> 2.25.1
end of thread, other threads:[~2024-12-20 21:22 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-11-04 3:45 [PATCH v3 0/5] Speed up f2fs truncate Yi Sun
2024-11-04 3:45 ` [PATCH v3 1/5] f2fs: expand f2fs_invalidate_compress_page() to f2fs_invalidate_compress_pages_range() Yi Sun
2024-11-04 3:45 ` [PATCH v3 2/5] f2fs: add parameter @len to f2fs_invalidate_internal_cache() Yi Sun
2024-11-04 3:45 ` [PATCH v3 3/5] f2fs: introduce update_sit_entry_for_release() Yi Sun
2024-12-20 21:22 ` Jaegeuk Kim
2024-11-04 3:45 ` [PATCH v3 4/5] f2fs: add parameter @len to f2fs_invalidate_blocks() Yi Sun
2024-11-04 3:45 ` [PATCH v3 5/5] f2fs: Optimize f2fs_truncate_data_blocks_range() Yi Sun
2024-12-11 3:08 ` yi sun
2024-12-12 16:22 ` Jaegeuk Kim
2024-12-18 8:00 ` yi sun