* [RFC PATCH 0/2] Speed up f2fs truncate
@ 2024-10-16 5:27 Yi Sun
2024-10-16 5:27 ` [RFC PATCH 1/2] f2fs: introduce update_sit_entry_for_release() Yi Sun
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Yi Sun @ 2024-10-16 5:27 UTC (permalink / raw)
To: chao
Cc: jaegeuk, linux-f2fs-devel, linux-kernel, yi.sun, sunyibuaa,
niuzhiguo84, hao_hao.wang, ke.wang
Deleting large files is time-consuming, and a large part
of the time is spent in f2fs_invalidate_blocks()
->down_write(sit_info->sentry_lock) and up_write().
If some blocks are contiguous and belong to the same segment,
we can process these blocks at the same time. This reduces
the number of calls to down_write() and up_write(),
thereby improving the overall speed of truncate.
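The idea can be sketched in user space (a minimal model; BLKS_PER_SEG,
seg_of() and count_batches() are invented for illustration and only
stand in for the real GET_SEGNO() geometry):

```c
#include <assert.h>
#include <stddef.h>

#define BLKS_PER_SEG 512U /* hypothetical segment size in blocks */

/* Stand-in for GET_SEGNO(): which segment a block address belongs to. */
static unsigned int seg_of(unsigned int blkaddr)
{
	return blkaddr / BLKS_PER_SEG;
}

/*
 * Count the lock round-trips needed when runs of consecutive blocks
 * inside one segment are batched into a single call, instead of
 * taking sentry_lock once per block.
 */
static int count_batches(const unsigned int *blk, size_t n)
{
	int batches = 0;
	size_t i = 0;

	while (i < n) {
		size_t j = i + 1;

		/* extend the run while addresses are consecutive and share a segment */
		while (j < n && blk[j] == blk[j - 1] + 1 &&
		       seg_of(blk[j]) == seg_of(blk[i]))
			j++;
		batches++; /* one down_write()/up_write() pair per run */
		i = j;
	}
	return batches;
}
```

For a mostly contiguous file, this collapses one lock round-trip per
block into one per run, which is where the speedup below comes from.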
Test steps:
Set the CPU and DDR frequencies to the maximum.
dd if=/dev/random of=./test.txt bs=1M count=100000
sync
rm test.txt
Time comparison of rm:

original    optimization    ratio
7.17s       3.27s           54.39%
Hi, currently I have only optimized the truncate path; other
callers of f2fs_invalidate_blocks() are not taken into
consideration. So the new functions
f2fs_invalidate_compress_pages_range() and
check_f2fs_invalidate_consecutive_blocks() are not
general-purpose. Is this modification acceptable?
Yi Sun (2):
f2fs: introduce update_sit_entry_for_release()
f2fs: introduce f2fs_invalidate_consecutive_blocks() for truncate
fs/f2fs/compress.c | 14 ++++++
fs/f2fs/f2fs.h | 5 ++
fs/f2fs/file.c | 34 ++++++++++++-
fs/f2fs/segment.c | 116 +++++++++++++++++++++++++++++++--------------
4 files changed, 133 insertions(+), 36 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 9+ messages in thread
* [RFC PATCH 1/2] f2fs: introduce update_sit_entry_for_release()
2024-10-16 5:27 [RFC PATCH 0/2] Speed up f2fs truncate Yi Sun
@ 2024-10-16 5:27 ` Yi Sun
2024-10-16 5:27 ` [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks() Yi Sun
2024-10-28 17:40 ` [f2fs-dev] [RFC PATCH 0/2] Speed up f2fs truncate patchwork-bot+f2fs
2 siblings, 0 replies; 9+ messages in thread
From: Yi Sun @ 2024-10-16 5:27 UTC (permalink / raw)
To: chao
Cc: jaegeuk, linux-f2fs-devel, linux-kernel, yi.sun, sunyibuaa,
niuzhiguo84, hao_hao.wang, ke.wang
This function can process several consecutive blocks at a time.
Signed-off-by: Yi Sun <yi.sun@unisoc.com>
---
fs/f2fs/segment.c | 91 +++++++++++++++++++++++++++++------------------
1 file changed, 56 insertions(+), 35 deletions(-)
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index d91fbd1b27ba..f118faf36d35 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -2424,6 +2424,58 @@ static void update_segment_mtime(struct f2fs_sb_info *sbi, block_t blkaddr,
SIT_I(sbi)->max_mtime = ctime;
}
+static int update_sit_entry_for_release(struct f2fs_sb_info *sbi, struct seg_entry *se,
+ block_t blkaddr, unsigned int offset, int del)
+{
+ bool exist;
+#ifdef CONFIG_F2FS_CHECK_FS
+ bool mir_exist;
+#endif
+ int i;
+ int del_count = -del;
+
+ for (i = 0; i < del_count; i++) {
+ exist = f2fs_test_and_clear_bit(offset + i, se->cur_valid_map);
+#ifdef CONFIG_F2FS_CHECK_FS
+ mir_exist = f2fs_test_and_clear_bit(offset + i,
+ se->cur_valid_map_mir);
+ if (unlikely(exist != mir_exist)) {
+ f2fs_err(sbi, "Inconsistent error when clearing bitmap, blk:%u, old bit:%d",
+ blkaddr + i, exist);
+ f2fs_bug_on(sbi, 1);
+ }
+#endif
+ if (unlikely(!exist)) {
+ f2fs_err(sbi, "Bitmap was wrongly cleared, blk:%u",
+ blkaddr + i);
+ f2fs_bug_on(sbi, 1);
+ se->valid_blocks++;
+ del += 1;
+ } else if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
+ /*
+ * If checkpoints are off, we must not reuse data that
+ * was used in the previous checkpoint. If it was used
+ * before, we must track that to know how much space we
+ * really have.
+ */
+ if (f2fs_test_bit(offset + i, se->ckpt_valid_map)) {
+ spin_lock(&sbi->stat_lock);
+ sbi->unusable_block_count++;
+ spin_unlock(&sbi->stat_lock);
+ }
+ }
+
+ if (f2fs_block_unit_discard(sbi) &&
+ f2fs_test_and_clear_bit(offset + i, se->discard_map))
+ sbi->discard_blks++;
+
+ if (!f2fs_test_bit(offset + i, se->ckpt_valid_map))
+ se->ckpt_valid_blocks -= 1;
+ }
+
+ return del;
+}
+
static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
{
struct seg_entry *se;
@@ -2479,43 +2531,12 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
if (!f2fs_test_and_set_bit(offset, se->ckpt_valid_map))
se->ckpt_valid_blocks++;
}
- } else {
- exist = f2fs_test_and_clear_bit(offset, se->cur_valid_map);
-#ifdef CONFIG_F2FS_CHECK_FS
- mir_exist = f2fs_test_and_clear_bit(offset,
- se->cur_valid_map_mir);
- if (unlikely(exist != mir_exist)) {
- f2fs_err(sbi, "Inconsistent error when clearing bitmap, blk:%u, old bit:%d",
- blkaddr, exist);
- f2fs_bug_on(sbi, 1);
- }
-#endif
- if (unlikely(!exist)) {
- f2fs_err(sbi, "Bitmap was wrongly cleared, blk:%u",
- blkaddr);
- f2fs_bug_on(sbi, 1);
- se->valid_blocks++;
- del = 0;
- } else if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
- /*
- * If checkpoints are off, we must not reuse data that
- * was used in the previous checkpoint. If it was used
- * before, we must track that to know how much space we
- * really have.
- */
- if (f2fs_test_bit(offset, se->ckpt_valid_map)) {
- spin_lock(&sbi->stat_lock);
- sbi->unusable_block_count++;
- spin_unlock(&sbi->stat_lock);
- }
- }
- if (f2fs_block_unit_discard(sbi) &&
- f2fs_test_and_clear_bit(offset, se->discard_map))
- sbi->discard_blks++;
+ if (!f2fs_test_bit(offset, se->ckpt_valid_map))
+ se->ckpt_valid_blocks += del;
+ } else {
+ del = update_sit_entry_for_release(sbi, se, blkaddr, offset, del);
}
- if (!f2fs_test_bit(offset, se->ckpt_valid_map))
- se->ckpt_valid_blocks += del;
__mark_sit_entry_dirty(sbi, segno);
--
2.25.1
* [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks()
2024-10-16 5:27 [RFC PATCH 0/2] Speed up f2fs truncate Yi Sun
2024-10-16 5:27 ` [RFC PATCH 1/2] f2fs: introduce update_sit_entry_for_release() Yi Sun
@ 2024-10-16 5:27 ` Yi Sun
2024-10-16 16:04 ` Jaegeuk Kim
2024-10-17 1:40 ` Chao Yu
2024-10-28 17:40 ` [f2fs-dev] [RFC PATCH 0/2] Speed up f2fs truncate patchwork-bot+f2fs
2 siblings, 2 replies; 9+ messages in thread
From: Yi Sun @ 2024-10-16 5:27 UTC (permalink / raw)
To: chao
Cc: jaegeuk, linux-f2fs-devel, linux-kernel, yi.sun, sunyibuaa,
niuzhiguo84, hao_hao.wang, ke.wang
When doing truncate, consecutive blocks in the same segment
can be processed at the same time, so the efficiency of
truncate can be improved.
Add f2fs_invalidate_compress_pages_range() only for the
truncate path.
Add check_f2fs_invalidate_consecutive_blocks(), also only for
truncate, to determine whether the blocks are contiguous and
belong to the same segment.
Signed-off-by: Yi Sun <yi.sun@unisoc.com>
---
fs/f2fs/compress.c | 14 ++++++++++++++
fs/f2fs/f2fs.h | 5 +++++
fs/f2fs/file.c | 34 +++++++++++++++++++++++++++++++++-
fs/f2fs/segment.c | 25 +++++++++++++++++++++++++
4 files changed, 77 insertions(+), 1 deletion(-)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 7f26440e8595..70929a87e9bf 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -2014,6 +2014,20 @@ void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino)
} while (index < end);
}
+void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi,
+ block_t blkaddr, int cnt)
+{
+ if (!sbi->compress_inode)
+ return;
+
+ if (cnt < 1) {
+ f2fs_bug_on(sbi, 1);
+ cnt = 1;
+ }
+
+ invalidate_mapping_pages(COMPRESS_MAPPING(sbi), blkaddr, blkaddr + cnt - 1);
+}
+
int f2fs_init_compress_inode(struct f2fs_sb_info *sbi)
{
struct inode *inode;
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index ce00cb546f4a..99767f35678f 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -3716,6 +3716,7 @@ int f2fs_create_flush_cmd_control(struct f2fs_sb_info *sbi);
int f2fs_flush_device_cache(struct f2fs_sb_info *sbi);
void f2fs_destroy_flush_cmd_control(struct f2fs_sb_info *sbi, bool free);
void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr);
+void f2fs_invalidate_consecutive_blocks(struct f2fs_sb_info *sbi, block_t addr, int cnt);
bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr);
int f2fs_start_discard_thread(struct f2fs_sb_info *sbi);
void f2fs_drop_discard_cmd(struct f2fs_sb_info *sbi);
@@ -4375,6 +4376,8 @@ void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
block_t blkaddr);
void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino);
+void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi,
+ block_t blkaddr, int cnt);
#define inc_compr_inode_stat(inode) \
do { \
struct f2fs_sb_info *sbi = F2FS_I_SB(inode); \
@@ -4432,6 +4435,8 @@ static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi,
struct page *page, block_t blkaddr) { return false; }
static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi,
nid_t ino) { }
+static inline void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi,
+ block_t blkaddr, int cnt) { }
#define inc_compr_inode_stat(inode) do { } while (0)
static inline int f2fs_is_compressed_cluster(
struct inode *inode,
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 7057efa8ec17..634691e3b5f1 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -612,6 +612,18 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
return finish_preallocate_blocks(inode);
}
+static bool check_f2fs_invalidate_consecutive_blocks(struct f2fs_sb_info *sbi,
+ block_t blkaddr1, block_t blkaddr2)
+{
+ if (blkaddr2 - blkaddr1 != 1)
+ return false;
+
+ if (GET_SEGNO(sbi, blkaddr1) != GET_SEGNO(sbi, blkaddr2))
+ return false;
+
+ return true;
+}
+
void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
{
struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
@@ -621,6 +633,9 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
int cluster_index = 0, valid_blocks = 0;
int cluster_size = F2FS_I(dn->inode)->i_cluster_size;
bool released = !atomic_read(&F2FS_I(dn->inode)->i_compr_blocks);
+ block_t con_start;
+ bool run_invalid = true;
+ int con_cnt = 1;
addr = get_dnode_addr(dn->inode, dn->node_page) + ofs;
@@ -652,7 +667,24 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count)
valid_blocks++;
}
- f2fs_invalidate_blocks(sbi, blkaddr);
+ if (run_invalid)
+ con_start = blkaddr;
+
+ if (count > 1 &&
+ check_f2fs_invalidate_consecutive_blocks(sbi, blkaddr,
+ le32_to_cpu(*(addr + 1)))) {
+ run_invalid = false;
+
+ if (con_cnt++ == 1)
+ con_start = blkaddr;
+ } else {
+ run_invalid = true;
+ }
+
+ if (run_invalid) {
+ f2fs_invalidate_consecutive_blocks(sbi, con_start, con_cnt);
+ con_cnt = 1;
+ }
if (!released || blkaddr != COMPRESS_ADDR)
nr_free++;
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index f118faf36d35..edb8a78985ba 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -2570,6 +2570,31 @@ void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
up_write(&sit_i->sentry_lock);
}
+void f2fs_invalidate_consecutive_blocks(struct f2fs_sb_info *sbi, block_t addr, int cnt)
+{
+ unsigned int segno = GET_SEGNO(sbi, addr);
+ unsigned int segno2 = GET_SEGNO(sbi, addr + cnt - 1);
+ struct sit_info *sit_i = SIT_I(sbi);
+
+ f2fs_bug_on(sbi, addr == NULL_ADDR || segno != segno2);
+ if (addr == NEW_ADDR || addr == COMPRESS_ADDR)
+ return;
+
+ f2fs_truncate_meta_inode_pages(sbi, addr, cnt);
+ f2fs_invalidate_compress_pages_range(sbi, addr, cnt);
+
+ /* add it into sit main buffer */
+ down_write(&sit_i->sentry_lock);
+
+ update_segment_mtime(sbi, addr, 0);
+ update_sit_entry(sbi, addr, -cnt);
+
+ /* add it into dirty seglist */
+ locate_dirty_segment(sbi, segno);
+
+ up_write(&sit_i->sentry_lock);
+}
+
bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr)
{
struct sit_info *sit_i = SIT_I(sbi);
--
2.25.1
* Re: [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks()
2024-10-16 5:27 ` [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks() Yi Sun
@ 2024-10-16 16:04 ` Jaegeuk Kim
2024-10-17 1:40 ` Chao Yu
1 sibling, 0 replies; 9+ messages in thread
From: Jaegeuk Kim @ 2024-10-16 16:04 UTC (permalink / raw)
To: Yi Sun
Cc: chao, linux-f2fs-devel, linux-kernel, sunyibuaa, niuzhiguo84,
hao_hao.wang, ke.wang
2573 void f2fs_invalidate_consecutive_blocks(struct f2fs_sb_info *sbi, block_t addr, int cnt)
2574 {
2575 unsigned int segno = GET_SEGNO(sbi, addr);
2576 unsigned int segno2 = GET_SEGNO(sbi, addr + cnt - 1);
2577 struct sit_info *sit_i = SIT_I(sbi);
2578
2579 f2fs_bug_on(sbi, addr == NULL_ADDR || segno != segno2);
This hits a panic here while running fsstress.
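The condition can be modeled in user space to see which ranges trip it
(BLKS_PER_SEG and the plain division are stand-ins for the real
GET_SEGNO() geometry; NULL_ADDR is 0): any range whose last block lands
in a different segment than its first fires the assertion.

```c
#include <assert.h>

#define BLKS_PER_SEG 512U /* hypothetical; models GET_SEGNO() as a division */

/* Returns nonzero when the f2fs_bug_on() condition above would fire. */
static int bug_on_fires(unsigned int addr, int cnt)
{
	unsigned int segno  = addr / BLKS_PER_SEG;
	unsigned int segno2 = (addr + (unsigned int)cnt - 1) / BLKS_PER_SEG;

	return addr == 0 /* NULL_ADDR */ || segno != segno2;
}
```

So any caller that lets a run cross a segment boundary is enough to
hit this.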
On 10/16, Yi Sun wrote:
> When doing truncate, consecutive blocks in the same segment
> can be processed at the same time. So that the efficiency of
> doing truncate can be improved.
>
> Add f2fs_invalidate_compress_pages_range() only for doing truncate.
> Add check_f2fs_invalidate_consecutive_blocks() only for doing
> truncate and to determine whether the blocks are continuous and
> belong to the same segment.
>
> Signed-off-by: Yi Sun <yi.sun@unisoc.com>
>
> [rest of quoted patch snipped]
* Re: [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks()
2024-10-16 5:27 ` [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks() Yi Sun
2024-10-16 16:04 ` Jaegeuk Kim
@ 2024-10-17 1:40 ` Chao Yu
2024-10-24 9:54 ` yi sun
1 sibling, 1 reply; 9+ messages in thread
From: Chao Yu @ 2024-10-17 1:40 UTC (permalink / raw)
To: Yi Sun
Cc: Chao Yu, jaegeuk, linux-f2fs-devel, linux-kernel, sunyibuaa,
niuzhiguo84, hao_hao.wang, ke.wang
On 2024/10/16 13:27, Yi Sun wrote:
> When doing truncate, consecutive blocks in the same segment
> can be processed at the same time. So that the efficiency of
> doing truncate can be improved.
>
> Add f2fs_invalidate_compress_pages_range() only for doing truncate.
> Add check_f2fs_invalidate_consecutive_blocks() only for doing
> truncate and to determine whether the blocks are continuous and
> belong to the same segment.
>
> Signed-off-by: Yi Sun <yi.sun@unisoc.com>
> ---
> fs/f2fs/compress.c | 14 ++++++++++++++
> fs/f2fs/f2fs.h | 5 +++++
> fs/f2fs/file.c | 34 +++++++++++++++++++++++++++++++++-
> fs/f2fs/segment.c | 25 +++++++++++++++++++++++++
> 4 files changed, 77 insertions(+), 1 deletion(-)
>
> [compress.c, f2fs.h and file.c hunks snipped]
>
> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> index f118faf36d35..edb8a78985ba 100644
> --- a/fs/f2fs/segment.c
> +++ b/fs/f2fs/segment.c
> @@ -2570,6 +2570,31 @@ void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr)
> up_write(&sit_i->sentry_lock);
> }
>
> +void f2fs_invalidate_consecutive_blocks(struct f2fs_sb_info *sbi, block_t addr, int cnt)
> +{
> + unsigned int segno = GET_SEGNO(sbi, addr);
> + unsigned int segno2 = GET_SEGNO(sbi, addr + cnt - 1);
> + struct sit_info *sit_i = SIT_I(sbi);
> +
> + f2fs_bug_on(sbi, addr == NULL_ADDR || segno != segno2);
> + if (addr == NEW_ADDR || addr == COMPRESS_ADDR)
> + return;
> +
> + f2fs_truncate_meta_inode_pages(sbi, addr, cnt);
> + f2fs_invalidate_compress_pages_range(sbi, addr, cnt);
> +
> + /* add it into sit main buffer */
> + down_write(&sit_i->sentry_lock);
> +
> + update_segment_mtime(sbi, addr, 0);
> + update_sit_entry(sbi, addr, -cnt);
> +
> + /* add it into dirty seglist */
> + locate_dirty_segment(sbi, segno);
> +
> + up_write(&sit_i->sentry_lock);
I think this patchset needs some cleanup; what about expanding
f2fs_invalidate_blocks() to support invalidating a block address extent?
Something like this:
void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t blkaddr,
unsigned int len)
{
struct sit_info *sit_i = SIT_I(sbi);
int i;
/* TODO: do sanity check on blkaddr extent */
down_write(&sit_i->sentry_lock);
/* TODO: expand f2fs_invalidate_internal_cache() to invalidate blkaddr extent */
f2fs_invalidate_internal_cache(sbi, blkaddr, len);
for (i = 0; i < len; i++) {
update_segment_mtime(sbi, blkaddr + i, 0);
update_sit_entry(sbi, blkaddr + i, -1);
/* add it into dirty seglist */
locate_dirty_segment(sbi, GET_SEGNO(sbi, blkaddr + i));
}
up_write(&sit_i->sentry_lock);
}
Thanks,
> +}
> +
> bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr)
> {
> struct sit_info *sit_i = SIT_I(sbi);
* Re: [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks()
2024-10-17 1:40 ` Chao Yu
@ 2024-10-24 9:54 ` yi sun
2024-10-24 10:26 ` Chao Yu
0 siblings, 1 reply; 9+ messages in thread
From: yi sun @ 2024-10-24 9:54 UTC (permalink / raw)
To: Chao Yu
Cc: Yi Sun, jaegeuk, linux-f2fs-devel, yi sun, linux-kernel,
niuzhiguo84, hao_hao.wang, ke.wang
On Thu, Oct 17, 2024 at 9:40 AM Chao Yu <chao@kernel.org> wrote:
>
> On 2024/10/16 13:27, Yi Sun wrote:
> > [quoted patch snipped]
>
> I think it needs to clean up this patchset, what about expanding
> f2fs_invalidate_blocks() to support invalidating block address extent?
>
> Something like this:
>
> void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t blkaddr,
> unsigned int len)
> {
> struct sit_info *sit_i = SIT_I(sbi);
> int i;
>
> /* TODO: do sanity check on blkaddr extent */
>
> down_write(&sit_i->sentry_lock);
>
> /* TODO: expand f2fs_invalidate_internal_cache() to invalidate blkaddr extent */
> f2fs_invalidate_internal_cache(sbi, blkaddr, len);
>
> for (i = 0; i < len; i++) {
> update_segment_mtime(sbi, blkaddr, 0);
> update_sit_entry(sbi, blkaddr, -1);
>
> /* add it into dirty seglist */
> locate_dirty_segment(sbi, GET_SEGNO(sbi, blkaddr));
> }
>
> up_write(&sit_i->sentry_lock);
> }
>
> Thanks,
>
Hi Chao,
The code structure you proposed is very clear.
I retested with it and truncate did get faster, but the
improvement was smaller than with my original approach.
So it might be better to use the following structure instead:
void f2fs_invalidate_blocks(... , len)
{
down_write();
// Process in segments instead of blocks.
for (i = 0; i < segment_num; i++) {
update_segment_mtime();
update_sit_entry();
/* add it into dirty seglist */
locate_dirty_segment();
}
up_write();
}
Time comparison of rm:
                original    optimized    improvement
segment unit:   7.17s       3.27s        54.39%
block unit:     7.17s       5.12s        28.6%
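As a sanity check of the grouping idea above, here is a userspace C sketch (not f2fs code; BLKS_PER_SEG and GET_SEGNO() are simplified stand-ins for the real sbi-derived values) that splits a list of block addresses into runs of consecutive addresses that never cross a segment boundary, so each run would need only one sentry_lock round trip:

```c
#include <assert.h>

#define BLKS_PER_SEG 512u			/* toy value; the real one comes from sbi */
#define GET_SEGNO(addr) ((addr) / BLKS_PER_SEG)	/* simplified stand-in */

typedef unsigned int block_t;

static void emit_noop(block_t start, int len)
{
	(void)start;
	(void)len;
}

/*
 * Walk the block addresses of a dnode and call emit() once per run of
 * consecutive addresses inside a single segment, instead of once per
 * block.  Returns how many runs (i.e. lock round trips) were needed.
 */
static int invalidate_grouped(const block_t *addrs, int count,
			      void (*emit)(block_t start, int len))
{
	int runs = 0, len = 1;
	block_t start;
	int i;

	if (count <= 0)
		return 0;

	start = addrs[0];
	for (i = 1; i < count; i++) {
		if (addrs[i] == addrs[i - 1] + 1 &&
		    GET_SEGNO(addrs[i]) == GET_SEGNO(start)) {
			len++;			/* still consecutive, same segment */
			continue;
		}
		emit(start, len);		/* flush the finished run */
		runs++;
		start = addrs[i];
		len = 1;
	}
	emit(start, len);			/* flush the last run */
	return runs + 1;
}
```

Note how 510..513 splits at the 511/512 segment boundary even though the addresses are consecutive, which is exactly the condition check_f2fs_invalidate_consecutive_blocks() enforces.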
New patches will be sent out by email after they are sorted out.
Thank you for your valuable suggestions.
> > +}
> > +
> > bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr)
> > {
> > struct sit_info *sit_i = SIT_I(sbi);
>
* Re: [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks()
2024-10-24 9:54 ` yi sun
@ 2024-10-24 10:26 ` Chao Yu
2024-10-29 3:02 ` yi sun
0 siblings, 1 reply; 9+ messages in thread
From: Chao Yu @ 2024-10-24 10:26 UTC (permalink / raw)
To: yi sun
Cc: Chao Yu, Yi Sun, jaegeuk, linux-f2fs-devel, linux-kernel,
niuzhiguo84, hao_hao.wang, ke.wang
On 2024/10/24 17:54, yi sun wrote:
> On Thu, Oct 17, 2024 at 9:40 AM Chao Yu <chao@kernel.org> wrote:
>>
>> On 2024/10/16 13:27, Yi Sun wrote:
>>> When doing truncate, consecutive blocks in the same segment
>>> can be processed at the same time. So that the efficiency of
>>> doing truncate can be improved.
>>>
>>> Add f2fs_invalidate_compress_pages_range() only for doing truncate.
>>> Add check_f2fs_invalidate_consecutive_blocks() only for doing
>>> truncate and to determine whether the blocks are continuous and
>>> belong to the same segment.
>>>
>>> Signed-off-by: Yi Sun <yi.sun@unisoc.com>
>>> ---
>>> fs/f2fs/compress.c | 14 ++++++++++++++
>>> fs/f2fs/f2fs.h | 5 +++++
>>> fs/f2fs/file.c | 34 +++++++++++++++++++++++++++++++++-
>>> fs/f2fs/segment.c | 25 +++++++++++++++++++++++++
>>> 4 files changed, 77 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
>>> index 7f26440e8595..70929a87e9bf 100644
>>> --- a/fs/f2fs/compress.c
>>> +++ b/fs/f2fs/compress.c
>>> @@ -2014,6 +2014,20 @@ void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino)
>>> } while (index < end);
>>> }
>>>
>>> +void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi,
>>> + block_t blkaddr, int cnt)
>>> +{
>>> + if (!sbi->compress_inode)
>>> + return;
>>> +
>>> + if (cnt < 1) {
>>> + f2fs_bug_on(sbi, 1);
>>> + cnt = 1;
>>> + }
>>> +
>>> + invalidate_mapping_pages(COMPRESS_MAPPING(sbi), blkaddr, blkaddr + cnt - 1);
>>> +}
>>> +
>>> int f2fs_init_compress_inode(struct f2fs_sb_info *sbi)
>>> {
>>> struct inode *inode;
>>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
>>> index ce00cb546f4a..99767f35678f 100644
>>> --- a/fs/f2fs/f2fs.h
>>> +++ b/fs/f2fs/f2fs.h
>>> @@ -3716,6 +3716,7 @@ int f2fs_create_flush_cmd_control(struct f2fs_sb_info *sbi);
>>> int f2fs_flush_device_cache(struct f2fs_sb_info *sbi);
>>> void f2fs_destroy_flush_cmd_control(struct f2fs_sb_info *sbi, bool free);
>>> void f2fs_invalidate_blocks(struct f2fs_sb_info *sbi, block_t addr);
>>> +void f2fs_invalidate_consecutive_blocks(struct f2fs_sb_info *sbi, block_t addr, int cnt);
>>> bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr);
>>> int f2fs_start_discard_thread(struct f2fs_sb_info *sbi);
>>> void f2fs_drop_discard_cmd(struct f2fs_sb_info *sbi);
>>> @@ -4375,6 +4376,8 @@ void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
>>> bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
>>> block_t blkaddr);
>>> void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino);
>>> +void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi,
>>> + block_t blkaddr, int cnt);
>>> #define inc_compr_inode_stat(inode) \
>>> do { \
>>> struct f2fs_sb_info *sbi = F2FS_I_SB(inode); \
>>> @@ -4432,6 +4435,8 @@ static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi,
>>> struct page *page, block_t blkaddr) { return false; }
>>> static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi,
>>> nid_t ino) { }
>>> +static inline void f2fs_invalidate_compress_pages_range(struct f2fs_sb_info *sbi,
>>> + block_t blkaddr, int cnt) { }
>>> #define inc_compr_inode_stat(inode) do { } while (0)
>>> static inline int f2fs_is_compressed_cluster(
>>> struct inode *inode,
>>> [fs/f2fs/file.c and fs/f2fs/segment.c hunks snipped; they are quoted
>>> in full earlier in the thread]
>> I think it needs to clean up this patchset, what about expanding
>> f2fs_invalidate_blocks() to support invalidating block address extent?
>>
>> Something like this: [f2fs_invalidate_blocks(sbi, blkaddr, len) sketch
>> snipped; quoted in full earlier in the thread]
>>
>
Hi Yi,
> Hi Chao,
> The code structure you proposed is very good and very clear.
> I retested using this code structure and found that the speed
> of doing truncate also improved, but the improvement was smaller.
>
> So it might be better to use the following code structure.
> void f2fs_invalidate_blocks(... , len)
> {
> down_write();
> // Process in segments instead of blocks.
> for (i = 0; i < segment_num; i++) {
> update_segment_mtime();
> update_sit_entry();
Ah, yes, it can merge more operations and do it w/ segment granularity.
Can you please try:
for (j = start; j < end; j++)
update_sit_entry();
Maybe that can eliminate the change to update_sit_entry().
>
> /* add it into dirty seglist */
> locate_dirty_segment();
> }
> up_write();
> }
>
> > Time comparison of rm:
> >                 original    optimized    improvement
> > segment unit:   7.17s       3.27s        54.39%
> > block unit:     7.17s       5.12s        28.6%
Thanks for the test and feedback.
Thanks,
>
> New patches will be sent out by email after they are sorted out.
> Thank you for your valuable suggestions.
>
>>> +}
>>> +
>>> bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr)
>>> {
>>> struct sit_info *sit_i = SIT_I(sbi);
>>
* Re: [f2fs-dev] [RFC PATCH 0/2] Speed up f2fs truncate
2024-10-16 5:27 [RFC PATCH 0/2] Speed up f2fs truncate Yi Sun
2024-10-16 5:27 ` [RFC PATCH 1/2] f2fs: introduce update_sit_entry_for_release() Yi Sun
2024-10-16 5:27 ` [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks() Yi Sun
@ 2024-10-28 17:40 ` patchwork-bot+f2fs
2 siblings, 0 replies; 9+ messages in thread
From: patchwork-bot+f2fs @ 2024-10-28 17:40 UTC (permalink / raw)
To: Yi Sun
Cc: chao, ke.wang, linux-kernel, sunyibuaa, jaegeuk, linux-f2fs-devel,
hao_hao.wang
Hello:
This series was applied to jaegeuk/f2fs.git (dev)
by Jaegeuk Kim <jaegeuk@kernel.org>:
On Wed, 16 Oct 2024 13:27:56 +0800 you wrote:
> Deleting large files is time-consuming, and a large part
> of the time is spent in f2fs_invalidate_blocks()
> ->down_write(sit_info->sentry_lock) and up_write().
>
> If some blocks are continuous and belong to the same segment,
> we can process these blocks at the same time. This can reduce
> the number of calls to the down_write() and the up_write(),
> thereby improving the overall speed of doing truncate.
>
> [...]
Here is the summary with links:
- [f2fs-dev,RFC,1/2] f2fs: introduce update_sit_entry_for_release()
https://git.kernel.org/jaegeuk/f2fs/c/af68d9b481ac
- [f2fs-dev,RFC,2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks()
(no matching commit)
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
* Re: [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks()
2024-10-24 10:26 ` Chao Yu
@ 2024-10-29 3:02 ` yi sun
0 siblings, 0 replies; 9+ messages in thread
From: yi sun @ 2024-10-29 3:02 UTC (permalink / raw)
To: Chao Yu
Cc: Yi Sun, jaegeuk, linux-f2fs-devel, linux-kernel, niuzhiguo84,
hao_hao.wang, ke.wang
On Thu, Oct 24, 2024 at 6:26 PM Chao Yu <chao@kernel.org> wrote:
>
> On 2024/10/24 17:54, yi sun wrote:
> > On Thu, Oct 17, 2024 at 9:40 AM Chao Yu <chao@kernel.org> wrote:
> >>
> >> On 2024/10/16 13:27, Yi Sun wrote:
> >>> When doing truncate, consecutive blocks in the same segment
> >>> can be processed at the same time. [...]
> >>
> >> I think it needs to clean up this patchset, what about expanding
> >> f2fs_invalidate_blocks() to support invalidating block address extent?
> >> [...]
> >
>
> Hi Yi,
>
> > Hi Chao,
> > The code structure you proposed is very good and very clear.
> > I retested using this code structure and found that the speed
> > of doing truncate also improved, but the improvement was smaller.
> >
> > So it might be better to use the following code structure.
> > void f2fs_invalidate_blocks(... , len)
> > {
> > down_write();
> > // Process in segments instead of blocks.
> > for (i = 0; i < segment_num; i++) {
> > update_segment_mtime();
> > update_sit_entry();
>
> Ah, yes, it can merge more operations and do it w/ segment granularity.
>
> Can you please try:
>
> for (j = start; j < end; j++)
> update_sit_entry();
>
> Maybe it can eliminate change in update_sit_entry().
>
> >
> > /* add it into dirty seglist */
> > locate_dirty_segment();
> > }
> > up_write();
> > }
> >
> > Time comparison of rm:
> >                 original    optimized    improvement
> > segment unit:   7.17s       3.27s        54.39%
> > block unit:     7.17s       5.12s        28.6%
>
> Thanks for the test and feedback.
>
> Thanks,
>
Hi Chao,
I retested like this:
Test1 (update_sit_entry() left unchanged, one block per call):
void f2fs_invalidate_blocks(... , len) {
down_write();
time1 = ktime_get();
for (i = 0; i < segment_num; i++) {
update_segment_mtime();
for() {
update_sit_entry(...,-1);
}
locate_dirty_segment();
}
time2 = ktime_get();
up_write();
}
Test2 (update_sit_entry() changed to take a block count):
void f2fs_invalidate_blocks(... , len) {
down_write();
time1 = ktime_get();
for (i = 0; i < segment_num; i++) {
update_segment_mtime();
update_sit_entry();
locate_dirty_segment();
}
time2 = ktime_get();
up_write();
}
Result (sum of (time2 - time1) over all calls):
              test1           test2          improvement
              963807433 ns    209316903 ns   78.3%
Perhaps it would be more beneficial to allow the update_sit_entry() function
to handle multiple consecutive blocks.
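The gap between Test1 and Test2 is essentially the cost of touching the SIT bookkeeping once per block versus once per run. A toy userspace model of the valid-block counter (hypothetical field names, not the real struct seg_entry) shows the two update shapes are equivalent in effect:

```c
#include <assert.h>

/* Toy model of one SIT entry; field names are illustrative only. */
struct toy_sit_entry {
	int valid_blocks;	/* blocks still valid in this segment */
	int touches;		/* how many bookkeeping updates were done */
};

/* Test1 shape: per-block updates, as the unchanged update_sit_entry() does. */
static void dec_per_block(struct toy_sit_entry *se, int cnt)
{
	int i;

	for (i = 0; i < cnt; i++) {
		se->valid_blocks -= 1;
		se->touches++;
	}
}

/* Test2 shape: one batched update for a whole run inside one segment. */
static void dec_batched(struct toy_sit_entry *se, int cnt)
{
	se->valid_blocks -= cnt;
	se->touches++;
}
```

Both leave the counter in the same state, but the batched form does the surrounding per-update work (bitmap and journal manipulation, dirty accounting) once instead of cnt times, which is consistent with the 78.3% gap measured above.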
> > [...]
end of thread, other threads:[~2024-10-29 3:02 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-10-16 5:27 [RFC PATCH 0/2] Speed up f2fs truncate Yi Sun
2024-10-16 5:27 ` [RFC PATCH 1/2] f2fs: introduce update_sit_entry_for_release() Yi Sun
2024-10-16 5:27 ` [RFC PATCH 2/2] f2fs: introduce f2fs_invalidate_consecutive_blocks() Yi Sun
2024-10-16 16:04 ` Jaegeuk Kim
2024-10-17 1:40 ` Chao Yu
2024-10-24 9:54 ` yi sun
2024-10-24 10:26 ` Chao Yu
2024-10-29 3:02 ` yi sun
2024-10-28 17:40 ` [f2fs-dev] [RFC PATCH 0/2] Speed up f2fs truncate patchwork-bot+f2fs