* [PATCH 0/2] btrfs: reduce extent map lookup overhead for data write
@ 2024-08-08 6:05 Qu Wenruo
From: Qu Wenruo @ 2024-08-08 6:05 UTC (permalink / raw)
To: linux-btrfs
Unlike the data read path, which uses a cached extent map to reduce overhead,
the data write path always does a full extent map lookup, no matter what.
So this patchset will improve the situation by:
- Move em_cached into bio_ctrl
Since their lifespans are the same, it's a perfect match.
- Make the data write path use bio_ctrl::em_cached
Unfortunately, since my last relocation I no longer have any dedicated
storage attached to my VMs (my laptop has only one NVMe slot, and my main
aarch64 workhorse board also has only one NVMe attached).
So there is no benchmark yet, and any benchmarking would be greatly appreciated.
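The caching pattern the series moves into bio_ctrl can be sketched in
userspace C roughly as follows (hypothetical struct and helper names, not
the actual kernel code; the real implementation uses refcounted struct
extent_map via __get_extent_map() and free_extent_map()):

```c
#include <stdlib.h>

/* Toy stand-in for struct extent_map: a refcounted range. */
struct em {
	unsigned long start, len;
	int refs;
};

static int slow_lookups;	/* counts how often the "expensive" path ran */

/* Stand-in for the full lookup (btrfs_get_extent() in the real code):
 * returns a map covering an aligned 8-sector chunk, one ref for the caller. */
static struct em *slow_lookup(unsigned long start)
{
	struct em *em = malloc(sizeof(*em));

	em->start = start & ~7UL;
	em->len = 8;
	em->refs = 1;
	slow_lookups++;
	return em;
}

static void put_em(struct em *em)
{
	if (em && --em->refs == 0)
		free(em);
}

/* The __get_extent_map()-style pattern: reuse *cached while it still
 * covers @start, otherwise drop it and do the full lookup. */
static struct em *get_em_cached(struct em **cached, unsigned long start)
{
	if (*cached && start >= (*cached)->start &&
	    start < (*cached)->start + (*cached)->len) {
		(*cached)->refs++;	/* extra ref returned to the caller */
		return *cached;
	}
	put_em(*cached);		/* drop the stale cached map, if any */
	*cached = slow_lookup(start);
	(*cached)->refs++;		/* one ref held by the cache, one returned */
	return *cached;
}
```

Iterating 16 consecutive sectors through get_em_cached() does only 2 slow
lookups (one per map), instead of 16 without the cache.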
Qu Wenruo (2):
btrfs: introduce extent_map::em_cached member
btrfs: utilize cached extent map for data writeback
fs/btrfs/extent_io.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
--
2.45.2
* [PATCH 1/2] btrfs: introduce extent_map::em_cached member
From: Qu Wenruo @ 2024-08-08 6:05 UTC (permalink / raw)
To: linux-btrfs
For data reads, we always pass an extent_map pointer as @em_cached to
btrfs_do_readpage().
And that @em_cached pointer has the same lifespan as the @bio_ctrl we
pass in, so it's a perfect match to move @em_cached into @bio_ctrl.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/extent_io.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 822e2bf8bc99..d4ad98488c03 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -101,6 +101,8 @@ struct btrfs_bio_ctrl {
blk_opf_t opf;
btrfs_bio_end_io_t end_io_func;
struct writeback_control *wbc;
+ /* For read/write extent map cache. */
+ struct extent_map *em_cached;
};
static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl)
@@ -1003,8 +1005,8 @@ static struct extent_map *__get_extent_map(struct inode *inode,
* XXX JDM: This needs looking at to ensure proper page locking
* return 0 on success, otherwise return error
*/
-static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
- struct btrfs_bio_ctrl *bio_ctrl, u64 *prev_em_start)
+static int btrfs_do_readpage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl,
+ u64 *prev_em_start)
{
struct inode *inode = folio->mapping->host;
struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
@@ -1052,7 +1054,7 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
break;
}
em = __get_extent_map(inode, folio, cur, end - cur + 1,
- em_cached);
+ &bio_ctrl->em_cached);
if (IS_ERR(em)) {
unlock_extent(tree, cur, end, NULL);
end_folio_read(folio, false, cur, end + 1 - cur);
@@ -1160,13 +1162,12 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
u64 start = folio_pos(folio);
u64 end = start + folio_size(folio) - 1;
struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ };
- struct extent_map *em_cached = NULL;
int ret;
btrfs_lock_and_flush_ordered_range(inode, start, end, NULL);
- ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
- free_extent_map(em_cached);
+ ret = btrfs_do_readpage(folio, &bio_ctrl, NULL);
+ free_extent_map(bio_ctrl.em_cached);
/*
* If btrfs_do_readpage() failed we will want to submit the assembled
@@ -2349,16 +2350,14 @@ void btrfs_readahead(struct readahead_control *rac)
struct folio *folio;
u64 start = readahead_pos(rac);
u64 end = start + readahead_length(rac) - 1;
- struct extent_map *em_cached = NULL;
u64 prev_em_start = (u64)-1;
btrfs_lock_and_flush_ordered_range(inode, start, end, NULL);
while ((folio = readahead_folio(rac)) != NULL)
- btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
+ btrfs_do_readpage(folio, &bio_ctrl, &prev_em_start);
- if (em_cached)
- free_extent_map(em_cached);
+ free_extent_map(bio_ctrl.em_cached);
submit_one_bio(&bio_ctrl);
}
--
2.45.2
* [PATCH 2/2] btrfs: utilize cached extent map for data writeback
From: Qu Wenruo @ 2024-08-08 6:06 UTC (permalink / raw)
To: linux-btrfs
Unlike the data read path, the data write path never really utilizes the
cached extent_map.
This means we will do the costly extent map lookup for each sector we
need to write back.
Change it to utilize the same function, __get_extent_map(), just like
the data read path, to reduce the overhead of extent map lookup.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/extent_io.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index d4ad98488c03..9492cd9d4f04 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1357,7 +1357,8 @@ static int submit_one_sector(struct btrfs_inode *inode,
/* @filepos >= i_size case should be handled by the caller. */
ASSERT(filepos < i_size);
- em = btrfs_get_extent(inode, NULL, filepos, sectorsize);
+ em = __get_extent_map(&inode->vfs_inode, NULL, filepos, sectorsize,
+ &bio_ctrl->em_cached);
if (IS_ERR(em))
return PTR_ERR_OR_ZERO(em);
@@ -2320,6 +2321,7 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
cur = cur_end + 1;
}
+ free_extent_map(bio_ctrl.em_cached);
submit_write_bio(&bio_ctrl, found_error ? ret : 0);
}
@@ -2338,6 +2340,7 @@ int btrfs_writepages(struct address_space *mapping, struct writeback_control *wb
*/
btrfs_zoned_data_reloc_lock(BTRFS_I(inode));
ret = extent_write_cache_pages(mapping, &bio_ctrl);
+ free_extent_map(bio_ctrl.em_cached);
submit_write_bio(&bio_ctrl, ret);
btrfs_zoned_data_reloc_unlock(BTRFS_I(inode));
return ret;
--
2.45.2
* Re: [PATCH 0/2] btrfs: reduce extent map lookup overhead for data write
From: Filipe Manana @ 2024-08-08 10:16 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Thu, Aug 8, 2024 at 7:06 AM Qu Wenruo <wqu@suse.com> wrote:
>
> Unlike the data read path, which uses a cached extent map to reduce overhead,
> the data write path always does a full extent map lookup, no matter what.
>
> So this patchset will improve the situation by:
>
> - Move em_cached into bio_ctrl
> Since their lifespans are the same, it's a perfect match.
>
> - Make the data write path use bio_ctrl::em_cached
>
> Unfortunately, since my last relocation I no longer have any dedicated
> storage attached to my VMs (my laptop has only one NVMe slot, and my main
> aarch64 workhorse board also has only one NVMe attached).
>
> So there is no benchmark yet, and any benchmarking would be greatly appreciated.
You don't need to test with dedicated NVMe or any dedicated hardware.
Use a VM with a null block device.
Make the test scenario do a 1M write into a file that has a few
thousand extent maps loaded in the inode's io tree, then measure the
total time taken by submit_one_sector() calls with a bpftrace script.
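A minimal bpftrace sketch along those lines (an assumption-laden example:
it requires root, a kernel with kprobes, and that submit_one_sector() is
not inlined in the running build; probe names may need adjusting):

```shell
# Sum the wall time spent inside submit_one_sector() and count calls.
bpftrace -e '
kprobe:submit_one_sector { @start[tid] = nsecs; }
kretprobe:submit_one_sector /@start[tid]/ {
	@total_ns = sum(nsecs - @start[tid]);
	@calls = count();
	delete(@start[tid]);
}'
```

Run the 1M write workload while the script is attached, then Ctrl-C to
print the accumulated totals for comparison before and after the patches.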
>
> Qu Wenruo (2):
> btrfs: introduce extent_map::em_cached member
The subject doesn't match the change; it should be btrfs_bio_ctrl::em_cached.
The name is also odd; "cached_em" would be more correct, but given
it's inside a context structure, just "em" would be fine.
> btrfs: utilize cached extent map for data writeback
>
> fs/btrfs/extent_io.c | 24 +++++++++++++-----------
> 1 file changed, 13 insertions(+), 11 deletions(-)
>
> --
> 2.45.2
>
>
* Re: [PATCH 1/2] btrfs: introduce extent_map::em_cached member
From: Filipe Manana @ 2024-08-08 11:36 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Thu, Aug 8, 2024 at 7:06 AM Qu Wenruo <wqu@suse.com> wrote:
>
> For data reads, we always pass an extent_map pointer as @em_cached to
> btrfs_do_readpage().
> And that @em_cached pointer has the same lifespan as the @bio_ctrl we
> pass in, so it's a perfect match to move @em_cached into @bio_ctrl.
>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
> fs/btrfs/extent_io.c | 19 +++++++++----------
> 1 file changed, 9 insertions(+), 10 deletions(-)
>
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 822e2bf8bc99..d4ad98488c03 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -101,6 +101,8 @@ struct btrfs_bio_ctrl {
> blk_opf_t opf;
> btrfs_bio_end_io_t end_io_func;
> struct writeback_control *wbc;
> + /* For read/write extent map cache. */
> + struct extent_map *em_cached;
As mentioned before, this can be named just "em"; it's clear enough
given the comment and the fact that the structure is contextual.
The naming "em_cached" is odd, and would be more logical as
"cached_em" if it were outside the structure.
> };
>
> static void submit_one_bio(struct btrfs_bio_ctrl *bio_ctrl)
> @@ -1003,8 +1005,8 @@ static struct extent_map *__get_extent_map(struct inode *inode,
> * XXX JDM: This needs looking at to ensure proper page locking
> * return 0 on success, otherwise return error
> */
> -static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
> - struct btrfs_bio_ctrl *bio_ctrl, u64 *prev_em_start)
> +static int btrfs_do_readpage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl,
> + u64 *prev_em_start)
> {
> struct inode *inode = folio->mapping->host;
> struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
> @@ -1052,7 +1054,7 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
> break;
> }
> em = __get_extent_map(inode, folio, cur, end - cur + 1,
> - em_cached);
> + &bio_ctrl->em_cached);
> if (IS_ERR(em)) {
> unlock_extent(tree, cur, end, NULL);
> end_folio_read(folio, false, cur, end + 1 - cur);
> @@ -1160,13 +1162,12 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
> u64 start = folio_pos(folio);
> u64 end = start + folio_size(folio) - 1;
> struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ };
> - struct extent_map *em_cached = NULL;
> int ret;
>
> btrfs_lock_and_flush_ordered_range(inode, start, end, NULL);
>
> - ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
> - free_extent_map(em_cached);
> + ret = btrfs_do_readpage(folio, &bio_ctrl, NULL);
> + free_extent_map(bio_ctrl.em_cached);
>
> /*
> * If btrfs_do_readpage() failed we will want to submit the assembled
> @@ -2349,16 +2350,14 @@ void btrfs_readahead(struct readahead_control *rac)
> struct folio *folio;
> u64 start = readahead_pos(rac);
> u64 end = start + readahead_length(rac) - 1;
> - struct extent_map *em_cached = NULL;
> u64 prev_em_start = (u64)-1;
>
> btrfs_lock_and_flush_ordered_range(inode, start, end, NULL);
>
> while ((folio = readahead_folio(rac)) != NULL)
> - btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
> + btrfs_do_readpage(folio, &bio_ctrl, &prev_em_start);
>
> - if (em_cached)
> - free_extent_map(em_cached);
> + free_extent_map(bio_ctrl.em_cached);
So instead of calling free_extent_map() before each submit_one_bio()
call, it would be better to do the call inside submit_one_bio() and
then set bio_ctrl->em_cached to NULL there,
to avoid any use-after-free surprises and leaks in case someone later
forgets to call free_extent_map(). It also removes duplicated code.
Also, as mentioned before, the subject of the patch is incorrect; it
should mention btrfs_bio_ctrl:: instead of extent_map::
Thanks.
> submit_one_bio(&bio_ctrl);
> }
>
> --
> 2.45.2
>
>
* Re: [PATCH 2/2] btrfs: utilize cached extent map for data writeback
From: Filipe Manana @ 2024-08-08 11:39 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Thu, Aug 8, 2024 at 7:06 AM Qu Wenruo <wqu@suse.com> wrote:
>
> Unlike the data read path, the data write path never really utilizes
> the cached extent_map.
>
> This means we will do the costly extent map lookup for each sector we
> need to write back.
>
> Change it to utilize the same function, __get_extent_map(), just like
> the data read path, to reduce the overhead of extent map lookup.
>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
> fs/btrfs/extent_io.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index d4ad98488c03..9492cd9d4f04 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -1357,7 +1357,8 @@ static int submit_one_sector(struct btrfs_inode *inode,
> /* @filepos >= i_size case should be handled by the caller. */
> ASSERT(filepos < i_size);
>
> - em = btrfs_get_extent(inode, NULL, filepos, sectorsize);
> + em = __get_extent_map(&inode->vfs_inode, NULL, filepos, sectorsize,
> + &bio_ctrl->em_cached);
> if (IS_ERR(em))
> return PTR_ERR_OR_ZERO(em);
>
> @@ -2320,6 +2321,7 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
> cur = cur_end + 1;
> }
>
> + free_extent_map(bio_ctrl.em_cached);
> submit_write_bio(&bio_ctrl, found_error ? ret : 0);
> }
>
> @@ -2338,6 +2340,7 @@ int btrfs_writepages(struct address_space *mapping, struct writeback_control *wb
> */
> btrfs_zoned_data_reloc_lock(BTRFS_I(inode));
> ret = extent_write_cache_pages(mapping, &bio_ctrl);
> + free_extent_map(bio_ctrl.em_cached);
> submit_write_bio(&bio_ctrl, ret);
Same comment as on the first patch: instead of duplicating
free_extent_map() calls before submit_write_bio() calls, just move the
free_extent_map() call inside submit_write_bio() and
then also set ->em_cached to NULL after the free. This helps avoid
use-after-free issues and someone forgetting a free call in the
future.
Thanks.
> btrfs_zoned_data_reloc_unlock(BTRFS_I(inode));
> return ret;
> --
> 2.45.2
>
>
* Re: [PATCH 1/2] btrfs: introduce extent_map::em_cached member
From: Qu Wenruo @ 2024-08-08 23:13 UTC (permalink / raw)
To: Filipe Manana, Qu Wenruo; +Cc: linux-btrfs
On 2024/8/8 21:06, Filipe Manana wrote:
> On Thu, Aug 8, 2024 at 7:06 AM Qu Wenruo <wqu@suse.com> wrote:
>>
>> For data reads, we always pass an extent_map pointer as @em_cached to
>> btrfs_do_readpage().
>> And that @em_cached pointer has the same lifespan as the @bio_ctrl we
>> pass in, so it's a perfect match to move @em_cached into @bio_ctrl.
>>
>> Signed-off-by: Qu Wenruo <wqu@suse.com>
>> ---
>> fs/btrfs/extent_io.c | 19 +++++++++----------
>> 1 file changed, 9 insertions(+), 10 deletions(-)
>>
>> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
>> index 822e2bf8bc99..d4ad98488c03 100644
>> --- a/fs/btrfs/extent_io.c
>> +++ b/fs/btrfs/extent_io.c
>> @@ -101,6 +101,8 @@ struct btrfs_bio_ctrl {
>> blk_opf_t opf;
>> btrfs_bio_end_io_t end_io_func;
>> struct writeback_control *wbc;
>> + /* For read/write extent map cache. */
>> + struct extent_map *em_cached;
>
> As mentioned before, this can be named just "em", it's clear enough
> given the comment and the fact the structure is contextual.
> The naming "em_cached" is odd, and would be more logical as
> "cached_em" if it were outside the structure.
>
I'll definitely fix the naming and the subject line.
[...]
>
> So instead of calling free_extent_map() before each submit_one_bio()
> call, it would be better to do the call inside submit_one_bio() and
> then set bio_ctrl->em_cached to NULL there,
> to avoid any use-after-free surprises and leaks in case someone later
> forgets to call free_extent_map(). Also removes duplicated code.
This is not that straightforward. E.g.:
|<- em 1 ->|<- em 2 ->|
| 1 | 2 | 3 | 4 | <- Pages
For page 1, we get em1, so now bio_ctrl->em = em1, and we add the page
to bio_ctrl->bio.
For page 2, the same; we just reuse bio_ctrl->em and add the page to
bio_ctrl->bio.
For page 3, we find bio_ctrl->em no longer matches our range, so we look
up the new em2 and store it in bio_ctrl->em; so far so good.
But in submit_extent_folio(), we find page 3 is not contiguous with
bio_ctrl->bio, thus we have to submit the bio.
Remember that bio_ctrl->em is now em2; if we clear it right now, the
next page, page 4, will need to do another em lookup.
So unfortunately we cannot free the cached em in submit_one_bio(), as at
that point our cached em may already have been switched to the next range.
This is not a huge deal, as we would be doing at most 2 em lookups per
dirty range, but it's still not the ideal 1 em lookup of the existing code.
And considering there are only 4 call sites allocating an on-stack
bio_ctrl, the explicit free_extent_map() call is not that big a deal.
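The scenario above can be sketched as a small userspace model (a
hypothetical illustration with made-up names, not kernel code) that
counts extent map lookups under both policies:

```c
#include <stdbool.h>

/* Model of the 4-page layout above: pages 1-2 map to em1, pages 3-4 map
 * to em2, and the bio has to be submitted while processing page 3
 * because page 3 is not contiguous with the bio. */
static int page_to_em(int page)
{
	return page <= 2 ? 1 : 2;
}

static int count_em_lookups(bool clear_cache_on_submit)
{
	int cached_em = 0;	/* 0 == nothing cached */
	int lookups = 0;

	for (int page = 1; page <= 4; page++) {
		/* Resolve the extent map: a full lookup unless cached. */
		if (cached_em != page_to_em(page)) {
			cached_em = page_to_em(page);
			lookups++;
		}
		/*
		 * Page 3 is not contiguous with the bio, so the bio is
		 * submitted here.  If the submit path also dropped the
		 * cache, the em2 just looked up for page 3 would be
		 * thrown away before page 4 could reuse it.
		 */
		if (page == 3 && clear_cache_on_submit)
			cached_em = 0;
	}
	return lookups;
}
```

Keeping the cache across the submit gives 2 lookups for the dirty range;
clearing it inside the submit path bumps that to 3, matching the
reasoning above.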
Thanks,
Qu
>
> Also, as mentioned before, the subject of the patch is incorrect, it
> should mention btrfs_bio_ctrl:: instead of extent_map::
>
> Thanks.
>
>> submit_one_bio(&bio_ctrl);
>> }
>
>>
>> --
>> 2.45.2
>>
>>
>
* Re: [PATCH 0/2] btrfs: reduce extent map lookup overhead for data write
From: David Sterba @ 2024-08-29 18:00 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Thu, Aug 08, 2024 at 03:35:58PM +0930, Qu Wenruo wrote:
> Unlike the data read path, which uses a cached extent map to reduce overhead,
> the data write path always does a full extent map lookup, no matter what.
>
> So this patchset will improve the situation by:
>
> - Move em_cached into bio_ctrl
> Since their lifespans are the same, it's a perfect match.
>
> - Make the data write path use bio_ctrl::em_cached
>
> Unfortunately, since my last relocation I no longer have any dedicated
> storage attached to my VMs (my laptop has only one NVMe slot, and my main
> aarch64 workhorse board also has only one NVMe attached).
>
> So there is no benchmark yet, and any benchmarking would be greatly appreciated.
>
> Qu Wenruo (2):
> btrfs: introduce extent_map::em_cached member
> btrfs: utilize cached extent map for data writeback
This looks like a good optimization; we're approaching code freeze (next
week), so it would be good to get it merged by then. As it's an
optimization, we can also postpone it if you're busy with other things.
* Re: [PATCH 0/2] btrfs: reduce extent map lookup overhead for data write
From: Qu Wenruo @ 2024-08-29 21:40 UTC (permalink / raw)
To: dsterba, Qu Wenruo; +Cc: linux-btrfs
On 2024/8/30 03:30, David Sterba wrote:
> On Thu, Aug 08, 2024 at 03:35:58PM +0930, Qu Wenruo wrote:
>> Unlike the data read path, which uses a cached extent map to reduce overhead,
>> the data write path always does a full extent map lookup, no matter what.
>>
>> So this patchset will improve the situation by:
>>
>> - Move em_cached into bio_ctrl
>> Since their lifespans are the same, it's a perfect match.
>>
>> - Make the data write path use bio_ctrl::em_cached
>>
>> Unfortunately, since my last relocation I no longer have any dedicated
>> storage attached to my VMs (my laptop has only one NVMe slot, and my main
>> aarch64 workhorse board also has only one NVMe attached).
>>
>> So there is no benchmark yet, and any benchmarking would be greatly appreciated.
>>
>> Qu Wenruo (2):
>> btrfs: introduce extent_map::em_cached member
>> btrfs: utilize cached extent map for data writeback
>
> This looks like a good optimization, we're approaching code freeze (next
> week) so it would be good to get it merged by then. As it's an
> optimization we can also postpone it if you're busy with other things.
>
I'd like to postpone this one.
The biggest problem is that this was not working as expected the last
time I micro-benchmarked the whole thing.
It turns out that, with the new code, the latency of an extent map
lookup is in fact higher than with the existing code.
Maybe it's related to the extent map shrinker doing too good a job.
I'll refresh this series once the em shrinker gets stable and I can
re-benchmark to see whether it's still worth it.
Thanks,
Qu